Android App Development, iOS App Development, mobile UX
By Bill Mak, Atimi Software Inc.

Get in touch with us to find out how Atimi Software can help you build a custom, innovative, enterprise app that offers a superior user experience and stands the test of time.


778-372-2800


info@atimi.com


Android App Development, iOS App Development, Mobile Testing
By Min Ying, Atimi Software Inc.

Automation is not a new topic; most software development QA teams employ it in one way or another, and there is no lack of tools to choose from. On desktop there are the ever-popular Selenium and the HP-backed UFT (formerly QTP). For mobile, Appium and MonkeyTalk are among the more frequently used solutions. All of these tools are fine choices for functional and data-driven tests thanks to their object-oriented nature. However, in my experience, one type of automation is seldom mentioned: visual testing based on OCR (Optical Character Recognition) technology.


What is Visual Automation?

Visual automation relies on the appearance of on-screen elements to perform an action, whereas traditional automation relies on the presence of elements in the background resources. To accomplish this, a set of pre-defined reference images and/or transitions is stored. Scripts then compare the stored images to the current screen in a set sequence to ensure the application runs through the expected on-screen transitions. Actions can also be scripted in response to on-screen changes. For example, the tool would watch for the appearance of a login screen and compare it to the expected result. If the screen matches, the tool would fill in the user name and password fields by mimicking mouse clicks and keyboard strokes.
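The matching step at the heart of this can be sketched in a few lines of Python. Real tools use fuzzy, OCR- and computer-vision-backed matching; this toy version does an exact sub-grid search over pixel values, purely to illustrate the mechanism:

```python
# Minimal sketch of the core of visual automation: locating a stored
# reference image ("template") inside the current screen capture.
# Both images are plain 2D lists of pixel values and the match is
# exact; production tools score approximate matches instead.

def find_on_screen(screen, template):
    """Return (row, col) of the template's top-left corner, or None."""
    th, tw = len(template), len(template[0])
    sh, sw = len(screen), len(screen[0])
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None  # element not (yet) visible; a script would retry

# A 4x4 "screen" containing a 2x2 "login button" pattern at (1, 2)
screen = [
    [0, 0, 0, 0],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 0, 0],
]
button = [[9, 9],
          [9, 9]]
```

A script loops this check until the pattern appears (or a timeout elapses), then sends clicks or keystrokes at the matched coordinates.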

 

Visual automation tools not only watch the screen for the appearance of specific elements; they can also act on element transitions, the disappearance of elements, or elapsed time. Actions against these on-screen elements mimic human actions: clicking, double-clicking, dragging and dropping, filling forms, and so on, to the full extent of what a human can do. Several tools are currently available to perform visual automation, including Squish and my favorite, Sikuli.

 

Why Visual Automation?

Visual automation acts much closer to human behavior than object-oriented automation tools do. Its actions and reactions are based solely on the visual stimuli a human would react to, which allows testing to be conducted in a way that is much closer to the human experience than any other type of automation. Consider the following examples:

 

 


 



First, imagine a login page where the front-end graphic for the login button fails to load. A real human end user would have issues with the page, but object-oriented automated tools would have no trouble finding the login button as long as only the front-end graphic is missing.

 


 



Second, consider a button that exists in the element tree but is rendered far from its intended position. Such a test would pass under object-oriented automation, where the tool only checks whether an element exists without considering its proper placement, whereas visual automation would properly identify the defect.
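The contrast can be shown with a toy Python example (the element names, coordinates, and tolerance below are invented for illustration):

```python
# Toy contrast between object-oriented and visual checks. The "page"
# has a login button present in its element tree, but rendered well
# below the position recorded in the reference screenshot.

page_elements = {"login_button": {"exists": True, "x": 40, "y": 780}}
expected_position = {"x": 40, "y": 480}

def object_check(elements):
    # Passes: the element is present in the background resources.
    return elements.get("login_button", {}).get("exists", False)

def visual_check(elements, expected, tolerance=10):
    # Fails: the button is not where the stored reference says it is.
    btn = elements["login_button"]
    return (abs(btn["x"] - expected["x"]) <= tolerance
            and abs(btn["y"] - expected["y"]) <= tolerance)
```

Here `object_check` reports success while `visual_check` flags the misplaced button, mirroring the scenario above.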

The above scenarios are only a couple of examples from a long list where an automation tool that behaves like a human user would be more useful.

Another advantage of an OCR-based automation tool is that it is not bound to a single application, whereas some other tools have limited or even no access to the system outside the application being tested. Visual automation tools can watch the entire screen for any change, regardless of its source. This makes it possible to launch multiple unrelated applications and watch their interactions. It is even possible, if one were so inclined, to launch a virtual machine and then run multiple applications within it, all under the control of a single automation tool. This can be quite powerful under the right circumstances.

 

The Case Against Visual Automation Tools

Visual automation also has some glaring disadvantages. If it didn’t, it would be much more widespread.

 

Firstly, it is not well suited to repetitive, fast-paced testing, which is typical in a stress-test scenario. Because it mimics a human user, this type of automation waits for the application to fully load and respond before proceeding, so testing time is usually much longer than with object-oriented automation. As a secondary effect, visual automation is also ill-suited for fast data verification. It is possible to run through a set of data (perhaps stored in a spreadsheet or CSV) but doing so is much more time-consuming than with object-oriented automation tools.

Secondly, it can’t handle multiple instances of the same application being tested. This type of automation watches the monitor for predetermined screens to show up. If multiple instances of the same or even similar screens appear at the same time, it can quickly become confusing. This is an unfortunate side effect of the ability to watch the entire system screen rather than just the single application.

Lastly, and perhaps most importantly, there is a potentially higher maintenance cost. Because expected results need to be stored and updated, there is much more human involvement in maintaining the comparison banks. Every change to the visual look requires capturing and storing a new expected result, and even a change in a transition requires script updates. Of course, the usual tricks of modularization and function extraction help, but they only reduce the labor without eliminating it.

 

Opening New Doors

In the world of automation, visual (OCR-based) tools are often overlooked even though there are plenty of scenarios where they offer a superior solution. Because they behave more like human end users, they can catch errors that object-oriented tools would miss, and their system-wide reach can open new doors in automation.

 

Yes, there are several glaring shortcomings in visual automation, as mentioned above, but that is not to say other tools are unnecessary or that any tool should be used exclusively. For any serious test automation effort, a QA manager should evaluate all available tools and use each to its strengths. I just don't want you to miss out on OCR tools and the advantages they offer.

 




iOS App Development, mobile strategy
By Mike Woods, Atimi Software Inc.

Dynamic Type is not new. It has been around since iOS 7, but its adoption by applications has been somewhat patchy – until now. With iOS 11, Apple is making significant improvements to the feature that should lead to wide scale adoption. This article goes through these changes and considers how they will impact good app design and implementation.


What is Dynamic Type?

iOS has always included great support for text. The OS has dozens of high quality, scalable fonts and a sophisticated text rendering engine. Designers and developers have been able to leverage this rich feature set to produce attractive and functional UIs.

 

However, with a small screen, UI design is always a compromise between fitting content into the view and readability. And as the size of readable text varies from person to person, what works for one may be unusable for another. Text-heavy applications (such as news readers) might offer a text size setting but as such features need to be coded manually, most applications just don’t warrant the effort.

 

To solve this issue, Apple introduced Dynamic Type in iOS 7. It allows designers to utilize a set of seven (later increased to ten) text styles when selecting fonts. These styles are then mapped to different fonts and sizes according to the user’s text size setting. With Dynamic Type, any application can be responsive to the user’s size preference, which improves the experience for a broader range of users.

Dynamic Type supports seven size settings, allowing significant variation in font size. For example, the Body text style is 17pt at the default setting but ranges from 14pt to 23pt. Nor is that the limit: iOS includes an accessibility setting that adds five larger sizes, all the way up to 53pt for body text. (Note that, at present, only the Body text style changes across the accessibility sizes; this will change in iOS 11.)
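To make the range concrete, here is a small sketch (Python, purely for illustration) of body-text point sizes across the twelve settings. The 17pt default, the 14-23pt standard range, and the 53pt accessibility maximum come from the text above; the intermediate values are approximate and the category names are informal labels, not API identifiers.

```python
# Body-text point sizes per text-size setting. Endpoint values are
# from the article; the steps in between are approximate and may
# differ slightly from Apple's exact tables.

BODY_SIZES = {
    # seven standard settings
    "xSmall": 14, "small": 15, "medium": 16,
    "large": 17,            # the default
    "xLarge": 19, "xxLarge": 21, "xxxLarge": 23,
    # five accessibility settings
    "AX1": 28, "AX2": 33, "AX3": 40, "AX4": 47, "AX5": 53,
}

# Full dynamic range of body text: 53 / 14, roughly 4:1
dynamic_range = max(BODY_SIZES.values()) / min(BODY_SIZES.values())
```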

 


 


This flexibility comes with its own challenges. The dynamic range of body text is roughly 4:1, making even short sentences span multiple lines. Static layouts clearly will not function with Dynamic Type. Fortunately, Auto Layout handles most of the heavy lifting, allowing the UI to adjust without the need for code.

Nevertheless, not all layout issues can be solved with Auto Layout alone. Also, retrofitting Dynamic Type into an existing application (particularly if it includes manual layout code) can be difficult. Finally, adopting Dynamic Type means abandoning the other OS-supplied fonts, not to mention custom fonts; not an easy choice for designers seeking a distinctive look.

These challenges have led many apps to be slow to adopt Dynamic Type, or to do so in a naive fashion, resulting in broken UIs, particularly for the larger settings.

However, all this should be about to change…


What is Coming in iOS 11?

At this year’s WWDC, Apple announced several improvements to Dynamic Type for iOS 11 that will have a big impact on the rate and cost of its adoption.

Perhaps the most significant is the ability to use other fonts with Dynamic Type. This allows designers effectively to redefine the text style palette (including typeface and point size) and the system will automatically scale them according to the user’s text size.

To understand the impact of this, just consider an educational application that wants to use Chalkboard SE (one of the standard iOS fonts) as its main typeface. Previously that would rule out Dynamic Type. In iOS 11, not only is this possible, but the designer could decide that the text should be slightly bigger (18pt, say, for body text) to look clearer with the handwriting typeface – and the fonts will still scale appropriately at other text sizes.
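In UIKit this scaling is exposed through UIFontMetrics. The Python sketch below models the effect with a simple linear ratio applied to a designer-chosen base size; it is an illustration of the idea only, not Apple's actual scaling curves.

```python
# Illustrative model of iOS 11 font scaling for a custom text style.
# The linear ratio here approximates the behavior; UIKit's real
# scaling curves are not a straight proportion.

DEFAULT_BODY = 17  # body point size at the default setting

def scaled_size(base_size, body_size_at_setting):
    """Scale a designer-chosen base size in step with body text."""
    return round(base_size * body_size_at_setting / DEFAULT_BODY)

# An 18pt Chalkboard SE body style, as in the example above:
scaled_size(18, 17)   # at the default setting -> 18
scaled_size(18, 23)   # at the largest standard setting, grows in step
```

The same ratio idea applies to the manual-layout case mentioned below: spacing values can be scaled alongside the text so a hand-computed layout breathes at larger sizes.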

It also becomes easier to update existing UIs for Dynamic Type. Auto Layout gains the ability to adjust vertical spacing according to text size so text doesn't get cramped at larger sizes. And for manual layout code, it is possible to scale point distances according to text size for a similar effect.





Images can also scale to allow icons to be more visible in large accessibility text sizes. UIKit is even capable of keeping icons in vector form to avoid pixelation issues.

Beyond this, there is improved layout tuning as the text size is being made available as part of UITraitCollection, which is the standard way to track other factors affecting layout.

One final change is that now all text styles change point size with accessibility. This will greatly improve the reading experience for low-vision users as all text, not just body text, will scale. It also impacts design thinking as it means much more variation in content size.


What Does Apple Say?

Perhaps more important than the technical improvements to Dynamic Type is the push by Apple to promote accessibility in iOS 11. This includes applying “design for everyone” principles to the applications and utilities that ship with the OS. Amongst these principles are three goals for the use of text.

1.   Text should be large enough for the user to read. (In other words, text should scale with Dynamic Type.)

2.   Text should be fully readable. It shouldn't be truncated unnecessarily, and it shouldn't be overlapped or clipped.

3.   An app’s UI should look beautiful at all text sizes.


Achieving these goals requires UIs to be more adaptive than simply allowing text to grow. For example, table cell content is often organized horizontally with an image or icon on the leading side and text label trailing. This looks great for regular text sizes but the larger accessibility fonts lead to the label looking cramped (even to the extent of long words being broken across multiple lines) while the icon sits in a large vertical whitespace. Switching to a vertical layout with the icon above the text maximizes the horizontal space for the text while fitting more content onscreen.
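That layout decision can be sketched as a simple rule (the category labels below are informal stand-ins; in UIKit the check would be made against the trait collection's content size category):

```python
# Sketch of an adaptive cell-layout decision: icon beside the label at
# standard text sizes, icon stacked above it at accessibility sizes so
# the label gets the full width. Category names are illustrative only.

ACCESSIBILITY_CATEGORIES = {"AX1", "AX2", "AX3", "AX4", "AX5"}

def cell_axis(size_category):
    """Choose the stacking axis for a table cell's icon and label."""
    if size_category in ACCESSIBILITY_CATEGORIES:
        return "vertical"    # icon above text: full width for the label
    return "horizontal"      # icon leading, label trailing
```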

In other situations, accommodating larger fonts may mean reordering vertical content to ensure that action buttons don’t get pushed down by multiline text, reorganizing tool buttons into multiple rows, or hiding ancillary content to make room for important text.

None of these adaptive designs come for free but Apple makes the point that they are worth it to deliver a great experience for everybody. And by delivering such an experience within the system applications, Apple is raising the bar for third-party apps. With iOS 11, users will be more willing to enable accessibility features to improve ease of use, and apps that fail to support Dynamic Type well will ultimately lose out to those that do.


