By Min Ying, Atimi Software Inc.

Automation is not a new topic; most software development QA teams employ it in one way or another, and there is no lack of tools to choose from. On desktop, there is the ever-popular Selenium and the HP-backed UFT (formerly QTP). For mobile, Appium and MonkeyTalk are among the more frequently used solutions. All of these tools are fine choices for functional and data-driven tests thanks to their object-oriented nature. However, in my experience, there is one type of automation that is seldom mentioned: visual testing using OCR (Optical Character Recognition) technology.


What is Visual Automation?

Visual automation relies on the appearance of on-screen elements to perform an action. This differs from traditional automation, which relies on the presence of elements in the underlying resources. To accomplish this, a set of pre-defined visual images and/or transitions is stored. Scripts then compare the stored images to the current screen in a set sequence to ensure the application runs through the expected on-screen transitions. Actions can also be scripted in response to on-screen changes. For example, the tool would check for the appearance of a login screen and compare it to the expected result. If the screen matches, the tool would fill in the user name and password fields by mimicking mouse clicks and keyboard strokes.
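
To make this concrete, here is a minimal sketch of that login flow as a SikuliX script (SikuliX exposes a Python/Jython API). The .png names and credentials are placeholders for screenshots and test data you would prepare in advance; they are the stored expected results the tool matches against the live screen.

    # Wait up to 10 seconds for the stored login screen image to appear.
    wait("login_screen.png", 10)

    # Fill in the credentials by mimicking mouse clicks and keystrokes.
    click("username_field.png")
    type("testuser")
    click("password_field.png")
    type("s3cret")

    # Submit, then confirm the expected home screen replaces the login screen.
    click("login_button.png")
    wait("home_screen.png", 15)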

 

Visual automation tools not only watch the screen for the appearance of specific elements; they can also act on element transitions, the disappearance of elements, or elapsed time. Actions against these on-screen elements mimic human actions: the tools can click, double-click, drag and drop, fill in forms, and so on. The range of actions covers the full extent of what a human user can do. Several tools are currently available for visual automation, including Squish and my favorite, Sikuli.
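
For instance, SikuliX can act on a disappearance or perform a drag and drop directly; the image names below are again placeholders:

    # Wait up to 20 seconds for a loading spinner to vanish from the screen.
    waitVanish("spinner.png", 20)

    # Drag a list item onto the trash icon, just as a user would.
    dragDrop("list_item.png", "trash_icon.png")

    # Double-click an icon to open it.
    doubleClick("app_icon.png")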

 

Why Visual Automation?

Visual automation behaves much closer to a human user than object-oriented automation tools do. Its actions and reactions are based solely on the visual stimuli a human can react to, allowing testing to be conducted in a way that is much closer to the human experience than any other type of automation. Consider the following examples:

[Image: a login page whose login button exists in the page resources but whose front-end graphic fails to render]

In the case above, a real end user would have trouble with the page, but automated tools would have no trouble finding the login button, as long as only the front-end graphic is missing.

[Image: a screen where a UI element exists but is rendered in the wrong position]

The above test would pass under object-oriented automation, which checks whether an element exists without considering its proper placement; with visual automation, the defect would be properly identified.

These are only a couple of examples from a long list of scenarios where an automation tool that behaves like a human user is more useful.

Another advantage of an OCR-based automation tool is that it is not bound to a single application, whereas other tools often have limited or even no access to the system outside the application under test. Visual automation tools can watch the entire screen for any change, regardless of its source. This makes it possible to launch multiple unrelated applications and watch their interactions. It is even possible, if one were so inclined, to launch a virtual machine and then multiple applications within it, all under the control of a single automation tool. It can be quite powerful under the right circumstances.
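
As a rough illustration, a single SikuliX script can drive two unrelated applications at once; the executable paths and images below are placeholders:

    # Launch two unrelated applications from the same script.
    App.open("C:\\Program Files\\AppA\\appA.exe")
    App.open("C:\\Program Files\\AppB\\appB.exe")

    # Drag a document from application A's window into application B's.
    wait("appA_document.png", 10)
    dragDrop("appA_document.png", "appB_drop_area.png")

    # Watch the whole screen, regardless of source, for B's confirmation.
    wait("import_complete.png", 30)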

 

The Case Against Visual Automation Tools

Visual automation also has some glaring disadvantages. If it didn’t, it would be much more widespread.

 

Firstly, it is not well suited to repetitive, fast-paced testing, which is typical in a stress test scenario. Because it mimics a human user, this kind of automation waits for the application to fully load and respond before proceeding, so testing time is usually much longer than with object-oriented automation. As a secondary effect, visual automation is also ill-suited to fast data verification. It is possible to run through a set of data (perhaps stored in a spreadsheet or CSV file), but it is much more time consuming than with object-oriented automation tools.
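
Such a data-driven run is still possible; a sketch in the same SikuliX style, assuming a hypothetical accounts.csv with user and password columns, might look like this. The per-row waits for the UI to settle are exactly what makes it slow:

    import csv

    # Replay the login flow once per row of test data.
    with open("accounts.csv") as f:
        for row in csv.DictReader(f):
            click("username_field.png"); type(row["user"])
            click("password_field.png"); type(row["password"])
            click("login_button.png")
            wait("home_screen.png", 15)   # wait for the UI to settle
            click("logout_button.png")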

Secondly, it cannot handle multiple instances of the application under test. This type of automation watches the screen for predetermined images to show up; if multiple instances of the same, or even similar, screens appear at the same time, it can quickly become confused. This is an unfortunate side effect of the ability to watch the entire system screen rather than a single application.

Lastly, and maybe most importantly, there is a potentially higher maintenance cost. Because expected results need to be stored and updated, there is much more human involvement in maintaining the comparison banks. Every change to the visual look requires capturing and storing a new expected result, and even a change in a transition requires script updates. Of course, the usual tricks of modularization and function extraction help, but they only reduce the labor without eliminating it.
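
Function extraction here simply means wrapping repeated visual steps so that a change to a screen's look touches one place; a small sketch with illustrative names:

    # The only place to update if the login screen's appearance changes.
    def login(user, password):
        wait("login_screen.png", 10)
        click("username_field.png"); type(user)
        click("password_field.png"); type(password)
        click("login_button.png")
        wait("home_screen.png", 15)

    login("testuser", "s3cret")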

 

Opening New Doors

In the world of automation, visual (OCR-based) tools are often overlooked even though there are plenty of scenarios where they offer a superior solution. Because they behave more like human end users, they can catch errors that object-oriented tools would overlook, and their system-wide reach can open new doors in automation.

 

Yes, as I mentioned above, there are several glaring shortcomings in visual automation, but I am not saying other tools are unnecessary or that any tool should be used exclusively. For any serious test automation effort, a QA manager should evaluate all available tools and play each to its strengths. I just don't want you to miss out on OCR tools and the advantages they offer.

 




Mobile Testing

By Ashley Whitehead, Director of QA, Atimi Software Inc.


There are a number of issues regularly cited as the main problems with testing mobile apps: fragmentation of devices and manufacturers, OS versions, networks (changing connection types and speeds), usability, testing tools, automation, and security. Whilst some are complex and require skilled, experienced testers, others can be overcome with a bit of knowledge of mobile apps and a good plan. I want to talk about one specifically and offer the benefit of our experience.

The fragmentation of the device market, especially Android (although more recently the iOS family has grown as well!), is overwhelming. You can find infographics displaying the overabundance of devices. You cannot test them all, so how do you choose a reasonable set of devices to test?

For the purposes of this article I will focus on Android devices, and I am not considering OS version. The principles are the same whatever the platform.

1. Factors

 

There are a number of key factors that we will need to consider:
* Screen size and screen resolution
* Device manufacturer
* Carrier
* Processor chipset
* Memory
* User demographics
* App design

 

Firstly, let's get rid of the least important factors, starting with device manufacturer and carrier. These have very little impact on the performance of the app; we find very few issues that are specific to either of them, and you can cover a number of different manufacturers when making your device selections anyway.

 

Processor chipset and memory will affect performance, and that is why you need to consider app design. If your app primarily downloads content from a web service and displays it in a simple UI, the biggest influences on performance are network bandwidth and web service performance; the device itself will not have a significant impact. If your app is a UI-heavy game, then you will need to test on both high-end and low-end devices.

The profile of your app's expected users will also have an influence: are you expecting affluent users, budget-conscious users, or a wide range? Then again, you may not have that information available to base a choice on.

Screen size and resolution are the biggest factors for most consumer apps; this also includes aspect ratio and portrait/landscape rotation. Designing your UI to work effectively across the range of possibilities whilst maximizing the appeal of your design is a challenge. The size and position of buttons and other user input controls, dynamically resizing UI controls and text, and scrolling and zooming behavior are just some of the things you need to test for.

2. Selection Method

 

One of the most useful pieces of information you can have for selecting your set of devices is usage data. The volume of sales for each device is interesting, but far more useful is knowing which devices are being used with apps similar to yours. If you have a mobile-friendly website used by a similar group of users to those you expect for your app, you can look at its profile of devices. If not, there are a few places that publish usage data by device type; you can look at those and decide whether they are close enough to your app. This will give you a long list, with each device type probably accounting for only a few percent of the traffic.

Next you need to find the screen size and resolution for each one. This may take a while the first time round, but if you repeat the exercise every quarter, the number of new devices appearing in your list will be relatively small. Once you have the screen sizes and resolutions, you will see that the devices start to fall into groups, and you can then select a single device from each group to represent it, as sketched below.
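
A rough sketch of that grouping step in Python, assuming a hypothetical devices.csv with model, width, and height columns:

    import csv
    from collections import defaultdict

    groups = defaultdict(list)
    with open("devices.csv") as f:
        for row in csv.DictReader(f):
            w, h = int(row["width"]), int(row["height"])
            # Bucket by width and rounded aspect ratio, so e.g. a
            # 1080x1920 phone and a 1080x2340 phone fall into
            # different groups.
            key = (w, round(h / float(w), 1))
            groups[key].append(row["model"])

    # One representative per group becomes a candidate test device.
    for key, models in sorted(groups.items()):
        print(key, "->", models[0], "(1 of %d devices)" % len(models))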

 

How do you select which device represents each group? Consider the following factors:

User profile – If you have a specific target group of users, does this affect your selection based on the price of devices? (Make sure your original list of devices was relevant to the geographical area you are considering.)

App performance – Is there an aspect of the app that will be affected by the processor, graphics chip, or memory? If so, select from both higher- and lower-performing devices.

Manufacturer – Select from a range of manufacturers.

And finally, pick the devices you want to test. If a device has just been released, it won't appear in historical usage stats, but if it is interesting to you, pick it anyway. I'll talk in the test planning section below about how you can test additional devices without significant extra effort.
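
Continuing the earlier sketch, one way to pick a representative per group is to rotate across manufacturers so the final set covers several of them; the device data here is purely illustrative:

    # Each group maps to candidate (model, manufacturer) pairs.
    candidates = {
        (720, 1.8):  [("Galaxy A12", "Samsung"), ("Moto G9", "Motorola")],
        (1080, 2.2): [("Pixel 6", "Google"), ("Galaxy S21", "Samsung")],
    }

    chosen, used_makers = [], set()
    for key, models in sorted(candidates.items()):
        # Prefer a manufacturer the selection does not cover yet.
        pick = next((m for m in models if m[1] not in used_makers), models[0])
        chosen.append(pick[0])
        used_makers.add(pick[1])

    print(chosen)  # e.g. ['Galaxy A12', 'Pixel 6']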

 

 

3. Test Planning

 

Let us create an imaginary test suite that is 40% functional tests, 10% performance tests, and 50% UI tests, and say you have selected six devices to test. One approach to avoid running every test on every device is to spread the functional tests across all six devices, running each test only once per cycle of testing. In subsequent test cycles, you can swap which device runs which test to gain more coverage.

The UI tests can then be split into two sets: those unlikely to be affected by screen size and resolution, and those that will be. Again, you can spread the first set across all six devices. You are then left with only the screen-dependent UI tests to run on every device. If the app has performance concerns, run the performance tests on the highest- and lowest-performing devices.

With this risk-based approach, you can cut the amount of testing by roughly 60% compared to running all tests on all devices, as the arithmetic below shows.
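
To check that figure, here is the arithmetic as a small Python sketch, assuming the screen-dependent tests make up half of the UI set (25% of the suite):

    # Shares of an imaginary 100-test suite, per the text.
    functional, performance, ui_generic, ui_screen = 40, 10, 25, 25
    devices = 6

    # Exhaustive plan: every test on every device.
    full = (functional + performance + ui_generic + ui_screen) * devices

    # Risk-based plan: functional and generic UI tests run once each,
    # screen-dependent UI tests on all devices, performance on two.
    reduced = functional + ui_generic + ui_screen * devices + performance * 2

    print(full, reduced)  # 600 vs 235, i.e. roughly a 60% reduction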

 

 

4. Conclusion

 

Device fragmentation can be daunting when you first consider it. However, with careful consideration of the parameters of your app and your target audience, and systematic analysis of the available devices, you can achieve wide coverage without an excessive increase in testing effort. Or you can engage Atimi’s QA team and we will work with you to create and execute the optimum test suite.

 

 
Get in touch with us to find out how Atimi Software can help you build a custom, innovative, enterprise app that offers a superior user experience and stands the test of time.

 

778-372-2800

 

info@atimi.com
