Automation is not a new topic, with most software development QA teams employing it in one way or another. There is also no lack of tools to choose from. On desktop, there are the ever-popular Selenium and the HP-backed UFT (formerly QTP). For mobile, Appium and MonkeyTalk are among the more frequently used solutions. All of these tools are fine choices for functional and data-driven tests due to their object-oriented nature. However, in my experience, there is one type of automation that is seldom mentioned: visual-based testing using OCR (Optical Character Recognition) technology.
What is Visual Automation?
Visual automation relies on the appearance of on-screen elements to perform an action. This is different from traditional automation, which relies on the presence of elements in the application's underlying object model or resources. To accomplish this, a set of predefined visual images and/or transitions is stored. Scripts are written to compare the stored images to the current screen appearance in a set sequence, to ensure the application is running through the expected on-screen transitions. Actions can also be scripted in response to on-screen changes. For example, the tool would check for the appearance of a login screen and compare it to the expected result. If the screen matches, the tool would fill in the user name and password fields by mimicking mouse clicks and keyboard strokes.
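At its core, the image comparison described above can be reduced to sliding a stored reference image over a screenshot and checking how closely each region matches. The following is a minimal sketch of that idea in plain Python with NumPy; real tools such as Sikuli use far more robust matching (e.g. OpenCV template matching with similarity thresholds), and the function name and toy arrays here are purely illustrative.

```python
import numpy as np

def find_on_screen(screen: np.ndarray, template: np.ndarray,
                   tolerance: float = 0.0):
    """Slide `template` over `screen` and return the (row, col) of the
    first region whose mean absolute pixel difference is within
    `tolerance`, or None if no region matches."""
    sh, sw = screen.shape
    th, tw = template.shape
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            region = screen[r:r + th, c:c + tw]
            if np.abs(region - template).mean() <= tolerance:
                return (r, c)
    return None

# Toy grayscale "screenshot" with a 2x2 bright square at row 1, col 2
screen = np.zeros((4, 5))
screen[1:3, 2:4] = 255
button = np.full((2, 2), 255.0)

print(find_on_screen(screen, button))  # -> (1, 2)
```

Once the element's location is found, a visual automation tool would move the real mouse cursor to those coordinates and click, rather than invoking the element through an API.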
Visual automation tools not only watch the screen for the appearance of specific elements, but can also act on element transitions, the disappearance of elements, or elapsed time. Actions against these on-screen elements mimic human actions: the tools can click, double-click, drag and drop, fill forms, and so on. The range of possible actions covers essentially everything a human user can do. There are several tools currently available to perform visual automation, including Squish and my favorite, Sikuli.
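Reacting to appearance, disappearance, or elapsed time usually boils down to polling the screen until a condition holds or a timeout expires (Sikuli, for instance, exposes this pattern through its `wait` and `waitVanish` calls). Below is a generic, hedged sketch of such a polling loop; `wait_until` and its parameters are illustrative names, not any tool's actual API.

```python
import time

def wait_until(condition, timeout: float = 5.0, interval: float = 0.1):
    """Poll `condition` (a zero-argument callable) until it returns a
    truthy value or `timeout` seconds elapse.  Returns the truthy value,
    or None on timeout.  A visual tool would pass a condition that
    screenshots the display and searches it for a stored image."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    return None

# Example: waiting for an element to appear vs. never appearing
print(wait_until(lambda: "login_screen", timeout=0.5))   # -> login_screen
print(wait_until(lambda: False, timeout=0.2, interval=0.05))  # -> None
```

Waiting for an element to *vanish* is the same loop with the condition inverted, which is how transition- and disappearance-based triggers are typically built.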
Why Visual Automation?
Visual automation acts much closer to human behavior than object-oriented automation tools. Its actions and reactions are based solely on the same visual stimuli a human would react to. This allows testing to be conducted in a way that is much closer to the human experience than any other type of automation. Consider the following example:

Suppose a page's login button fails to render visually but is still present in the underlying page structure. A real human end user would be unable to proceed, yet object-based automated tools would have no trouble finding and clicking the login button, since only the front-end graphic is missing.
The Case Against Visual Automation Tools
Opening New Doors