Finding more revenue, comprehensive quality, and a sneak peek into the next big thing in mobile experience
As systems have been built that can “look at” and recognize the visual elements of a screen or UI (think OCR, Tesla’s driving tech, image recognition, computer vision, etc.), a whole new breed of testing has been born: Visual Testing. But what is it?
One of the pioneers in the Visual Testing space, Applitools, phrases it as:
“Visual testing is the automated process of comparing the visible output of an app or website against a baseline image. In its most basic form, visual testing, sometimes referred to as snapshot testing, compares differences in an image by looking at pixel variations.”
So, basically, according to Applitools, Visual Testing is here to compare what you’ve already built against baselines, and then point out the differences so that you can capture both visual issues and functional issues that happen to render visually.
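To make the pixel-comparison idea concrete, here is a minimal, library-agnostic sketch (all names here are illustrative, not any vendor’s API): represent two rendered screens as grids of RGB pixels and report the fraction of pixels that differ from the baseline.

```python
# Minimal sketch of pixel-based snapshot comparison. Plain nested lists of
# (R, G, B) tuples stand in for decoded screenshots, so no imaging library
# is required. All names are illustrative.

def visual_diff(baseline, actual):
    """Return the fraction of pixels that differ between two same-size frames."""
    if len(baseline) != len(actual) or len(baseline[0]) != len(actual[0]):
        raise ValueError("frames must be the same size")
    total = len(baseline) * len(baseline[0])
    changed = sum(
        1
        for row_b, row_a in zip(baseline, actual)
        for px_b, px_a in zip(row_b, row_a)
        if px_b != px_a
    )
    return changed / total

WHITE, RED = (255, 255, 255), (255, 0, 0)

# A 10x10 all-white baseline, and an "actual" render with one stray red pixel.
baseline = [[WHITE] * 10 for _ in range(10)]
actual = [row[:] for row in baseline]
actual[3][3] = RED

print(visual_diff(baseline, baseline))  # 0.0  -> screens match
print(visual_diff(baseline, actual))    # 0.01 -> 1 of 100 pixels differs
```

A real tool adds tolerances, anti-aliasing handling, and region masking on top of this core comparison, but the baseline-versus-actual diff is the heart of it.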
Personally, I have my own thoughts on this definition. However, before I give my take (spoiler alert: I think it’s “right,” but misses out on a lot of the potential of “Visual Testing”), let’s dive a bit deeper into “Why” and “When” Visual Testing is useful.
While Visual Testing can be used for a variety of purposes, one of the major areas where it has shined as a “must-have” solution is in comparing the visual output of an application in one browser or on one device to the output of that same application in another browser, web environment, or on another mobile device type. Because applications and websites both need to look great no matter the channel, this has saved countless hours and dollars for teams that would otherwise have had to do this kind of comparison manually. Just think: there are thousands of device types out there for mobile alone…and then you have to take screen size, OS, resolution, etc., into account.
Long story short? It would take forever and require armies of manual testers to ensure quality across this broad set of device permutations. While some organizations could theoretically take that approach, teams are finding much more success by automating the process. To return to the OG of Visual Testing, Applitools: teams performing automated Visual Testing have seen incredible returns on their investments. According to a study run by Applitools, test creation was 5.8x faster when using automated Visual Testing. That is a significant improvement.
And it’s not just the speed at which testing can occur; it’s also the efficiency with which bugs are identified. While Functional Testing is a major building block of any worthwhile testing strategy, it cannot catch every kind of bug. And, oftentimes, the bugs that make it past Functional Testing are visual in nature. While I alluded to it in our recent blog around the importance of comprehensive quality, let’s take the classic example of white text vs. red text on a white background: a mobile banking app running on two devices with different operating systems and screen sizes.
I just received an overdraft notification via email. I’m not sure how this happened, but my bank sent me an email telling me I need to transfer funds within the next 24 hours to avoid overdraft penalties. I immediately open my banking app and am met with the log-in screen. Because I’m nervous and anxious to get this fixed, I accidentally type in the wrong password. I hit log-in and get an error message in red text above the log-in field. So, I correctly input my password, log in, and find that there was a simple mistake where certain funds were being held longer than expected. I transfer money from my savings account into my checking, and all is right in the world.
Now, for user #2: The same exact thing happens up until the user tries to log in. So, picking up from the log-in step:
This time, instead of the error message, nothing happens. I wonder if I just hit the wrong area of the screen, so I hit the log-in button again. Still nothing. I try a third time. Now, I’m taken to a new screen. It says I am locked out of my banking app because I entered the wrong password too many times. But I was never even notified that I had entered the wrong password. My response? A 1-star review in the app store.
Both users were using the same app, but on different phones running different operating systems and screen sizes. A defect is raised. The support team looks into the issue and finds that the second user’s app did, in fact, raise an “incorrect password” message, but that message was somehow displayed in white text instead of red. The background of the banking app is white, so the text didn’t ever appear visibly for the end-user.
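One hedged sketch of how an automated check could catch exactly this class of bug: given the text color and background color pulled from a screenshot (the color-extraction step, e.g. via OCR bounding boxes, is assumed to happen upstream), compute the WCAG relative-luminance contrast ratio. White-on-white yields a ratio of exactly 1.0, meaning literally invisible text, which a simple threshold immediately flags.

```python
# Sketch of a contrast check that would have flagged the invisible error
# message. Implements the WCAG 2.x relative-luminance and contrast-ratio
# formulas; the colors themselves are assumed to come from the screenshot.

def _linearize(channel: int) -> float:
    """Convert an 8-bit sRGB channel to its linear-light value."""
    c = channel / 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    """WCAG relative luminance of an (R, G, B) color."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """WCAG contrast ratio between two colors (always >= 1.0)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

WHITE, RED = (255, 255, 255), (255, 0, 0)

print(round(contrast_ratio(WHITE, WHITE), 2))  # 1.0 -> invisible text, flag it
print(round(contrast_ratio(RED, WHITE), 2))    # 4.0 -> visibly distinct
```

A ratio near 1.0 means the user cannot see the text at all, regardless of whether the Functional Test confirmed the message was rendered.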
Simply put, Visual Testing should be used any time you are testing an application’s UI that has to operate within different environments. As far as where in the sprint this occurs, it really depends on the technologies you are testing as well as the tools you are using. Some tools, like Applitools, can start building Visual Tests on individual components during development with tools like Storybook. But there are also solutions, and use cases, where it makes the most sense to run automated Visual Tests against a fully functioning application and UI.
If you can’t tell by now, I like Applitools and other tools like them. I think that these companies introduced something very forward-looking. They are true visionaries in the test automation space.
However, I think that this first definition and understanding of Visual Testing falls somewhat short of where we could go with the concept.
As of now, all that Visual Testing tools are doing is comparing a baseline screenshot of an ideal visual output against screenshots taken of that same application or website. This comparison validates whether or not the visuals match in a way that meets requirements set by the business (assuming that the visual output in the baseline meets those requirements). So, really, when organizations talk about Visual Testing today, they are really talking about what I like to call “Visual Validation Testing.” For an example of what this kind of Visual Testing looks like, check out the image below from the Kobiton Portal. Today, we offer this same kind of Visual Testing, but ours is truly Mobile-optimized:
But what about getting to the baseline in the first place? What about the ability to test what your UI designers have built, validating not just that requirements are met, but that what those requirements define for the UI is actually beautiful? In other words, given the technology already built for Visual Validation (image recognition, OCR, etc.), why can’t a solution look at your UI, tell you exactly what beautiful is, and then recommend how your UI designers can best iterate upon and improve the UI? You can think of this hypothetical solution as Visual UX Testing.
With such a solution, you could test the beauty of the UI, build it perfectly, make that baseline, and then be able to implement Visual Validation Testing to make sure that your perfect UI displays correctly everywhere. With this combination, you could add an extra layer of Visual UX Testing on top of your traditional Visual UI testing, which would allow your teams to truly test the overall visual quality of your product.
I call this approach comprehensive Visual Testing. Comprehensive Visual Testing contains both Visual Design Testing (get me to baseline) and Visual Validation Testing (validate that my UI matches the baseline). This is where I see so much potential in the space, and that’s why I’m so excited to be sharing this blog with you.
Because this is where we are going here at Kobiton.
In our last few sneak-peek blogs, I have mentioned “Project NOVA.” Project NOVA has been a confidential project that we have been working on internally, focused on leveling up the AI and ML technology at the core of the soon-to-come next-generation Kobiton Mobile Experience Platform.
Visual Design Testing is just one of the many things that our AI is learning to do. Let’s take a look at it. While I still can’t share everything, here’s some of what you will be able to do:
This is the approach that we’ve taken, and this is how your teams are going to build the best UIs and mobile applications in the world. Our AI has been hard at work learning the ins and outs of what makes the world’s best mobile UIs (the “giants” in the mobile space) the best and most beautiful when it comes to User Experience. While we’ve been working on this for a while, the AI is now almost ready to deliver. With it, you’ll get the first wave of Kobiton-enabled scriptless UI Design testing.
The basic idea is that when you execute one of your scriptless automated tests through Kobiton, we are going to have solutions that automatically compare the visual output against the standards set by these “giants.” Then, our AI will feed your team recommendations around how to best optimize their app in light of those standards.
And, of course, you’ll do this right alongside our already-existing and mobile-optimized Visual Validation Testing.
Who knows? With Kobiton, maybe you’ll end up being the tallest giant around. After all, that’s our goal.
In this blog, we covered the fundamentals of the “What,” “Why,” and “When” of Visual Testing. We then explored the areas where Visual Testing can excel, and how Kobiton’s Project NOVA is going to deliver this next big step, among many other things. I’m personally very excited, and I hope that you are, too.
But, this isn’t the end of our sneak peeks around the next generation of Mobile Experience. In the weeks leading up to this big launch, we’ll continue to post blogs giving short insights into Project NOVA and what’s in store for you, your testing teams, and your business as a whole.
Until next blog! Happy testing!