Kobiton x Applitools
Super-Charge Your Mobile & Visual Testing
Watch this on-demand session with Martin Kowalewski, leader of the Global Sales Engineering team at Applitools, as he highlights the critical importance of flawless UI/UX in today’s fast-paced app development landscape. Explore the challenges of delivering seamless experiences across diverse devices and platforms amid rapidly evolving mobile technologies and shifting user expectations. Martin dives into the latest trends in mobile development and testing, showcasing the transformative role of AI in achieving exceptional user experiences.
Applitools: Leveraging AI for Mobile App Testing on Real Devices
Discover the importance of flawless UI/UX in mobile app development, learn the latest trends, challenges, and the transformative role of AI in delivering seamless experiences across diverse devices and platforms.
0:00 | Martin Kowalewski
Good morning, good afternoon, good evening. My name is Martin Kowalewski, I’m the Director of Sales Engineering here at Applitools, and welcome to my session on Leveraging AI for Mobile App Testing on Real Devices. During today’s session, we are going to provide an introduction to visual testing, review the different types of visual testing used and the benefits of using visual AI in native mobile automation, and finally wrap up with a demonstration showing how we can leverage these techniques to scale your native mobile automation with full-page functional and visual coverage.

So, what is visual testing? Visual testing is the process of validating all the visual aspects of an application’s UI on all platforms, going beyond functional testing tools like Appium, Espresso, or XCUITest to ensure that elements like content, images, buttons, etc., appear correctly and are not inhibiting the functionality or usability of the app. Visual testing catches the bugs that functional tests can miss. Visual defects happen to companies of all sizes. Everyone does functional testing in some form or fashion, whether that is manual, automated, or some combination of both. Sure, functional testing scripts can validate the size, the position, and the color scheme of visual elements. But if you do this, your test scripts will soon balloon in size due to the additional assertion code that you have to write. That assertion code often leads to tests that require more time to write, more time to maintain, and are less reliable and prone to failures when changes are made to the underlying implementation.

Now, within visual testing, there are typically two types of approaches: manual and automated. Manual visual testing means comparing two screenshots, one from your known-good baseline and another from the latest version of your app. For each pair of images, you have to invest time to ensure you’ve caught all issues, and again, that takes time and effort. In the native mobile arena, that generally starts with checking the visual elements and their function on a single operating system, single screen orientation, and single device form factor. It then continues on to other combinations, and that’s where a huge amount of the test effort lies: not in the functional testing, but in the inspection of visual elements across combinations of devices, screen orientations, and device form factors. With manual testing, each action has to be done individually on each screen configuration, which again has to be scaled across all the combinations that you need to support. And with that, there are simply too many combinations for manual testing to be a viable approach.

Imagine you need to test your app on two operating systems, Android and iOS; two screen orientations for mobile devices, portrait and landscape; and 10 standard mobile device display resolutions, which would cover phones and tablets. If you take the two screen orientations, multiply by the 10 device display resolutions, and then multiply by the two operating systems, then for a single unique screen we have 40 configurations to test. And if our app has, say, 50 unique screens, supporting those configurations equates to 2,000 combinations to cover. This approach is just not efficient: it comes at a huge cost in people time and, more importantly, slows down the release process of getting valuable new features to market quickly. So how do we address the challenge that manual visual testing presents?
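To make that arithmetic concrete, here is a tiny Java sketch using the illustrative counts from the talk (the numbers are the speaker's example figures, not universal constants):

```java
public class ConfigurationMatrix {
    public static void main(String[] args) {
        int operatingSystems = 2;   // Android and iOS
        int orientations     = 2;   // portrait and landscape
        int resolutions      = 10;  // standard phone and tablet display resolutions
        int uniqueScreens    = 50;  // unique screens in the app

        int configsPerScreen  = operatingSystems * orientations * resolutions; // 40
        int totalCombinations = configsPerScreen * uniqueScreens;              // 2,000

        System.out.printf("Configurations per screen: %d%n", configsPerScreen);
        System.out.printf("Total combinations to cover: %d%n", totalCombinations);
    }
}
```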
So to address this, we can leverage automated visual testing. Automated visual testing uses software to automate the process of comparing visual elements across various screen combinations to uncover these visual defects. Automated visual testing piggybacks on your existing functional test scripts running in a tool like Appium, XCUITest, Espresso, or WebdriverIO, and automates the process of taking a screenshot and performing the comparison. Using this approach, we find the following benefits. It drastically increases test coverage, because for every functional test that executes through your application, we are also covering the visual aspects of that screen, so you are getting both functional and visual coverage out of a single test run. We also find that the number of assertions you need to write is much lower, because we can use a single capture assertion of the screen, which ultimately reduces test code and the overhead of writing new tests to cover new features within the application. It also reduces the amount of test code that you have to maintain for that test moving forward. We also find that by executing automated visual testing as part of your strategy, you can catch both expected and unexpected bugs and defects before they reach production, because we’re running these regression cycles quickly and efficiently as part of your feature development. And ultimately, what this leads to as well is that more teammates can participate in application quality, because maintaining tests with the visual approach allows individuals who may not have test automation skills to approve and reject changes and participate in the maintenance of those screens moving forward.

So what are the different types of visual testing? We typically see two types of approaches in native mobile: a pixel comparison approach or a visual AI approach. First-generation tools, as well as open source automation frameworks, typically leverage pixel comparison. Pixel comparison is where a bitmap of a screen is captured at various points of a test run and its pixels are compared to a baseline bitmap. These comparison algorithms are extremely simplistic: they essentially iterate through each pair of pixels in the current image versus the baseline, check whether the color values are the same, and if the color values differ, they raise a visual bug. So even if there are only slight differences in the rendering of those pixels, a pixel comparison tool will highlight those differences, even when they are not perceptible to a human and are not real defects.

When we see this approach taken with open source tools or other solutions, pixel comparison fails in handling three critical areas. First and foremost: dynamic content. When you have dynamic content inside of your native mobile app that shifts over time, think of things like news, ads, user-submitted content, or recommendations for products that I may be interested in or that are on sale, you want to check that everything is laid out with the proper alignment and there are no overlaps. Pixel comparison tools can’t test for these use cases and will not provide you coverage in that area. So if you have dynamic content in your native mobile apps, that is not something that a pixel comparison tool can handle, and the typical recommended approach is to ignore the areas of the screen which actually have dynamic content.
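To illustrate how simplistic the pixel comparison described above is, here is a minimal sketch of a naive pixel diff in Java; it is a generic illustration of the technique, not any particular vendor's algorithm, and the file names are placeholders:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class NaivePixelDiff {
    // Returns true if the two screenshots differ by even a single pixel value.
    static boolean hasVisualDiff(BufferedImage baseline, BufferedImage current) {
        if (baseline.getWidth() != current.getWidth()
                || baseline.getHeight() != current.getHeight()) {
            return true; // different dimensions count as a difference
        }
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                // Compare packed ARGB values; any mismatch raises a "visual bug",
                // even one imperceptible to a human.
                if (baseline.getRGB(x, y) != current.getRGB(x, y)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        BufferedImage baseline = ImageIO.read(new File("baseline.png"));
        BufferedImage current  = ImageIO.read(new File("current.png"));
        System.out.println(hasVisualDiff(baseline, current) ? "FAIL" : "PASS");
    }
}
```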
And think about what that means in a native mobile app: on every single device, you have to deal with very dynamic data at the top of nearly every screen, things like network strength, the time, battery level, and more. So every screenshot you take is going to contain dynamic data. You’re going to have to ignore every area that has that network strength, time, and battery level in the screenshot, and maintain those ignore regions over time. And then, even when your app hasn’t changed, the environment can confuse pixel comparison tools. If your baseline and test screenshots were captured on devices with different display settings, think of zoom: a different zoom level on one device versus the other can turn nearly the entire page into a false positive.

So alternatively, in these situations where pixel comparison may not meet the needs of your dynamic native app screens, we can use AI for automated visual testing. Like pixel-based comparison tools, AI-powered automated visual testing takes page snapshots as your functional tests run. But unlike pixel comparisons, AI-powered automated visual testing tools use algorithms instead of raw pixels to determine when errors have occurred. And unlike snapshot testers, AI-powered automated visual tools do not need special environments that remain static to ensure accuracy. AI-powered tools like Applitools allow different levels of smart comparison and have the ability to cover both static and dynamic content.

Here are a few use cases that AI-powered visual testing covers that pixel comparison doesn’t, and this may vary depending upon the type of application that you provide. Think about validating cross-browser on a mobile platform, Chrome and Safari: pixel comparison will simply not be able to cover cross-environment testing due to the differences in rendering between those mobile web browsers. Scenarios with account balances that change from run to run, mobile device status bars, news, ads, user-submitted content, suggested content based on your profile, notification icons, content shifts, mouse hovers, browser upgrades, and so on: all of these typical scenarios that you face in your day-to-day environment are scenarios that pixel comparison will not support and cannot scale to meet.

Now, for a simple example of pixel comparison versus the power of AI-powered visual testing, think about a simple e-commerce native app that can display different products through filtering or search, or even show the same products sorted differently for different users. If I try to measure this with pixel comparison, the test is going to fail because the products aren’t in the exact same spot between the two images. So ultimately you have two choices: you can either apply ignore regions to all of the areas that have dynamic content, or look for a different approach. An AI-powered approach to automated visual testing gives you complete coverage with less work. An AI-powered visual test can validate a wide range of visual elements across a wide range of OS, browser (for mobile web), screen orientation, and resolution combinations, because the algorithms are intelligent enough to handle both static and dynamic data and to flag only the changes that are perceptible to a human. And that is where an AI approach with Applitools provides increased value versus those traditional open source or legacy visual testing solutions.
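As a rough sketch of what handling those dynamic regions can look like in code, the Applitools Java SDK exposes ignore regions and match levels along these lines; the status-bar coordinates below are hypothetical, and the exact API surface may differ by SDK version:

```java
import com.applitools.eyes.MatchLevel;
import com.applitools.eyes.Region;
import com.applitools.eyes.appium.Eyes;
import com.applitools.eyes.appium.Target;

public class DynamicContentHandling {
    // Configures a check so that dynamic areas do not cause false positives.
    static void checkHomeScreen(Eyes eyes) {
        // LAYOUT match level validates structure and alignment while tolerating
        // dynamic text and images (news, ads, recommendations).
        eyes.setMatchLevel(MatchLevel.LAYOUT);

        eyes.check("Home screen",
                Target.window()
                        // Ignore the OS status bar (clock, battery, signal strength).
                        // Hypothetical coordinates; locate the real region per device.
                        .ignore(new Region(0, 0, 1080, 80)));
    }
}
```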
So now that we’ve talked a little bit about what visual testing is, its benefits, and how to leverage visual AI, I’m going to proceed to a demo to show you what this looks like in action. What I am going to do is take a session from a device inside of Kobiton. So I’m going to launch a device inside of my Kobiton platform and install my Android application, version 1.2. This launches the application, and now I am going to navigate through my application in a manual way. I’m going to click on the Dow Jones market, which will generate a quote for the Visa symbol. You can see that it successfully transitioned to the stock quote tab: I can see my image, I can see the text, I can see my symbol. And if I go over to the history, I can see any previous quotes that I’ve generated; you can see that I have two quotes generated recently. So now I’m going to delete those stocks to finish my end-to-end scenario by clicking on the delete all stocks button. And that completes my end-to-end scenario. So this is creating a workflow from a manual session.

Now what I’m going to do is terminate, or exit, this session inside of Kobiton, and you can see that it is finishing up with the upload. We could look at our device logs here to triage using the power of the Kobiton platform, but what I want to do is take that manual session and convert it to an automated script that I can leverage with Appium automation. By simply converting this test case inside of the Kobiton platform and viewing it, you can see the test steps that I executed, including installing the app, touching on the Dow Jones element, receiving my stock quote, clicking on the history, and then deleting all of the stock quotes. This enables me to export this test case out into automation. Just by generating an Appium script, I can take that manual script and reuse it, either locally or within my automation, to continually test this end-to-end test case. And we will soon add both functional and visual coverage to it.

So here, inside of Kobiton, we are going to select the automation language and runner that we want to use; in my case, it’s Java JUnit. Then all I need to do is generate that script, which will allow me to review this project, and once I open it up, I can run this test on a go-forward basis without having to write a single line of code. The power here is that I took a manual session, perhaps from someone who doesn’t have developer skills, and converted it to automation that I can hand off to my automation team to run on a scheduled basis and provide full coverage moving forward. Now, just for simplicity’s sake, I’m going to open that code from a previously exported project. Then I’m going to add our visual validation from Applitools to the existing Appium script. I do that by adding our Appium SDK. So here I’m just adding the Applitools Appium SDK to the existing project dependencies, and I’ve also added the minimum information required to communicate with the Applitools platform.
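For context, an exported Java JUnit script along these lines might look like the following minimal Appium skeleton; the capabilities, element locators, and credential placeholders are illustrative assumptions, not the exact output Kobiton generates:

```java
import io.appium.java_client.android.AndroidDriver;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.net.URL;

public class StockQuoteTest {
    private AndroidDriver driver;

    @Before
    public void setUp() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        // Illustrative capabilities; a generated script would carry the real
        // device, app version, and Kobiton session settings.
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Galaxy S21");
        caps.setCapability("app", "<APP_URL>");
        driver = new AndroidDriver(
                new URL("https://<username>:<apiKey>@api.kobiton.com/wd/hub"), caps);
    }

    @Test
    public void endToEndStockQuote() {
        // Hypothetical locators mirroring the demo's manual steps.
        driver.findElement(By.id("dowJonesMarket")).click(); // generate a quote
        driver.findElement(By.id("historyTab")).click();     // view past quotes
        driver.findElement(By.id("deleteAllStocks")).click();// clean up
    }

    @After
    public void tearDown() {
        if (driver != null) driver.quit();
    }
}
```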
So I’ve given it the Applitools API key, and I’ve also set a minimum accessibility level, so that as I’m uploading the screenshots, I am not only looking for visual differences but also looking at all the objects and comparing the color contrast against accessibility guidelines. And here, from the generated script from Kobiton, I have a device configuration that I can continue to leverage, again with no manual intervention. Then, to add the visual validation as part of my functional test, all I need to do is insert a single line of code, an eyes check, at each point in the UI where I want to validate. That will give me full-page coverage for every element on every screen that I target for visual validation. So just by adding the Eyes SDK and adding the eyes checks to each screen I want to validate, I will have full-page coverage for those screens. You can see that I’ve loaded the app, I’ve clicked on Dow Jones, I’ve clicked on the stock history, and I validate at each point where I change the state of the test. Then I get the results, which will pass or fail the status of the test in my execution.

Now, to actually run this test, I am going to run it inside of my automation, and the first time that I do that, it’ll create the baseline. So I’m going to take that test generated from Appium and Kobiton and run it on the Kobiton platform. Here, I’m going to start my test, go to my devices, and actually see the test being run inside of the platform. If I launch this device, I can watch it live, with the powerful capability to run a mixed session in Kobiton: I can look at it live while it’s running an automation test. If I come over to my Applitools dashboard, I can see the test case run. The first time I run it, it will create a baseline, and as soon as it completes, we’ll see a status of New. Give it a second... and you can see that the test case has just finished. We have a new test case in Applitools because this is the first time we’ve run that automated test, and you can see the scenario that we executed: from the main screen, to clicking on the Dow Jones market, to clicking stock history, to clicking delete all stock quotes, and then returning back to the index. This creates the baseline, or the expected result, for these screens. The next time I run this as part of my automation, we will use our Visual AI to understand if there are any visual differences and quickly highlight the screens and issues, where multiple team members can then participate in the quality process.

So now what I’m going to do is change the version of the application that I’m testing against, simulating a new feature or a new version of the app, and run that same test again. If I come back over to the Kobiton platform, you’ll see that the automation is again running in place, and we are navigating through the app one more time.
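Put together, the eyes checks described above might look like this sketch, assuming the Applitools Appium Java SDK and the driver from the generated Kobiton script; the tags and locators mirror the demo flow but are hypothetical:

```java
import com.applitools.eyes.appium.Eyes;
import com.applitools.eyes.appium.Target;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;

public class VisualChecks {
    // Layers Applitools eyes.check() calls onto the generated Appium flow.
    static void runWithVisualValidation(AndroidDriver driver) {
        Eyes eyes = new Eyes();
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

        eyes.open(driver, "Stock Quote App", "End-to-end stock quote");
        try {
            eyes.check("Main screen", Target.window());  // full-page capture

            driver.findElement(By.id("dowJonesMarket")).click();
            eyes.check("Stock quote", Target.window());   // validate after each state change

            driver.findElement(By.id("historyTab")).click();
            eyes.check("Stock history", Target.window());

            driver.findElement(By.id("deleteAllStocks")).click();
            eyes.check("After delete", Target.window());

            eyes.close();  // uploads results; throws on unresolved visual differences
        } finally {
            eyes.abortIfNotClosed();  // cleanup if the test errored before close()
        }
    }
}
```

The first run of this test establishes the baseline; subsequent runs are compared against it, which is exactly the flow the demo walks through next.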
20:56 | Martin Kowalewski
And again, once it is complete, we will see if there are any differences within the product, and we will complete our run as such. And if we come back to Kobiton, what I will show you is the actual new integration that we will soon be releasing to market. Here, if we go to our session list in Kobiton, we will see the first run. Because we were establishing our baseline and there were no visual differences, you will see that the test passed: you can see the green eyes icon reflecting the status from Applitools, and we can see the test status in Kobiton. The second time we ran it, we actually see visual differences as part of the solution. Now, the red eyes link means that the test has failed with visual differences. I can click on that link, and it will take me over to the Applitools platform, where we will look at the differences in the test flow; you can see the differences that we highlight in pink. And here, we can see that we actually do have differences. If I take a look at them, the current state of the UI, post version change, is on the right-hand side, and on the left-hand side you can see our baseline; we’ve highlighted everything that is actually different in pink.

And this is where we can begin the collaboration, where other individuals that may not have dev skills or DevOps skills can participate in the quality of the app. As long as they know the intended outcome of the UI, all they need to do is accept this change if it’s intended. If it’s not intended and it’s actually a defect, they click thumbs down and annotate it to say, hey, we have a change in the text, and create that defect inside of Applitools. And if they want to collaborate with a member of the team, they can just mention them and say, hey, please investigate, which allows them to collaborate on the change. That could be a member of the product team, a member of the UI/UX design team, another QA individual, or a developer. And whoever handles it would get an email invitation to drill back in and review the results.

You can also integrate with existing defect-tracking tools within your ecosystem. So if I wanted to create a Jira issue directly for this defect, I can create a Jira ticket right in the solution, preserving all of the context that we have gathered, including the screen, the OS, the branch, the viewport, and the algorithm used. This again makes the process more efficient: I don’t have to download the images or manually enter all that information into Jira. With the direct integration, all that context is preserved, allowing us to more efficiently get the defect into the hands of the developers with all the relevant information for the team.

So this is just a quick taste of what visual validation can do: how we can leverage your existing functional automation to perform intelligent comparison and provide efficient, scalable visual and functional coverage with the least amount of code. And leveraging the power of the Kobiton platform allows you to quickly generate automation, where it may not have existed previously, in a very fast and scalable way. That is all that I had, and I will end the demonstration at this point.
As I come back to our presentation, I just want to say thank you for your time. If you have any questions or would like to dive deeper into how Kobiton and Applitools can work together, you can reach out to me via email or on LinkedIn, and I am happy to support you and answer any questions.
25:47 | Cara Suarez
Okay, great. Thank you so much, Martin, for such an excellent presentation. We actually do have quite a few questions that came in while you were speaking, so I’ll go ahead and kick off this Q&A. Just as a reminder to the audience: you can go to the Q&A tab and type your text-based questions into the Q&A section, and we will read them as we receive them. The first one that came in was: how much time do customers typically save in script creation and maintenance with the use of Visual AI?
26:27 | Martin Kowalewski
That’s a great question, Cara. Typically, what we have seen through the use of visual assertions is that they save, on average, about 60 to 70 percent of the time that it takes to create a test and maintain it over the lifetime that it’s in operation. And then if you’re using the self-healing capabilities that Kobiton provides from a native mobile perspective, we typically see on average about 40 percent of time saved on fixing broken tests. So through the combination of both technologies, we can absolutely increase the efficiency of mobile automation.
27:20 | Cara Suarez
Great, thank you so much. We actually have quite a few questions coming in now. One from Vishal: he asks, what are the security and privacy considerations for visual testing?
27:33 | Martin Kowalewski
So that’s a great question. It really comes down to, I think, two things. One is the data you’re using in the environment that you’re testing against: is there PII data within the screens that you’re capturing? Because we are just taking screenshots of the UI and uploading those to our platform. And then, from a deployment perspective, we have three different models. One is the public cloud, which I showed in the demonstration. Two is a single-tenant, private environment that is dedicated to you as a customer and can be installed in any Microsoft Azure region of your choice, so you can control where the data lives and who has access to it; that is the deployment model most of our enterprise customers use, and it passes all enterprise security requirements. And finally, we also have an on-prem deployment model as well.
28:40 | Cara Suarez
Fantastic. The next question comes in from Carlos Hernandez. He said, I would like to touch on A/B visual testing with Applitools and the AI approach.
28:52 | Martin Kowalewski
Yeah, that’s another great question. When I showed the screenshot of the baseline in the UI, what we ultimately have there is a capability called A/B testing, where you can have multiple variations of the same screen. So no matter when you run that test, and no matter which variation of that screen you hit, we will compare it appropriately against the matching variation. So if you have a scenario with a temporary change, where you want to validate a different look and feel for the UI of a single screen, and you only want to leverage a single test to do that, we can match and visually validate that screen based on the variation that we have in place.
29:49 | Cara Suarez
Great. We only have nine seconds left. Can the baseline be from Figma or Zeplin?
29:56 | Martin Kowalewski
We actually have a plugin where you can upload an image directly from Figma to Applitools to start as the baseline, and we can certainly talk about that approach in more depth as needed. But yes, the answer is yes.
30:13 | Cara Suarez
Okay, great. Thank you so much. And any questions we didn’t get to today, Martin can answer by text. Thank you so much.