Webinar

Applitools: Leveraging AI for Mobile App Testing on Real Devices

Abstract

Watch this on-demand session with Martin Kowalewski, leader of the Global Sales Engineering team at Applitools, as he highlights the critical importance of flawless UI/UX in today’s fast-paced app development landscape. Explore the challenges of delivering seamless experiences across diverse devices and platforms amid rapidly evolving mobile technologies and shifting user expectations. Martin dives into the latest trends in mobile development and testing, showcasing the transformative role of AI in achieving exceptional user experiences.


Video Transcript

Martin Kowalewski:

Good morning, good afternoon, good evening. My name is Martin Kowalewski, I'm the Director of Sales Engineering here at Applitools, and welcome to my session on leveraging AI for mobile app testing on real devices. During today's session we are going to provide an introduction to visual testing, review the different types of visual testing used and the benefits of using visual AI in native mobile automation, and finally we will wrap up with a demonstration showing how we can leverage these techniques to scale your native mobile automation with full-page functional and visual coverage.

So what is visual testing? Visual testing is the process of validating all the visual aspects of an application UI on all platforms, going beyond testing tools like Appium, Espresso, or XCUITest to ensure elements like content, images, and buttons appear correctly and are not inhibiting the functionality or usability of the app. Visual testing catches the bugs functional tests can miss, and visual defects happen to companies of all sizes. Everyone does functional testing in some form or fashion, whether manually, through automation, or some combination of both. Sure, functional testing scripts can validate the size, position, and color scheme of visual elements, but if you do this your test scripts will soon balloon in size due to the additional assertion code you have to write. That assertion code often leads to tests that take more time to write, take more time to maintain, are less reliable, and are prone to failures when changes are made to the underlying implementation.

Within visual testing there are typically two types of approaches: manual and automated. Manual visual testing means comparing two screenshots, one from your known-good baseline image and another from the latest version of your app. For each pair of images you have to invest time to ensure you've caught all issues, and that takes time and effort. In the native mobile arena, that generally starts with checking the visual elements and their function on a single operating system, a single screen orientation, and a single device form factor, and then it continues on to other combinations. That's where a huge amount of the test effort lies: not in the functional testing, but in the inspection of visual elements across combinations of devices, screen orientations, and device form factors. With manual testing, each action has to be done individually on each screen configuration, which has to be scaled across all the combinations you need to support, and there are simply too many combinations for manual testing to be a viable approach. Imagine you need to test your app on two operating systems (Android and iOS), two screen orientations for mobile devices (portrait and landscape), and ten standard mobile device display resolutions covering phones and tablets. If you take the two screen orientations, multiply by the ten device display resolutions, and multiply again by the two operating systems, then for a single unique screen we have 40 configurations to test. And if our app has, say, 50 unique screens to support across those configurations, that equates to 2,000 combinations to cover. This approach is just not efficient: it comes at a huge cost in people time and, more importantly, it slows down the release process for getting new, valuable features to market in a fast, efficient way.
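Spelled out, the arithmetic behind that device matrix is:

2 operating systems × 2 screen orientations × 10 display resolutions = 40 configurations per unique screen
40 configurations × 50 unique screens = 2,000 combinations to cover manually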
So how do we address the challenge that manual visual testing presents? We can leverage automated visual testing. Automated visual testing uses software to automate the process of comparing visual elements across various screen combinations to uncover these visual defects. It piggybacks on your existing functional test scripts running in a tool like Appium, XCUITest, Espresso, or WebdriverIO, and automates the process of taking a screenshot and performing the comparison. Using this approach, we find the following benefits. It drastically increases test coverage, because for every functional test that executes through your application we are also covering the visual aspects of each screen, so you are getting both functional and visual coverage out of a single test run. We also find that the number of assertions you need to write is much smaller, because we can use a single capture assertion to cover both the functional and visual aspects of the screen, which ultimately reduces test code and the overhead of writing new tests to cover new features within the application; it also reduces the amount of test code you have to maintain for that test moving forward. We also find that by executing automated visual testing as part of your strategy you can catch both unexpected and expected bugs and defects before they reach production, because you're running these regression cycles in a very fast and efficient way as part of your feature development. Ultimately, this also means more teammates can participate in application quality, because maintaining tests with the visual approach allows individuals who may not have test-automation skills to approve or reject changes and participate in the maintenance of those screens moving forward.

So what are the different types of visual testing? We typically see two types of approaches in native mobile: a pixel comparison approach or a visual AI approach. First-generation tools, as well as open-source automation frameworks, typically leverage pixel comparison. Pixel comparison is where a bitmap of a screen is captured at various points of a test run and its pixels are compared to a baseline bitmap. These comparison algorithms are extremely simplistic: they essentially iterate through each pair of pixels in the current image versus the baseline, check whether the color hex codes are the same, and if the color codes differ they raise a visual bug. So even if there are only slight differences in the rendering of those pixels, a pixel comparison tool will highlight those differences, even where they are not perceptible to a human and are not a real defect.
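To make that concrete, here is a minimal sketch of the kind of naive comparison described above, in plain Java (the file names are hypothetical). Note how any exact RGB mismatch, even a single anti-aliased pixel, gets reported as a "visual bug":

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

// A naive pixel-comparison checker: walk every pixel pair and flag any
// color mismatch as a visual bug. This is the brittleness being described.
public class NaivePixelDiff {
    public static void main(String[] args) throws Exception {
        BufferedImage baseline = ImageIO.read(new File("baseline.png"));
        BufferedImage latest   = ImageIO.read(new File("latest.png"));

        if (baseline.getWidth() != latest.getWidth()
                || baseline.getHeight() != latest.getHeight()) {
            System.out.println("Visual bug: screenshot dimensions differ");
            return;
        }

        long mismatched = 0;
        for (int y = 0; y < baseline.getHeight(); y++) {
            for (int x = 0; x < baseline.getWidth(); x++) {
                // Exact ARGB equality: a one-bit anti-aliasing difference,
                // invisible to a human, still counts as a failure here.
                if (baseline.getRGB(x, y) != latest.getRGB(x, y)) {
                    mismatched++;
                }
            }
        }
        System.out.println(mismatched == 0
                ? "Pass: screenshots are pixel-identical"
                : "Visual bug: " + mismatched + " pixels differ");
    }
}
```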
When we see this approach taken with open-source tools or other solutions, pixel comparison fails in handling three critical areas. First and foremost, dynamic content. When you have dynamic content inside of your native mobile app that shifts over time (think of things like news, ads, user-submitted content, or recommendations for products I may be interested in or that are on sale) you want to check that everything is laid out with proper alignment and there are no overlaps. Pixel comparison tools can't test for these use cases and will not provide you coverage for that area. So if you have dynamic content in your native mobile app, that is not something a pixel comparison tool can handle, and the typical recommended approach is to ignore the areas of the screen that actually have dynamic content. Beyond that, in a native mobile app on every single device you have to deal with very dynamic data at the top of nearly every screen: things like network strength, time, battery level, and more. So for every screen where you take a screenshot, you're going to have dynamic data, you're going to have to ignore every area showing network strength, time, and battery level, and you'll have to maintain those ignored regions over time. And even when your app doesn't change, it can confuse pixel comparison tools: if your baseline and test screenshots were captured on different devices with different display settings, say a different zoom level on one device versus the other, that can turn nearly the entire page into a false positive.

Alternatively, in these situations where pixel comparison may not meet the needs of your dynamic native app, we can use AI for automated visual testing. Like pixel-based comparison tools, AI-powered automated visual testing takes page snapshots as your functional tests run, but unlike pixel comparison, AI-powered tools use algorithms rather than raw pixel equality to determine when errors have occurred. So unlike snapshot testers, AI-powered automated visual tools do not need special environments that remain static to ensure accuracy. AI-powered tools like Applitools allow different levels of smart comparison and have the ability to cover both static and dynamic content. Here are a few use cases that AI-powered visual testing covers that pixel comparison doesn't (this may vary depending on the type of application you provide): validating cross-browser on a mobile platform, for example Chrome and Safari, where cross-environment testing with pixel comparison simply cannot cope due to the rendering differences between those mobile web browsers; scenarios with account balances that change from run to run; mobile device status bars; news, ads, and user-submitted content; suggested content based on your profile; notification icons; content shifts; mouse hovers; browser upgrades; and so on. All of these typical scenarios that you face in your day-to-day environment are scenarios that pixel comparison will not support or scale to meet.

Now, for a simple example of pixel comparison versus the power of AI-powered visual testing, think about a simple e-commerce native app that can display different products through filtering or search, or even show the same products sorted differently for different users. If I measure this with pixel comparison, the test is going to fail because the products aren't in the exact same spot between the two images. So ultimately you have two choices: you can either apply ignore regions to all of the areas that have dynamic content, or look for a different approach. An AI-powered approach to automated visual testing gives you complete coverage for less work. An AI-powered visual test can check a wide range of visual elements across a wide range of OS, browser (for mobile web), screen orientation, and resolution combinations, because the algorithms are intelligent enough to handle both static and dynamic data and to disregard inherent rendering differences that are not perceptible to a human. That is where an AI approach with Applitools provides increased value versus traditional open-source or legacy visual testing solutions.
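As a rough illustration of what "different levels of smart comparison" looks like in practice, here is a hypothetical sketch using the Applitools Appium Java SDK's fluent API. The check names and region coordinates are invented, and the Eyes instance is assumed to be already opened against an Appium driver:

```java
import com.applitools.eyes.Region;
import com.applitools.eyes.appium.Eyes;
import com.applitools.eyes.appium.Target;

public class SmartComparisonSketch {
    // Assumes eyes.open(driver, appName, testName) was already called.
    static void checkDynamicScreens(Eyes eyes) {
        // Strict (the default): visual-AI comparison that ignores rendering
        // noise a human would never notice but flags real visual changes.
        eyes.check("Product list", Target.window().fully());

        // Layout: validates structure and alignment while tolerating text
        // and images that legitimately change run to run (ads, news,
        // per-user recommendations).
        eyes.check("Recommendations", Target.window().layout());

        // Or ignore only the region that is always dynamic (e.g. a ticker),
        // instead of giving up coverage on the whole screen.
        eyes.check("Home", Target.window().ignore(new Region(0, 0, 1080, 80)));
    }
}
```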
So now that we've talked a little bit about what visual testing is, its benefits, and how to leverage visual AI, I'm going to proceed to a demo to show you what this looks like in action. What I'm going to do is take a session from a device inside of Kobiton. I'm going to launch a device inside of my Kobiton platform and install my Android application, version 102, which launches the application. Now I'm going to navigate through my application manually. I'm going to click on the Dow Jones market, which will generate a quote for the Visa symbol. You can see that it successfully transitioned to the stock quote tab: I can see my image, I can see the text, I can see my symbol. If I go over to the history, I can see any previous quotes I've generated, and you can see that I have two quotes generated recently. Now I'm going to delete those stocks to finish my end-to-end scenario by clicking on the delete-all-stocks button, and that completes my end-to-end scenario. This created a workflow from a manual session.

Now I'm going to terminate, or exit, this session inside of Kobiton, and you can see that it is finishing up with the upload. We could look at our device logs here to triage using the power of the Kobiton platform, but what I want to do is take that manual session and convert it to an automated script that I can leverage using Appium automation. By simply converting this test case inside the Kobiton platform and viewing it, you can see the test steps I executed: installing the app, touching the Dow Jones element, receiving my stock quote, clicking on the history, and deleting all of the stock quotes. This enables me to export the test case out into automation. Simply by generating an Appium script, I can take that manual script and reuse it, either locally or within my automation, to continually exercise this end-to-end test case, and we will soon add both functional and visual coverage to it. Here, inside of Kobiton, we select the automation language and runner we want to use (in my case, Java and JUnit), and then all I need to do is generate the script, review the project, and open it up; I can then run this test on a go-forward basis without having to write a single line of code. The power here is that someone who may not have developer skills took a manual session and converted it to automation that I can hand off to my automation team to run on a scheduled basis and provide full coverage moving forward.

Now, for simplicity's sake, I'm going to open that code from a previously exported project, and I'm going to add our visual validation from Applitools to the existing Appium script. I do that by adding the Applitools Appium SDK to the existing project dependencies, and then adding the minimum information required to communicate with the Applitools platform: the Applitools API key. I've also set a minimum accessibility level, so that as I'm uploading the screenshots I am not only looking for visual differences but also examining all the objects and comparing the color contrast against accessibility guidelines.
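For reference, a minimal sketch of what that setup typically looks like with the Applitools Appium Java SDK. The Maven artifact name and the specific accessibility settings shown are illustrative assumptions; check the Applitools documentation for your SDK version:

```java
// Project dependency (illustrative; artifact names vary by SDK generation):
// <dependency>
//   <groupId>com.applitools</groupId>
//   <artifactId>eyes-appium-java5</artifactId>
//   <version><!-- latest release --></version>
// </dependency>

import com.applitools.eyes.AccessibilityGuidelinesVersion;
import com.applitools.eyes.AccessibilityLevel;
import com.applitools.eyes.AccessibilitySettings;
import com.applitools.eyes.appium.Eyes;
import com.applitools.eyes.config.Configuration;

public class EyesSetup {
    public static Eyes createEyes() {
        Eyes eyes = new Eyes();

        // Minimum information needed to talk to the Applitools platform.
        eyes.setApiKey(System.getenv("APPLITOOLS_API_KEY"));

        // Optionally also validate color contrast against accessibility
        // guidelines on every uploaded screenshot, as in the demo
        // (AA / WCAG 2.1 here is an assumed example level).
        Configuration config = new Configuration();
        config.setAccessibilityValidation(new AccessibilitySettings(
                AccessibilityLevel.AA,
                AccessibilityGuidelinesVersion.WCAG_2_1));
        eyes.setConfiguration(config);
        return eyes;
    }
}
```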
From the script generated by Kobiton, I also have a device configuration that I can continue to leverage, again with no manual intervention. Then, to add the visual validation as part of my functional test, all I need to do is insert a single line of code, eyes.check, at each point in the UI where I want to validate, and that gives me full-page coverage for every element on every screen that I target. So just by adding the Eyes SDK and adding eyes.check calls on each screen I want to validate, I will have full-page coverage for those screens. You can see that I've loaded the app, clicked on Dow Jones, clicked on the stock history, and validated at each point where the state of the test changes; at the end I get the results, which will pass or fail the status of my execution.
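Putting the pieces together, the instrumented test conceptually looks something like the following hypothetical sketch. The locators, check names, and the EyesSetup helper from the previous sketch are stand-ins for whatever the Kobiton-generated script actually contains:

```java
import com.applitools.eyes.appium.Eyes;
import com.applitools.eyes.appium.Target;
import io.appium.java_client.AppiumBy;
import io.appium.java_client.android.AndroidDriver;
import org.junit.Test;

public class StockQuoteVisualTest {
    // Created elsewhere (e.g. in @Before) from the Kobiton-generated
    // capabilities; omitted here to keep the sketch focused.
    AndroidDriver driver;
    Eyes eyes = EyesSetup.createEyes();  // see the setup sketch above

    @Test
    public void endToEndWithVisualCoverage() {
        eyes.open(driver, "Stock Quote App", "End-to-end quote flow");
        try {
            // One eyes.check(...) at each UI state change gives full-page
            // functional and visual coverage from a single assertion.
            eyes.check("Main screen", Target.window().fully());

            driver.findElement(AppiumBy.accessibilityId("Dow Jones")).click();
            eyes.check("Stock quote", Target.window().fully());

            driver.findElement(AppiumBy.accessibilityId("History")).click();
            eyes.check("Stock history", Target.window().fully());

            driver.findElement(AppiumBy.accessibilityId("Delete all stocks")).click();
            eyes.check("After delete", Target.window().fully());

            eyes.close();  // passes or fails the run on the visual results
        } finally {
            eyes.abortIfNotClosed();  // clean up if the test threw early
        }
    }
}
```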

Now, to actually run this test, I'm going to run it inside of my automation, and the first time I do that it will create the baseline. So I'm going to take that test, generated from Appium and Kobiton, and run it on the Kobiton platform. I start my test, go to my devices, and I can see the test being run inside the platform. If I launch this device, I can watch it over time with the powerful Kobiton capability to run a mixed session, so I can look at it live while it's running an automation test. If I come over to my Applitools dashboard, I can see the test case run; the first time it runs it will create a baseline, and as soon as it completes we'll see a status of New. Give that a second, and you can see that the test case has just finished. We have a new test case in Applitools, because this is the first time we've run that automated test, and we can see the scenario that we executed: from the main screen, to clicking on the Dow Jones market, to clicking stock history, to clicking delete all stock quotes, and then returning back to the index. This creates the baseline, or the expected result, for these screens. The next time I run this as part of my automation, we will use our visual AI to understand whether there are any visual differences and quickly highlight the screens and issues, so multiple team members can participate in the quality process.

Now I'm just going to change the version of the application I'm testing against, simulating a new feature or a new version of the app, and run that same test again. If I come back over to the Kobiton platform, you'll see that the automation is again running in place; we are navigating through the app one more time, and once it is complete we will see whether there are any differences within the product.

Coming back to Kobiton, I will show you the new integration that we will soon be releasing to market. If we go to our session list in Kobiton, it shows the first run; because there were no visual differences and we were establishing our baseline, you will see that the test passed, indicated by the green Eyes status from Applitools alongside the test status in Kobiton. The second time we ran it, we actually found visual differences. Just by clicking on the red Eyes link, meaning the test failed with visual differences, I'm taken over to the Applitools platform, where we can look at the differences in the test flow; anything different is highlighted in pink. Here we can see that we do have differences: the current state of the UI after the version change is on the right-hand side, our baseline is on the left-hand side, and everything that is different is highlighted in pink.

This is where the collaboration begins, and other individuals who may not have development or DevOps skills can participate in the quality of the app. As long as they know the intended outcome of the UI, all they need to do is accept the change if it's intended; if it's not intended and it's actually a defect, they click thumbs-down and annotate it to say, hey, we have a change in text, creating that defect inside of Applitools. If they want to collaborate with a member of the team, they can simply mention them and say, please investigate, which allows them to collaborate on the change. That could be a member of the product team, a member of the UI/UX design team, another QA individual, or a developer; whoever picks it up gets an email invite to drill back in and review the result. You can also integrate with the existing defect-tracking tools within your ecosystem: if I want to create a Jira issue for this defect, I can create it directly in the solution, preserving all of the context we have gathered, including the screen, the OS, the branch, the viewport, and the algorithm used. This again makes the process more efficient: I don't have to download the images or manually enter all that information into Jira. The direct integration preserves all that context and allows us to more efficiently get the defect, with all the relevant information, into the hands of the developers.

So this is just a quick taste of what visual validation can do and how we can leverage your existing functional automation to perform intelligent comparison and provide efficient, scalable visual and functional coverage with the least amount of code. Leveraging the power of the Kobiton platform allows you to quickly generate automation where it may not have existed previously, in a very fast and scalable way. That is all I had, so I will end the demonstration at this point. As I come back to our presentation, I just want to say thank you for your time. If you have any questions or would like to deep-dive into how Kobiton and Applitools can work together, you can reach out to me via email or on LinkedIn, and I am happy to support you and answer any questions.

Cara:

Okay, great. Thank you so much, Martin, for such an excellent presentation. We actually have quite a few questions that came in while you were speaking, so I'll go ahead and kick off this Q&A. Just as a reminder to the audience: you can go to the Q&A tab and type your text-based questions into the Q&A section, and we will read them as we receive them. The first one that came in was: how much time do customers typically save in script creation and maintenance with the use of visual AI?

Martin Kowalewski:

That's a great question, Cara. Typically, what we have seen through the use of visual assertions is that they save on average about 60 to 70% of the time it takes to create a test and maintain it over the lifetime it's in operation. And if you're using the self-healing capabilities that Kobiton provides from a native mobile perspective, we typically see on average about 40% of time saved from just having to fix broken tests. So through the combination of both technologies, we can absolutely increase efficiency from a mobile automation perspective.

Cara:

Great, thank you so much. We have quite a few questions coming in now. One from Vishal says: what are the security and privacy considerations for visual testing?

Martin Kowalewski:

That's a great question. It really comes down to two things. One is the data you're using in the environment you're testing against: is there PII data within the screens you're capturing? Because we are taking screenshots of the UI and uploading those to our platform. And then from a deployment perspective, we have three different models. One is a public cloud, which I showed in the demonstration. Two, we have a single-tenant environment that is dedicated to you as a consumer; it can be installed in any Microsoft Azure region of your choice, so you can control where the data lives and who has access to it, and that is the deployment model most of our enterprise customers use, passing all enterprise security mechanisms. And finally, we also have an on-prem deployment model as well.

Cara:

Fantastic. The next question comes in from Carlos Hernandez. He said: I would like to touch on A/B visual testing with Applitools and the AI approach.

Martin Kowalewski:

Yeah, that's another great question. When I showed the screenshot of the baseline in the UI, what we ultimately do is offer a capability called A/B testing, where you can have multiple variations of the same screen, so that whenever you run that test, no matter which variation of the screen you hit, we will compare it appropriately based on the screen you're testing. So if you have a scenario with a temporary change, where you want to validate a different look and feel for the UI of a single screen and only want to use a single test to do it, we can match and visually validate that screen based on the variation we have in place.

Cara:

Great. We only have nine seconds left. Can the baseline be from Figma or Zeplin?

Martin Kowalewski:

We actually have a plug-in where you can upload that image directly from Figma to Applitools to start as the baseline, and we can certainly talk about that approach more in depth as needed. But yes, the answer is yes.

Cara:

Okay, great. Thank you so much, and any questions we didn't get to today, Martin can answer by text. Thank you so much.
