A.I. in QA: More Than Just a Bot

Reading time: 36 minutes

Artificial Intelligence in Quality Assurance is coming. Depending on where you look, it's already here. Not as a replacement for your role, but as new technology for you to leverage to your advantage. Too often we try to place AI in one of two boxes: full displacement or too immature. But what about the nuances of complementary AI that enable you to perform better, faster, and more accurately? As the evolution of Artificial Intelligence begins to outpace our ability to understand it, how should we be thinking about implementing AI/ML into our QA strategy from a holistic perspective, and not simply as a tool for optimizing and executing tests?

Full transcript available below:

Drew Williams:            
Hello everyone. My name is Drew Williams. I am the Marketing Director for Kobiton. Thank you for joining us today on this webinar entitled Artificial Intelligence in Testing. We are joined today by the Kobiton CTO, Frank Moyer, who's going to be talking a bit about the different ways you can leverage AI and machine learning in QA, both holistically as well as on a granular level, when we talk about test execution as well as optimization. So with that I will get ready to turn it over to Frank, but I just wanted to let everyone know that during the course of the webinar, please feel free to submit questions through the question submission box. We will do our best to answer those in a timely fashion. If not, we'll get to them at the end. Otherwise, this will be a fireside-chat-style discussion. So please enjoy, and with that I'll hand it over to Frank here.

Frank Moyer:               
Thank you very much, Drew. So over the past year I've been keeping an ongoing list of artificial intelligence and machine learning activity in the QA space, and this is meant to share what I've learned through the research I've done over that year. There are a lot of companies that are using AI in the testing space and it's becoming more and more prevalent. There are companies that have emerged solely for the purpose of bringing artificial intelligence techniques to quality assurance. And there is a lot of confusion about what all is out there, because we hear these grandiose terms like machine learning, artificial intelligence, deep neural networks. I'm hoping that over the next hour I can demystify some of those phrases and really bring it down to common terms that we can all understand, along with some of the challenges that we have as a testing community in understanding how to best apply some of these techniques to our daily jobs.

Drew Williams:            
Awesome. So with that I will go ahead and ask the very first question, which, if you're not super familiar with the space yet, I'm sure is at the forefront of everyone's mind. Frank, in your opinion, what is the difference between AI and machine learning? And does it actually matter in testing?

Frank Moyer:               
Yeah. There are two terms I wanted to level set on, and many people may already know the differentiation between machine learning and artificial intelligence, but I wanted to make sure we had a level set on that. So machine learning is a subset of artificial intelligence. I've gotten into a little bit of trouble with some of the analysts over the definition here, because really, AI is any computer program that does something smart. So if you write good code, by that definition it's AI. But I think more broadly, artificial intelligence is making a computer do things that are beneficial and relieve some of the tasks that a human would do. And deep learning is a subset of machine learning, which allows computers to learn through human interaction or other training to get as smart as humans can be at certain tasks.

Frank Moyer:               
The other phrase I'll use is self-healing, which you'll see a lot in this deck, because a lot of companies in the QA space are now using this phrase. In the early days, Testim was probably one of the first to use it in the QA world. It means that as you go from release to release, where the selectors for an automation script may change, or the DOM, the Document Object Model, may change for a web interface, the test scripts can fix themselves. So you don't have to go in and fix them yourself. We hear about flaky tests as a big problem in the testing community, and self-healing is about being able to eliminate flaky tests through the use of artificial intelligence and machine learning.
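To make self-healing concrete, here's a minimal sketch of the fallback idea, assuming a Selenium-style driver. Real products such as Testim rank candidate locators with trained models; this illustration simply walks an ordered list of selectors captured at record time.

```python
# Minimal sketch of a self-healing locator: store several redundant ways
# to find the same element at record time, so one selector changing
# between releases doesn't break the test.
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

def find_with_healing(driver, locators):
    """locators: ordered list of (strategy, value) pairs, most specific first."""
    for i, (strategy, value) in enumerate(locators):
        try:
            element = driver.find_element(strategy, value)
            if i > 0:
                print(f"Healed: primary selector failed, matched {strategy}={value!r}")
            return element
        except NoSuchElementException:
            continue  # this selector broke between releases; try the next one
    raise NoSuchElementException(f"No locator matched: {locators}")

# Example usage (assumes a live WebDriver session named `driver`):
# login_button = find_with_healing(driver, [
#     (By.ID, "login-btn"),                           # fast, but IDs get renamed
#     (By.CSS_SELECTOR, "form button[type=submit]"),
#     (By.XPATH, "//button[contains(text(), 'Log in')]"),
# ])
```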

Drew Williams:            
Awesome. And based on that broad definition, what does artificial intelligence in testing mean?

Frank Moyer:               
Yeah. So let me go towards the end of this deck, because I have some of those definitions there. For artificial intelligence and machine learning, I give some examples here of the different ways those are used today in testing, and I'll go through a few of these. Before I go into it, there's an important point here: the difference between the model and the data. There's a very well known quote from the director of research at Google, Peter Norvig: Google doesn't have better algorithms, it just has more data. So data is a critical part of any machine learning. Without the data, you're starting from zero. But my advice is to get a good-enough model before collecting training data, because otherwise you end up collecting the data in a format that's not usable to train your machine learning model.

Frank Moyer:               
The data can be collected in a variety of ways. One is crawling to collect data and using that to train. I've heard the CEO of test.ai talk about how they wrote a crawler that crawled 30,000 mobile apps to collect training data. That's a very innovative way. There's also a phrase called synthetic data, which is a way to take some bootstrapped data and then augment it to generate new data, in order to bolster the size of the dataset for training purposes. So, diving into some examples here. Anomaly detection is a form of artificial intelligence. Anomaly detection looks at historical trends, looks at averages and standard deviation across those trends, to identify an anomaly in a current execution. So in this example, let's say it's performance: along the horizontal axis you have the different test steps.

Frank Moyer:               
So launch, enter your user ID, login, search, select details, save. And the measurement in this case is performance: the response time between when the user interacts with the mobile app and when it responds. The blue line indicates the average, and those orange areas are plus and minus one standard deviation. Typically that's the method used for anomaly detection: looking at what is expected in terms of standard deviation around the mean. In the example here, that red area shows that the select step was well outside the standard deviation, and that's a good indicator that this is something that should be looked into. In testing, it's as much about the goals you're shooting for as about knowing how the system behaves. The system behavior changes from release to release.

Frank Moyer:               
So anomaly detection is a great way to look at system behavior between releases, and that can be in terms of performance, CPU utilization, memory utilization, and a variety of other factors. The other way, especially from a mobile perspective, is to look at cross-device-type behavior. We've got 350 devices in our cloud for testing on real physical devices. So you could run an automated test across 20 of those and see, do any of those behave anomalously? Do any of them, for example, respond at greater than two standard deviations from the mean? Those are ways to look for anomalies and look for problems. Now, the imperative here is that you need to have enough examples, many examples of the same attribute, for the comparison. You need to have that data, and that goes back a little bit to the earlier point about collecting data that's usable for comparison purposes.
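As a rough illustration of the technique Frank describes (illustrative numbers, not Kobiton's implementation), here's the mean-plus-or-minus-two-standard-deviations check applied to per-step response times:

```python
import statistics

# Historical response times (ms) per test step across past runs; the
# current run is flagged wherever it falls outside mean +/- 2 stdev.
history = {
    "launch": [820, 790, 805, 815, 798],
    "login":  [310, 295, 305, 300, 312],
    "search": [450, 470, 455, 462, 448],
    "select": [120, 130, 125, 128, 122],
}
current_run = {"launch": 810, "login": 304, "search": 459, "select": 290}

for step, samples in history.items():
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    observed = current_run[step]
    if abs(observed - mean) > 2 * stdev:
        print(f"ANOMALY in '{step}': {observed}ms vs mean {mean:.0f}ms "
              f"(2 stdev = {2 * stdev:.0f}ms)")
```

Running this flags only the `select` step, mirroring the red area on the slide.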

Frank Moyer:               
The next one is predictive analytics. Many people have probably heard this term: using a historical trend, how do I predict what the future will look like? It's often, although not always, used on time series data. So you're looking at how my tests performed historically, and based on this, here is how I can expect them to perform going forward. That could be in terms of number of defects, or you can use it for capacity planning for your QA team. And typically, the further out in the future you go, the less reliable your prediction will be. So with good data, you're good up to a point. Now, the imperative here is using historical data, if that's what you decide to use, for comparison and extrapolation. An example is the likelihood of defects based on the area of the system, so order submission versus payment processing.
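A minimal sketch of the idea, using made-up defect counts: fit a linear trend to past releases and extrapolate forward, keeping in mind that the further out you predict, the less reliable it gets.

```python
import numpy as np

# Defects found per release (illustrative data); fit a linear trend and
# extrapolate two releases ahead. A real predictive model would also
# account for seasonality and widen its uncertainty further out.
releases = np.arange(1, 9)                       # releases 1..8
defects = np.array([42, 38, 35, 36, 30, 28, 27, 24])

slope, intercept = np.polyfit(releases, defects, 1)
for future in (9, 10):
    print(f"Release {future}: ~{slope * future + intercept:.0f} defects expected")
```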

Frank Moyer:               
A lot of the very effective prediction techniques actually combine anomaly detection and predictive analytics, because system complexity increases over time, but certain areas may become more or less stable at a faster pace than other areas. So you can use that combination for better intelligence. Probably one of the most popular techniques emerging today is image comparison: being able to compare two images of an application's state at a test step. There are a variety of techniques, and I'm going to talk a little bit about some of the companies that are doing this, and some that are doing it really well. For image comparison, ten years ago the most common approaches were pixel matching and what's called template matching.

Frank Moyer:               
Pixel matching is a pixel-by-pixel comparison between two or more images. It's great because it's able to identify changes that a human eye can't detect; as a tester verifying visually, you might just skim over something. But it's very limited in its use, because any single pixel change will be highlighted as an error. Over the past five years, there's been tremendous growth in the innovation around convolutional neural networks. Part of that is because of our increased ability to do processing on GPUs, and then Google launched something called a Tensor Processing Unit, which is specifically designed for tensors, for machine learning models. On training it performs anywhere between 50 and 500 times faster than a CPU and about 10 times faster than a GPU. Google also open-sourced their internal machine learning library, TensorFlow, and that's really led to a lot of innovation as well.
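Pixel matching is simple enough to sketch in a few lines. This assumes two same-sized screenshots on disk; any single differing pixel shows up in the count, which is exactly its limitation:

```python
import numpy as np
from PIL import Image

# Pixel matching: exact and programmatic, but any single changed pixel
# (anti-aliasing, a clock widget) is reported as a difference.
baseline = np.asarray(Image.open("baseline.png").convert("RGB"), dtype=np.int16)
candidate = np.asarray(Image.open("candidate.png").convert("RGB"), dtype=np.int16)

diff_mask = np.any(baseline != candidate, axis=-1)  # True where any channel differs
changed = diff_mask.sum()
print(f"{changed} of {diff_mask.size} pixels differ "
      f"({100 * changed / diff_mask.size:.2f}%)")
```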

Frank Moyer:               
The benefit of convolutional neural networks is that they're able to identify differences between images that are more semantic in nature. So instead of looking at pixels, it's looking at a button, and it's able to say these two elements are both buttons and they differ in color by some value. So you're able to get a more fine-grained analysis of what is actually happening in the image itself, or in the image difference. It's able to learn through training what two items on two images are the same and what's different. But on the other side, and I'm going to get into this on the next slide, it's subject to false positives and false negatives.
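One common way to approximate this semantic comparison, offered here only as a sketch and not as any vendor's shipping approach, is to embed each screenshot with a pretrained CNN and compare the embeddings; visually similar screens land close together even when individual pixels differ:

```python
import numpy as np
import tensorflow as tf

# Semantic comparison via a pretrained CNN: embed each screenshot and
# compare embeddings, so small pixel-level shifts don't register as
# differences the way they do with pixel matching.
model = tf.keras.applications.MobileNetV2(
    include_top=False, pooling="avg", input_shape=(224, 224, 3))

def embed(path):
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    arr = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))
    return model.predict(arr, verbose=0)[0]

a, b = embed("baseline.png"), embed("candidate.png")
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"Semantic similarity: {cosine:.3f}")  # near 1.0 => same screen
```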

Frank Moyer:               
So with pixel matching, it's exact; it's very programmatic. With machine learning, when you start to look at convolutional neural networks that are based on training, it's more subject to error. So, and I'm going to go into these terms, we have developed what's called a Taes model, which looks at the comparison between a machine learning prediction and what actually happens, for testing. Every domain that uses machine learning has different goals. For test, this Taes model really outlines where we need to focus as testers and what matters. The upper left and the bottom right are pretty straightforward. A predicted defect that is an actual defect, that's what we're trying to find. Those are true positives, and those are our targets. That's what we as testers want to find.

Frank Moyer:               
On the bottom right are items that are not defects and are not reported as defects. Those are true negatives, and we want to sustain that; we want to keep that rate where it is. The two areas that are problematic in the machine learning and QA world are, first, the upper right, where something is predicted as a defect but it's actually not a defect. And that's annoying, right? If it happens once while you're training the machine learning and it doesn't happen again, [inaudible 00:00:15:51]. But when it starts to continually be reported as a false positive, as users we lose trust in the system. And then the one that is the scariest for a tester is the false negative, where a defect was not reported but there was actually a problem. Those are the ones we want to eliminate; those are the ones that cannot escape our testing flow.
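The four quadrants Frank walks through form a classic confusion matrix. A quick illustration, with hypothetical counts, of why false negatives deserve their own metric (recall) alongside precision:

```python
# The four quadrants of the model Frank describes, as a confusion matrix.
# For testing, recall matters most: a false negative is an escaped defect.
tp = 45   # predicted defect, actual defect  (our targets)
tn = 900  # predicted clean,  actual clean   (sustain this)
fp = 30   # predicted defect, actual clean   (annoying, erodes trust)
fn = 5    # predicted clean,  actual defect  (the scary one)

precision = tp / (tp + fp)  # how trustworthy are the reported defects?
recall = tp / (tp + fn)     # what fraction of real defects did we catch?
print(f"precision={precision:.2f} recall={recall:.2f}")
# precision=0.60 recall=0.90 -- for QA, tune toward high recall
```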

Frank Moyer:               
So when we're looking at machine learning products for testing, really focusing on the Taes model is important, knowing that we cannot afford escaped defects. We're trusting that the product is going to identify errors, and if it doesn't do its job, we lose trust and it damages our user experience. When we look at image comparison in QA, the most common uses are looking at regression of UI changes. So from release to release, how has the UI changed? And even from device type to device type, how is the UI different? You can expect that every device with a 4:3 aspect ratio will render the same user interface, and if that doesn't happen, that's something you may want to be alerted to. Those are just some examples of the ways that current companies in the QA and machine learning or artificial intelligence space are using these technologies in the products they bring to market.

Drew Williams:            
Great. And can you talk a little bit more specifically about some of those players in the AI testing space?

Frank Moyer:               
Yes. Can you get to the beginning of the deck for me real quickly? Sure. So I'm going to go into a few of the companies, and I split them out between web and mobile, because there is a difference, and I'll talk about that in a little bit. I'm going to go company by company, and what I've done is summarize what their strengths are, and then I looked back over the past few years at how they've gotten to where they are today. What I find really interesting are the companies that actually emerged with the sole purpose of bringing machine learning and QA together, or artificial intelligence and QA together. That's one category. The second category is the players who are incumbent experts in QA and are now adopting artificial intelligence techniques to make a better user experience.

Frank Moyer:               
So Testim is one of the first that came to market. They have the most patents in this space, filed back in 2012 to 2014, and they provide both script-based and scriptless test authoring capability. Their user interface is very intuitive for recording tests as well as adding assertions without any scripts. They've recently added the ability to strengthen their capability through scripting, but the foundation of it was purely scriptless for web. You can bring up a browser, record a test in it, capture all the results, and write assertions without writing any code. And like I mentioned earlier, they're the ones who coined the term for this ability to go from release to release through self-healing. They use machine learning, deep learning, and annotations from the tester to improve their training.

Frank Moyer:               
So they've got that capability. They are focused on the web space, although they do have the ability to run tests on mobile web, not native mobile applications. If you look really quickly at their history over the past three years, they were very much focused on trusting the tests, on self-healing using machine learning, and on speeding up authoring using machine learning. And then when you get to today, and I think this is a common thing: in 2017 and 2018, a lot of companies were touting machine learning and artificial intelligence capability in QA. What I've noticed is that's starting to be put behind the scenes. It's not used as a way to market their product; instead, they're becoming more mainstream in the functionality they're able to deliver by using artificial intelligence to deliver it. And this is a good example: there's nothing on their page right now that talks about machine learning or artificial intelligence.

Frank Moyer:               
The next one, Mabl, is also focused on the web space. They have a bot. A bot is a program that navigates links, through a depth-first search, to identify all the avenues through the application. So you can point it at a URL and it will explore that URL as well as it can without any guidance. From that, the user or the tester can author their tests without any scripting. So it combines the experience of the bot with a user interface to build scriptless tests. It's very similar to Testim in that it also has a very intuitive UI that can record and add assertions without any scripts.

Frank Moyer:               
This is completely scriptless, unlike Testim, which has added the ability to bolster its capability with scripts. It also has self-healing capability, so when developers change their Document Object Model or their selectors, these tools are smart enough to know what's changed so that it doesn't break the test. Both Testim and Mabl do it, and I'll talk about a few others that do it after this. Mabl came to market in early 2018. Early on, when I heard the founder on a podcast, the phrase he used was, "We use a combination of artificial intelligence and statistical techniques to improve the job of the tester." He wasn't trying to get into a lot of fancy terms; he was just talking about statistical techniques, and that's what they've done. They've really honed in on making a great user experience for the tester on web applications to rapidly build and test their applications.

Frank Moyer:               
Along the same lines as Mabl, a company called Functionize is also focused on the web: scriptless test authoring, an intuitive UI to record and add assertions, and self-healing. There's a lot of focus on CI/CD, so the ability to initiate the scriptless tests without doing it manually; you can kick them off through your CI/CD pipeline. And the other artificial-intelligence-related technology that Functionize has just launched is a natural language processing, Gherkin-style way to describe tests. As a tester, instead of composing tests through a script, or even selecting from dropdowns of different actions and elements, you're able to just freely type, and it's smart enough to learn from what you're typing what your intent is. So "click on the submit button" is all you need to type, and it knows what that means.
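As a toy illustration of the idea, far simpler than whatever Functionize actually ships, a free-text step can be mapped to an action and a target:

```python
import re

# Toy illustration of natural-language test authoring: map a free-text
# step like "click on the submit button" to (action, arguments). Real
# engines use trained language models, not a handful of regexes.
PATTERNS = [
    (re.compile(r"(?:click|tap) (?:on )?the (\w+) button", re.I), "click"),
    (re.compile(r"(?:type|enter) (\S+) in(?:to)? the (\w+)", re.I), "type"),
]

def parse_step(text):
    for pattern, action in PATTERNS:
        match = pattern.search(text)
        if match:
            return action, match.groups()
    return None  # unrecognized step

print(parse_step("Click on the submit button"))     # ('click', ('submit',))
print(parse_step("Enter frank into the username"))  # ('type', ('frank', 'username'))
```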

Frank Moyer:               
Looking back at where they've come from: they were started in 2016, and again, when you get into 2017, there's much more discussion around artificial intelligence. Where they are today still speaks to that: deep learning and machine learning to improve the different parts of the testing process, and they mention natural language processing, a lot of which relates to the test authoring component. Applitools, which I referenced earlier, is one of the leaders in visual comparison, or visual assertions. They've been around for quite a few years and they've focused on making it easy for testers to compare different versions of their app and identify differences. They have a user interface that allows the tester to annotate different parts of the application to control which ones to compare and what type of comparison should be applied to each part of the app.

Frank Moyer:               
So if you've got dynamic content, you can go in there and say, "This section is dynamic content, never compare it." You can go in and say, "This is text, I want you to do a textual comparison." So it's got very robust learning, and it reduces the number of false positives raised by the tests themselves. They have an API that's used by companies like Perfecto, and I think they do a lot of their business through that API, because it's easy to use and the technology that Applitools built was really hard to build three or four years ago. It's a lot easier today, but companies like Perfecto that don't want to build that capability, or don't have the skills to build it, would go to someone like Applitools to use it.

Frank Moyer:               
So Applitools, going back to 2015 and 2016, was very much focused on visual testing, and then when you get to 2017, again, we're looking at AI-powered visual testing. In 2018, AI-powered visual testing, and they still talk about the visual AI component in their marketing collateral today. I'm going to speed up a little bit given the time. TestCraft touts machine learning or artificial intelligence, but not very much; I think they just added it recently. Their focus is on drag-and-drop authoring to create and maintain test cases. They claim to do some self-healing now that they've added that onto their existing capability. They didn't come into the market focused on AI; they came in with a drag-and-drop authoring capability, and they've been adding machine learning or artificial intelligence most recently.

Frank Moyer:               
So for the past few years they've been very focused on codeless, drag-and-drop test authoring, and "it doesn't get any easier than this" is their current tagline. Although if you're just typing in natural language terms, maybe that's a little bit easier. Katalon offers both scriptless and script-based test authoring. You can record a test through Katalon Studio and it generates a script, and it has robust test case management, plus CI/CD integration, and just recently they added self-healing capability.

Frank Moyer:               
So you'll see a lot of that. They didn't come into the testing space with a focus on artificial intelligence, but it's been added on recently to make their solution better for their users. That's all on the web side. I'm going to jump into the three companies on the mobile side and talk about where they are, and I'll explain why it's different. On the web side it's a good story; on the mobile side it's a hard story, because mobile is a lot harder, and I'll explain why. If you look at, for example, test.ai: this company has raised $10 million from Google Ventures. They were founded about three years ago, and they focused originally on bot test execution, the ability to run a bot through a mobile app and identify the…

Frank Moyer:               
Jason Arbon, the CEO, I've seen him present a few times. He talks about knowing the difference between a shopping cart on a Pixel 3 versus a shopping cart on a Samsung 8. That was really where they focused early on. But it looks like they've backed off; their website is just a landing page right now without much functionality. From what I understand, they're focused a lot more on testing web versus mobile, so they've gone broader than just mobile.

Frank Moyer:               
Moquality, I've known Shauvik, the CEO there, for about eight years. I used to see him a lot when I was an entrepreneur in residence at ATDC, the Georgia Tech incubator. They've come a long way, but they're still solving a tough problem. They have bot test execution, and what they're able to do is compare application releases. So from release to release on a given device type, the bot they started with can run across different devices and identify differences between releases.

Frank Moyer:               
They started with artificial intelligence. That's where they've been focused for the past four years, and they have backed away from using that terminology in their marketing collateral. Now, this is a graph I put together that's a little bit complicated to understand, but the horizontal axis shows the timeline, the vertical axis shows the maturity of the solution, and the thickness of the line represents the amount of artificial intelligence the company is deploying to provide a better solution. So when you look at the web side: Katalon has been around for four years. They didn't focus on machine learning and artificial intelligence early on; they recently added it, and because they have a very mature product, they can take that foundation and add artificial intelligence capabilities to it.

Frank Moyer:               
The next group, like Testim, has been around for a long time. They've been using artificial intelligence for years, and it's a very mature product. When you look at Functionize and Mabl, they came in with the goal of using machine learning, using bots, to complement testers, and they've been able to achieve a lot over a few years in their space. They were able to leverage some of what Testim had done to accelerate their delivery. But if you look at the mobile side, it's a different story. The sophistication, or the robustness, of those applications is not the same; in some cases, I think test.ai has hit challenges doing mobile because of its complexities. So there isn't the same level of maturity on the mobile side as there is on the web side.

Drew Williams:            
I know we talked pretty generally about the players in the space right now, but who do you see as the actual leaders when we're talking about innovation and the future of AI in testing?

Frank Moyer:               
Yeah, I think this graph is a good one for seeing who's leading and where the hockey puck is going. I have a lot of respect for Applitools and the work they've done on image comparison. I think that's going to become more of a commodity going forward, because the machine learning models for what they're doing are becoming more commonplace. So for example, Perfecto may be using Applitools today, but given some of the recent innovations, it wouldn't be hard for them to use a boilerplate TensorFlow model to do a lot of what Applitools is doing. And then I think Testim has been around from the early days, and I don't think they're given as much credit as they deserve for all the innovation they've pushed forward over the past six years.

Drew Williams:            
What does artificial intelligence mean for the role of the tester, in terms of day-to-day function and overall job outlook?

Frank Moyer:               
Yeah. So I don't have any slides for this; I'll just talk from what I've seen. I think there were quite a few players who came into this space saying, and even on a podcast I listened to a couple of years ago, the phrase that was used was, "We're looking to eliminate the tester." I think it was either the CEO or founder of either Mabl or Testim who said that. And I think they've changed their tune on that, because they realized that's not possible, nor should it be a goal. The role of the tester will increasingly be that of a subject matter expert who knows how to test an application, and who can use these machine learning tools and techniques to increase their capacity.

Frank Moyer:               
So testing often bears the brunt of delays in the development life cycle, right? When you've got a release that's going out and development is a week late, that doesn't mean the release goes out a week late. It means the testers have to work twice as hard. So how can we help the testers do their job more efficiently? That's how we should see the role of the tester: partnering with the capabilities of machine learning to do more in less time.

Drew Williams:            
Okay. Looks like we have a question here. So what do you think about Eggplant Software as a testing product/platform?

Frank Moyer:               
Yeah, so Eggplant has a lot of functionality. It does a lot of different things. On the one side you have companies like Applitools who focus on one thing and do it really, really well, right? They're the best at comparing two different screens or images, knowing what the differences are, and creating the user experience around that, and I commend that focus. Eggplant has a lot of different functionality. So if you're looking for a Swiss Army knife solution, one that doesn't necessarily do any single thing the best, but you want a single solution that does everything, Eggplant rises to the top. It's a very robust solution.

Drew Williams:            
Awesome. Okay. What would you say, in your opinion, is the most important component of making AI and testing successful?

Frank Moyer:               
I think it's a combination of things. I think it's choosing the right platform; I don't suggest building it yourself. A lot of smart minds have gone into building what's out there today, and there are robust solutions. The second thing, and I think this is probably the most overlooked part of machine learning, is the user experience for the tester. Too many times, companies on this list go into it thinking, "We're going to use machine learning and it's going to be the panacea." It's not; it is not the end-all solution. You have to expect that the tester will play a role. If you're looking at products, look for ones whose user experience partners with the tester, where the tester has a role in training the machine learning, and the machine learning gets smarter based on the tester's annotations. I just think that's overlooked so often: the user experience for the tester and how that weaves in with the machine learning capability.

Drew Williams:            
So do you think AI can be used in each and every field as far as testing and QA application goes?

Frank Moyer:               
I think we're just at the tip of the iceberg right now when it comes to machine learning, and especially machine learning in QA. There are so many capabilities emerging, especially from the intelligence community, that have been around for years and that we're now just catching wind of, learning what they are and how we can use them. You know, when Google open-sourced TensorFlow, they open-sourced a set of machine learning models with it. That's going to happen more and more, and in the QA space, yes, I think it can be used in every field.

Drew Williams:            
In your experience, is AI for mobile drastically different than AI for web? And if so, what are some of those nuances?

Frank Moyer:               
Yeah, that's a very helpful question, Drew. I'm going to jump from this chart, which shows the big gap between web and mobile, into why mobile is challenging: why haven't we seen the level of innovation in mobile that we've seen on web? I think the biggest challenge is that, unlike web, the device manufacturers render XML differently, down to the element names. There is a W3C specification for HTML and for rendering HTML in the browser. All of the browsers do their best, some better than others, to comply with that specification. So as a tester, you know that when an application is built, how it's rendered in Firefox looks almost exactly like how it's rendered in Chrome, with some slight differences. On mobile, that is completely different.

Frank Moyer:               
Each manufacturer, each version, is different. There is no specification for how the XML is rendered. So as a tester I now have to have, possibly, a different selector for each device type. That has been a substantial impediment to some of the record-and-playback tools: you can record and play back on the same device type, but crossing multiple device types is a real problem. We've been attacking that problem at Kobiton. We have a product in beta where you can record on one device type and play it back on any other device type.
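Kobiton hasn't published the details of its approach, but one way to think about cross-device playback, purely as a sketch, is to match elements on the target device by stable attributes rather than replaying a device-specific XPath:

```python
# Rough sketch of cross-device element matching (not Kobiton's actual
# method): score candidate elements on the target device against the
# attributes captured at record time, instead of replaying a raw XPath.
def score(candidate, recorded):
    s = 0
    if candidate.get("text") == recorded.get("text"):
        s += 3  # visible text is usually the most stable signal
    if candidate.get("content-desc") == recorded.get("content-desc"):
        s += 3  # accessibility id tends to survive re-renders
    if candidate.get("class", "").split(".")[-1] == \
       recorded.get("class", "").split(".")[-1]:
        s += 1  # fully-qualified class names differ per manufacturer
    return s

recorded = {"text": "Log in", "content-desc": "login",
            "class": "android.widget.Button"}
candidates = [
    {"text": "Sign up", "content-desc": "signup", "class": "android.widget.Button"},
    {"text": "Log in", "content-desc": "login", "class": "com.samsung.ui.Button"},
]
best = max(candidates, key=lambda c: score(c, recorded))
print("Matched:", best)  # the Samsung-rendered button, despite the class change
```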

Frank Moyer:               
Mobile devices have less CPU and RAM, and you can't virtualize mobile devices like you can the web. Just capturing the XML on mobile can take 10 seconds. In the web world, when a page is rendered, it's there as you see it, so you can go in and inspect the DOM straight away. With mobile, you can't afford to wait 10 seconds while you're testing, and 10 seconds can sometimes be generous; I've seen it take up to 30 seconds. Mobile devices can also move location, and there's a lot of fragmentation across manufacturers. And then crashes: in mobile, crashes are a big deal. They can really damage the user experience, while crashes in browsers are very uncommon.

Drew Williams:            
So what is a frequently used algorithm for designing machine learning models for self-healing?

Frank Moyer:               
Yeah. Well, if you were to ask me what I would use, I would use a multimodal deep neural network: the ability to look at different components of the application, the visual component and the XML. Testim has done a lot with learning the XML or HTML structure to know, when something changes, what the most likely cause of the change is. So I'd use a multimodal machine learning model that at its foundation has the visual elements as well as the textual elements of the DOM.
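A minimal Keras sketch of what "multimodal" means here, with illustrative shapes rather than a production architecture: one branch processes the screenshot, another embeds tokenized DOM/XML text, and the fused vector drives a prediction such as "same element" versus "changed":

```python
import tensorflow as tf

# Minimal multimodal sketch: a convolutional branch for the screenshot
# plus an embedding branch for tokenized DOM/XML text, fused into one
# prediction. Shapes and vocab size are illustrative assumptions.
image_in = tf.keras.Input(shape=(224, 224, 3), name="screenshot")
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(image_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

text_in = tf.keras.Input(shape=(128,), dtype="int32", name="dom_tokens")
t = tf.keras.layers.Embedding(input_dim=5000, output_dim=64)(text_in)
t = tf.keras.layers.GlobalAveragePooling1D()(t)

fused = tf.keras.layers.Concatenate()([x, t])          # join both modalities
out = tf.keras.layers.Dense(1, activation="sigmoid", name="same_element")(fused)

model = tf.keras.Model(inputs=[image_in, text_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```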

Drew Williams:            
So given all this talk about AI and machine learning, it seems that Kobiton has done a pretty good job emerging as a leader in AI and mobile testing. Can you talk a bit about what the company has done over the past three years to get to that place?

Frank Moyer:               
Yeah. I think part of it is the graph that I showed earlier. Not many companies have tried; a few companies have tried and failed. We have the fortunate position of having real devices that we know, and some of the team that has built the product knows mobile better than many engineers in the world. They knew some of the challenges in doing what we set out to do early on, and they said, "The number one problem that we're going to face is that the manufacturers render the XML differently. If we can't solve that problem, then we've got to find another problem, because that's a really hard problem." We spent three months just figuring out whether or not it was possible. And then we took a little bit of a leap of faith and said, "All right, we think it's possible," but we hit some surprises.

Frank Moyer:               
It wasn't until we were further along, probably eight months in, that we realized, "Wow, we missed some things." Now, none of the things we missed were catastrophic; we were able to recover from them. But the reason we are where we are today, and how we were able to launch our product on January 1st with the ability to record on one device and play back on any other device type, is a testament to the team, their knowledge of mobile, and the ability to effectively combine machine learning with a great user experience.

Drew Williams:            
Amazing. So you seem to be pretty adamant about splitting web versus mobile testing, and we touched on this, but can you revisit whether there's enough of a difference to actually warrant separating them?

Frank Moyer:               
Yeah. I want to go to another slide, because there's one important point that I haven't gone through yet, and that is complexity. I've talked about the environment: there are 20 times more mobile environments than web. And I talked about the functionality, and that's multiplicative, right? If you've got 20 different device types that you want to test on and you've got 50 scripts, you're multiplying those 50 scripts by 20 to really cover all those device types. But the other part of this, for those in the web world who are considering applying their approach to mobile: it's a different world, in that all of those mobile devices have features, features that can be used in machine learning.

Frank Moyer:               
Features like the aspect ratio, the screen resolution, the manufacturer. All of those are signals that you don't have in the web world. So every feature that I find for mobile is an advantage that mobile has over web, and that's why I don't worry about the web players coming into the mobile world; they'd be trying to fit a square peg into a round hole.

Drew Williams:            
Frank, I just want to say thank you for sharing all this wonderful insight on AI and machine learning and how we should be thinking about incorporating it into testing. With that said, I want to thank everyone for attending, and I hope to see you on the next one. Thanks everyone. Have a great day.

Frank Moyer:               
Thank you, Drew.

Drew Williams:            
Sure. Thanks.
