30 Jan Achieving Continuous Mobile Testing
Success in today’s mobile sphere demands fast, accurate testing. Continuous testing delivers speed and quality, but the challenges that come with continuous testing for mobile are different from what testers encounter with web applications.
This data-driven presentation will include insights such as the difference between mobile and web testing, how to optimize for continuous testing, and best practices for advanced test automation strategies.
Transcript available below:
Beth Romanek: Hello everyone. Thank you for joining us today for the web seminar, Achieving Continuous Mobile Testing, sponsored by Kobiton. We are going to give people just a couple of minutes to join so we'll be getting started in about two minutes. Thank you.
Beth Romanek: Hello everybody and welcome to today's web seminar, Achieving Continuous Mobile Testing, sponsored by Kobiton, and featuring speakers, Frank Moyer, Kobiton's CTO and Nilesh Patel, Director of Testing Services at KMS Technology. I'm your host from Techwell, Beth Romanek. Thanks for joining us today.
Beth Romanek: Before I hand it over to our speakers for the presentation, let me explain this console you see in front of you. All three panels on your screen can be moved around and resized. At the bottom of your console, you'll also see a widget toolbar. By clicking the handouts widget in the middle, the one that looks like a piece of paper, you can download a PDF of the speaker's slides today. If you have any technical issues, please let us know by typing in the questions and answers panel on the left. This is, of course, also how you can submit questions. We will have a Q and A session at the end, so feel free to ask questions throughout the presentation.
Beth Romanek: And finally, for your best viewing experience, please close any unnecessary applications you have running in the background. We are recording this web seminar and will make it available to watch on demand after the event. Once the recording is ready, you'll get an email with instructions about how you can watch the presentation. And with that, I'd like to hand it over to our speakers, Frank Moyer and Nilesh Patel.
Frank Moyer: Thank you very much Beth and thank you Techwell for hosting this event. I am Frank Moyer. I'd like to say hello and I'm with Nilesh.
Nilesh Patel: How are you guys doing?
Frank Moyer: So just briefly to go into the agenda for today, we're going to talk a lot about continuous testing and then specifically, we're going to hone in on what it means for mobile. I want to first start out and just get a level set on what continuous testing is. We're not going to try to cover all of what continuous testing is. We're really going to focus on the meat of what continuous testing means and what it means to mobile.
Frank Moyer: So moving on to the semantics, people get ... Probably the biggest confusion over the past few years is what exactly continuous testing is and how it can be applied within an organization. The most popular diagram around continuous testing was created by Dan Ashby back in 2016. And Nilesh, this talks a lot about the overall development flow and then where we do testing throughout.
Nilesh Patel: Right, and the main point here is testing is done everywhere throughout the whole entire life cycle. So that's the key part of continuous testing is you're testing continuously across the whole entire software development life cycle.
Frank Moyer: Great. I was at an incubator program for about six months and we were just doing customer discovery interviews and we were testing and trying to break the ideas we had. And so in a certain way, that's continuous testing at the very early part of-
Nilesh Patel: Exactly. Completely agree with you right there.
Frank Moyer: And then all the way into production, when you're monitoring, making sure that your monitoring works and you have ways to feed back, because there's that feedback loop from monitor to plan, to make sure you're capturing the information to make the overall system better.
Nilesh Patel: Right, and that's the key part of the whole DevOps life cycle is to get faster feedback back to your clients, back to your developers and know how to improve going forward.
Frank Moyer: Great. And then the other part that Dan [Ashby] focuses on is minimizing the business risks. That's our goal. The overall tester's goal is to minimize business risk, and applying continuous testing helps with that. So what continuous testing is not: explicitly, they say it's not just the testing component of continuous integration. It's broader than that. It involves all the parts we just walked you through. It's not just automation testing; it's broader and includes manual testing, unit testing, every part of testing in the development life cycle. And it doesn't focus on a specific platform. It looks at all platforms.
Frank Moyer: However, having said that, we've just talked about the broad definition of continuous testing, but to make this practical, we are going to focus in on the testing component, we're going to focus a lot on automation testing, and we're going to talk a lot about mobile. Instead of talking in the abstract, we're going to be very specific.
Nilesh Patel: The key thing I want to add here is that automation testing is part of continuous testing. It's probably the most important part of continuous testing. I don't think you can say you're practicing continuous testing without the automation piece. So just wanted to add that in there, Frank.
Frank Moyer: Great. Thank you. So I'd like to move on to why it's important and I'm going to let ... Nilesh is going to talk a little bit here about the importance of continuous testing.
Nilesh Patel: Yeah. This is from the State of DevOps report, which looks at the delivery performance of different types of organizations performing at an elite level, high level, medium level and low level. If you look at it, you see the change failure rates are very low when you're deploying more frequently. So if you look at the elite performers, you're seeing a 0-15% failure rate when they're deploying once a day or more. But the key here is the way to get faster feedback is to release quicker, and that is the key thing of continuous testing: the feedback loop is much tighter, so you're able to get your feedback a lot quicker. And with all the global competition out there right now, it's key to get your software out quicker so you can please your customer base and so forth.
Nilesh Patel: So that's kind of the reason why you see this here with the failure rates ... You're operating at a higher level when you're deploying frequently, you're testing continuously, and that's how they rate you as an elite organization.
Frank Moyer: And part of this, I would expect also, is the amount of risk that gets introduced for every change.
Nilesh Patel: Correct. Yeah, exactly right. The more changes, the more risk; you want to make sure you minimize that risk, and that's where the testing comes in, to minimize those risks.
Frank Moyer: And back to one of the things Nilesh mentioned was the frequency of updates. In this day and age, when you're delivering mobile applications, your customers are ... There's a 60% greater likelihood somebody will download your mobile app if it's been updated in the past two weeks. So an app that goes on for months without being updated has a lower likelihood to be [downloaded].
Nilesh Patel: Right, and that gets back to the global competition. You've got to release fast in order to get your product out quicker and get a bigger approval rating in the store, better feedback in the customer reviews and so forth.
Frank Moyer: Great. Now there was a survey done a couple of years ago asking: what are the main holdups in a software production process? And the perception is that 63% of the holdup is related to test or QA. Whether or not that perception is reality, it is the perception, and we'll talk some more, back to the previous point about being able to deliver and deploy multiple times a day, about how testing is often considered the holdup, or at least the perceived holdup and bottleneck.
Nilesh Patel: We see that a lot in our clients and our customer base. When we go into these automation or testing engagements, they come back with the main pain point, saying, "QA is our bottleneck. We can't release fast enough because testing hasn't had enough time. They haven't tested everything thoroughly, so we can't release as fast as we'd like to."
Frank Moyer: When they say that, do they have in mind something specific that they were trying to get to?
Nilesh Patel: Maybe they want to go to a continuous deployment, continuous delivery, but they're not ready. They don't have the skill set there, they don't have a tool set up in order to actually practice that whole delivering faster, releasing faster. They need to have the whole infrastructure, the framework in place in order to do that. They just don't have that yet.
Frank Moyer: Okay, great. Thank you. Also, as part of that Accelerate survey, it was apparent in reading through it that there's a focus not just on understanding that there is a relationship between CI [Continuous Integration] and automation testing, but that there is a high correlation between successful CI and automation testing. I think that's what Nilesh said earlier: it's imperative to have automation in place in order to achieve CI.
Nilesh Patel: Right, and [inaudible 00:11:03] the pipeline. That's the key. People may have automation in place, but it might not be automatically triggered. It might sit until the end and someone physically pushes the button to execute a test. That's not what we need. We need to plug into your CI/CD pipeline in order to actually get the true benefits from it.
Frank Moyer: Great. In addition to the item we just went through, which was accelerating continuous integration, continuous testing also addresses risk and brand damage. I think it's been all over the tech news today that Twitter released an Android version that crashed as soon as the user tried to launch Twitter. So they've been having to do a lot of damage control around their brand, they worked through the night last night trying to fix it. They fixed it and released this morning, and that's had an impact on their brand. Big business, large scale software failures like that don't happen often, but when they do happen, on average they have an impact of 4.06% on the stock price. I checked Twitter's stock price today and it's definitely down 2% below the overall stock market.
Frank Moyer: In addition, in this day and age, it's not just the large scale software failures; because of social media and the ability to rapidly disseminate information, a minor incident can also have a huge impact on the brand. The app store reviews are public, and people comb those app store reviews. We were working with a client the other day that had a 4.9 star rating on the app store, but one of the devices crashed their app and a few users started putting up one-star reviews, and that had not just a negative impact on the star rating, but people were also looking for those negative reviews and seeing those crashes.
Nilesh Patel: Right, and that's a valid point. I know when I go look at reviews, I'm always looking at the negative ratings first to see, "Hey, are these problems that I'm going to face or get annoyed with or not like?" And that makes me more or less likely to download the app based on the reviews. The other thing I wanted to add about that Twitter incident is that that's an easy test to automate. They would have known if they had it in a pipeline; I don't know if they did or did not. But if they did, it's a simple launch of the application. If it fails, the test would have failed and they would have been able to roll back to make sure that never got into production.
Nilesh Patel: So that's how the pipeline and automation come into play. It could have helped prevent this if they had it plugged into their pipeline and were able to do the checks they needed to do.
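The pipeline gate Nilesh describes can be sketched in a few lines of Python. This is a toy illustration, not the actual Twitter or Kobiton setup: all function names are hypothetical stand-ins for a real CI step (for example, a Jenkins stage running device smoke tests), and the "launch test" is simulated.

```python
# Toy sketch of a CI pipeline gate: a build is only promoted to
# production when every smoke test passes; a crash at launch fails
# the gate and triggers a rollback instead.

def run_smoke_tests(tests):
    """Run each smoke-test callable and collect pass/fail results."""
    results = {}
    for name, test in tests.items():
        try:
            test()
            results[name] = True
        except Exception:
            results[name] = False
    return results

def should_promote(results):
    """Promote the build only if every smoke test passed."""
    return all(results.values())

def app_launches():
    # Stand-in for "launch the app and assert the home screen appears".
    pass  # a crash at launch would raise an exception here

if __name__ == "__main__":
    results = run_smoke_tests({"app_launches": app_launches})
    print("promote" if should_promote(results) else "roll back")
```

Wired into a CI/CD pipeline so it runs on every push, a gate like this would have caught a launch crash before the release ever reached the store.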
Frank Moyer: Great. The final point here is that continuous testing has a broad impact across the organization; decisions need to be made, as part of continuous testing, about what the highest-priority items to test are to reduce the business risk. So just in summary: continuous testing is not just a component of continuous integration, not just automation testing, and not just a specific platform. We are continually, as testers, trying to reduce the time and reduce the business risk associated with change.
Nilesh Patel: Yep. 100% agree with you there. There's no doubt about it. The point of testing is to reduce that risk. We can't assure quality. We're not going to assure anything is working 100% perfectly, but we can reduce the risk of something breaking in production, of bugs getting in there and so forth.
Frank Moyer: Great.
Nilesh Patel: Yeah.
Frank Moyer: The next item I'd like to go through, and this really tees things up as we start to hone in more on mobile, is the impediments to continuous testing. What causes, at least at the perception level, the test part to be the whole [inaudible 00:15:16]? Right now, how do we as testers achieve the same "continuous" attribute that's been granted to build, release and deploy? How do we achieve that?
Frank Moyer: Related to that, it's where the market wants to go. In a comparison done last year between the current frequency of deployment and the desired frequency of deployment, there's a big gap. There are a lot more people who want to get to hourly and daily deployments than actually do today. And going back to the earlier slide, a lot of that is again attributed to the time it takes to get a level of quality assurance to do this.
Nilesh Patel: Right. I agree. And again, it all goes down to the whole global competition. There's competition in everyone's industry and again, to get that software out faster, getting the feedback from the customers quicker is going to make you a step above your competitors.
Frank Moyer: Before we move on Nilesh, from your experience, have you seen resistance from test organizations who are accustomed to spending two weeks to release?
Nilesh Patel: Yeah, that's a great question and we do see that quite often. A lot of it's due to the skill set of the teams. Like we said before, the skill sets are not there in terms of automation and so forth. They can't script as fast as they'd like, even if they want to. They don't have the tools in place. They can't test all the devices or all the different browsers because they don't have anything set up for them to do that, and they don't have pipelines set up. So again, it comes back to what I was talking about before: they don't have the infrastructure and framework to achieve continuous testing. That's been the biggest one. They might have the skills and the people to do it, but they don't have the tooling around it to actually support that. So you see that in a lot of our engagements.
Frank Moyer: Great. So that segues well into what you were just talking about.
Nilesh Patel: Exactly, right. So there are a lot of things here external to testing. These are split between what you can control within the testing team and what's outside the testing team. A lot of testers will hear some of this and understand it, like late delivery of the application. That happens all the time, and that's a pain point. Unexpected changes to the application: testers see that all the time. "I didn't expect that to happen. This wasn't supposed to be there. This wasn't part of the user story," and so forth. These are all things that hinder their ability to achieve continuous testing.
Nilesh Patel: There's environment instability. Environments are never set up right. If you don't have labs set up properly, or if they're in-house, you're going to see a lot of environment failures, just in terms of having browsers set up or mobile devices being available to test. You might not have all that set up, [inaudible 00:18:01] just a bunch of defects. You see a lot of defects already in the application, and that's going to hold you up from delivering. For instance, for Twitter, if it's failing at launch, you really can't test anything besides that. You've got to get past that first.
Nilesh Patel: And then there are the different variables in test environments. Again, there are a bunch of different browsers; not as many variations as with mobile, which has much bigger variation, but browsers still have a good amount of variation as well. There are different releases that have to go out, different versions, different internet browsers and all that stuff. So you've got to make sure you test all that. And obviously, internal to testing, you see a lack of resources. That's probably the biggest thing: most testing teams are not as big as the development teams. They're usually smaller, so they don't have the resources to do that stuff.
Nilesh Patel: Lack of skills to automate. A lot of people don't have the coding skills to automate, so there are new tools that do low-code or codeless scripting, and that helps a little bit. You've got to figure out what the skill set of your teams is. Lack of tooling, which I've mentioned before: there aren't enough tools that support what they need to support. And then, again, the lack of testing environments. That's something testers have to have. Based on our clients, I hear it all the time: "Well, we only have one iOS device and only one Android, so we've got to test everything and get enough coverage based on just those two devices," which is scary because there are millions and millions of devices out there that probably need to be tested as well. Just like Frank said before, one Android user's crash dropped the rating down. That could cause a big issue in production and with the customers.
Frank Moyer: Great.
Nilesh Patel: Yep.
Frank Moyer: Great. Thank you. So I'm going to go through a few of the different approaches we've seen at Kobiton for achieving continuous testing, with Nilesh adding some color here. I'll go through three. I think I mentioned on the last [slide 00:19:47] four; it's three. One of our customers is a company with over a $10 billion market cap, so they have a lot of capital, and mobile is a critical part of their business. They have 3000 devices.
Nilesh Patel: Well that's a lot of devices.
Frank Moyer: 3000 devices in-house and 60 testers around the clock executing manual tests. So I know Nilesh you said that automation was an imperative to continuous testing.
Nilesh Patel: Correct.
Frank Moyer: I think this is an exception here, right?
Nilesh Patel: Yeah. It's an exception, sure. You can't get away without doing some manual testing. Automation testing can only test so much. We'll talk more about stable tests later; the easy, lightweight tests are probably the higher candidates for automation, but you still have to test a lot of things manually, like the business-critical workflows. Workflows are going to be tested better by users than by machines. Machines and automation can only test exactly what you tell them to do, but when a person goes around, clicks here, goes here, goes here, they can find more issues. You can find more critical issues than you find by just checking. It's testing versus checking, that whole James Bach theory. But yeah, this approach would need manual testers for sure. I see that here.
Frank Moyer: Yeah, and they have no automation. It's all manual. And just around the clock constantly, as they do new releases, they're hitting those 3000 devices using Kobiton with 60 testers worldwide.
Nilesh Patel: And they have a lot of testers. There's a lot of testers for it.
Frank Moyer: Yeah. It's around the clock-
Nilesh Patel: [crosstalk 00:21:25].
Frank Moyer: There are three shifts, seven days a week, with 60 testers, I think between 60 and 100. So they do a whole lot of tests to make sure their app is not crashing like Twitter.
Nilesh Patel: Makes sense, makes complete sense.
Frank Moyer: It's not Twitter, I guarantee you. I promise it's not Twitter. Although Twitter, if you're listening, we have (inaudible 00:21:45). So going on to the next one. One of our other customers, in the insurance industry, takes a different tack. They actually combine the development process and their automation process together. The automation engineers sit alongside the developers in the scrum team, and they build the automation at the same time the developers are building the app. Actually, it's a web app. So when they deploy, the developers and the automation engineers check everything in together to Git, and then it builds automatically and deploys automatically.
Nilesh Patel: Yeah. I think this is a good example of a really well-working continuous testing model, just because it's integrated teams. You saw that diagram in the background of the slide: testing was involved in every single aspect, from planning to deployment to monitoring, everything. Across the whole entire circle, testing was involved. So you have to be integrated into the whole entire life cycle, and integrated with the team, to truly achieve that. I think this is a very good example of how continuous testing can work.
Frank Moyer: And what are the challenges with this approach? Why wouldn't every organization do this? In the next slide we go through some of it; teams are just physically dispersed.
Nilesh Patel: Some of the challenges, you might say, are that it's easier when everyone's located in the same office. There's definitely a challenge when testers and developers are not in the same time zone, which happens a lot. But to overcome those challenges, we need tools. Some kind of collaboration tool, like Slack, could help in a situation like that. Those are the main challenges I see: people not being co-located. That makes it very hard to achieve this, but you have to have tools to balance that out and mitigate that risk.
Frank Moyer: Yeah, makes sense. Makes sense. Probably the most popular approach to automation, from our customers at least, is the wait-to-automate approach. With this customer, they were physically separated: the developers are in the US, the automation engineers are in India. When a new release is pushed to test, the first thing the automation engineers do is run all their tests, see what breaks, and triage every single test script that failed: "Okay, why did it fail? Did it fail because of a bug, or did it fail because the script needs to be updated?" For this customer, this takes at best a week and at worst-
Nilesh Patel: Longer than that for sure.
Frank Moyer: Yeah, like a month.
Nilesh Patel: Yeah. This is what, in DevOps terms, they would classify as a low-performing organization, because now you're really waiting for them to finish everything before you can release. That could be detrimental, in my opinion, to getting your releases out there, getting that faster feedback, and staying ahead of the competition, because now you have to wait until everything gets done before you can actually push forward. It depends on the industry as well. Some industries are okay with not having releases out every week, but if you're in the eCommerce industry or the mobile industry, it's got to get out quick, quick, fast, fast. This would probably deter you, but again, it all depends on the industry or the [inaudible 00:25:17]. So it seems like here, it might be okay. It might be an approach that they could go with.
Frank Moyer: Okay. Before I go on to the next slide, there's another approach, which I didn't include because it doesn't have a testing step, but GitHub does this. Their developers can deploy and it goes directly to production. Every time they do a push, it gets automatically built and runs through some minimal tests. But they monitor the production deployment. They do a canary deployment, and they have real-time monitoring to detect whether the canary deployment-
Nilesh Patel: Responded well, yeah.
Frank Moyer: Yeah, has any anomalies against the previous release, and if it does, they back out the release.
Nilesh Patel: Yeah, that's a good approach too. I know a lot of companies do canary releases, and that helps: like you said before, you can see if there's a problem with [inaudible 00:26:13] or not before you flip the switch to get everyone over to the main release.
Frank Moyer: For those who are not familiar with a canary release, the term comes from the canary in the coal mine, where you'd put a canary in the coal mine to see if it died before you sent down your workers. And you know, Kubernetes, which is a common deployment stack right now, has out-of-the-box canary deployments. So you can very easily say, "I want 5% of my workload to go towards the canary release."
Nilesh Patel: Yup, like the whole A/B type testing that you can [inaudible 00:26:51]. That test is going to be your canary. We do A/B testing.
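The canary logic Frank describes can be sketched in plain Python. This is a toy illustration, not GitHub's or Kubernetes' actual implementation: the 5% traffic split and the 2x anomaly threshold are made-up numbers for the example.

```python
# Toy sketch of canary monitoring: send a small slice of traffic to the
# new release and back it out if its error rate looks anomalous
# relative to the stable release.

def route_request(request_id, canary_fraction=0.05):
    """Deterministically send ~5% of requests to the canary release."""
    return "canary" if request_id % 100 < canary_fraction * 100 else "stable"

def canary_is_anomalous(stable_errors, stable_total,
                        canary_errors, canary_total,
                        tolerance=2.0):
    """Flag the canary if its error rate exceeds tolerance x baseline."""
    if canary_total == 0:
        return False  # no canary traffic observed yet
    baseline = stable_errors / stable_total
    canary_rate = canary_errors / canary_total
    return canary_rate > tolerance * max(baseline, 0.001)

# Healthy canary: error rate comparable to baseline -> keep rolling out.
print(canary_is_anomalous(50, 10000, 3, 500))    # False
# Broken canary (e.g. crash at launch) -> back out the release.
print(canary_is_anomalous(50, 10000, 400, 500))  # True
```

In a real stack the routing would be done by the load balancer or a Kubernetes traffic split, and the error rates would come from production monitoring, but the decision logic is essentially this comparison.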
Frank Moyer: Yeah. So when I look at those four that we discussed, the testers are doing everything they can given the resources to reduce that duration, to make that duration shorter in order to get product out faster. But ultimately, there is a level of prioritization that needs to be done, some constraints that need to be put in place because there are limited resources.
Nilesh Patel: Yes, exactly. That's why you have to pick the right tests to automate, the right suites to automate. Obviously you can get a release out faster if you pick tests that run really quickly and aren't going to fail spuriously. So prioritize tests that run fast and are very stable. You don't want to run tests that are brittle, because that's going to give you false positives, and if you can't release because of a falsely failed test, you're not releasing at all. So you have to pick and choose the right test suites that you want to run as part of your releases. You've got to prioritize based on a bunch of different practices, but you want to prioritize those test suites for each of your different releases.
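The selection Nilesh describes, preferring stable, high-priority, fast tests and dropping brittle ones, can be sketched as a simple scoring pass. The scoring scheme, thresholds, and test names below are illustrative, not a prescription from the speakers.

```python
# Toy sketch of test-suite selection for a release pipeline: filter
# out brittle tests (flaky failures block good releases), then take
# high-priority, fast tests until the time budget runs out.

def select_suite(tests, time_budget_s, min_stability=0.95):
    """Pick tests in priority order within a wall-clock budget."""
    # Drop brittle tests first: a flaky failure is a false positive.
    stable = [t for t in tests if t["stability"] >= min_stability]
    # Highest priority first; among equals, fastest first.
    stable.sort(key=lambda t: (-t["priority"], t["runtime_s"]))
    chosen, used = [], 0
    for t in stable:
        if used + t["runtime_s"] <= time_budget_s:
            chosen.append(t["name"])
            used += t["runtime_s"]
    return chosen

tests = [
    {"name": "login",     "priority": 3, "runtime_s": 30, "stability": 0.99},
    {"name": "checkout",  "priority": 3, "runtime_s": 90, "stability": 0.98},
    {"name": "search",    "priority": 2, "runtime_s": 60, "stability": 0.97},
    {"name": "animation", "priority": 1, "runtime_s": 45, "stability": 0.70},  # brittle
]
print(select_suite(tests, time_budget_s=150))  # → ['login', 'checkout']
```

The brittle `animation` test never makes the cut, and `search` only runs when the release window allows it, which is the trade-off Nilesh is pointing at.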
Frank Moyer: Great. So Nilesh, can you talk a little bit about the tools?
Nilesh Patel: Yeah. Absolutely. There's a bunch of tools that we've used. A lot of these are automation tools, and some are platforms for device management and so forth. But there's a bunch of tools that can help you along your continuous testing journey, I would say. Tools, again, are needed. You cannot do this without tools. It won't work.
Nilesh Patel: So one of the tools that helps with this is Mabl. It's a browser-testing-only tool. It uses AI. It's geared toward people who don't have strong programming or automation testing skills. It allows you to do record and playback in a little bit more intuitive way, and allows the AI engine behind it to go ahead and make sure those tests are [inaudible 00:28:42].
Nilesh Patel: It's really UI testing only, so if that's a big component of your application, if there are a lot of UI tests, Mabl might be a good option for you. It involves very little or no coding at all, so it kind of solves the skill-level and resource challenge your organization might have.
Frank Moyer: So the tester in that scenario, they go in, I guess it looks like they have a browser plugin that-
Nilesh Patel: Exactly. It's a browser plugin, I believe a Chrome plugin, and then you just go about your testing like you would normally test your application. You go around and do all your workflows, and then it'll automatically kick in. It plugs into your CI/CD, like Jenkins; I think it has some other integrations too, and it plugs those tests into the pipeline, so whenever a build kicks off, you can kick off those tests to see if it's a good build or a bad build and then go from there.
Frank Moyer: Neat. Very neat.
Nilesh Patel: Yeah. Functionize is a new tool. I was just introduced to it recently, but again, it's a browser-testing-only tool, and it does something pretty cool. It's basically no code, because you're using speech to test the product. So I can talk to it and say, "Hey, log into this application," and it'll log into it. It's more Gherkin-like; obviously, Gherkin is the BDD format where user stories are written in plain English. Again, it solves the problem of testing in a team that doesn't have the coding skills. The biggest thing we hear from my clients is, "Oh, my testers don't have the coding skills to achieve automation." Well, now that's not a problem anymore. That's how these kinds of tools help: they get you over that barrier and that challenge you might be facing.
Frank Moyer: Sounds like Functionize is not just for the testers who don't have the coding skills, but also the typing skills [crosstalk 00:30:18].
Nilesh Patel: Yeah, exactly. It makes it a lot quicker. Again, it helps with the speeding up. So it's a pretty cool tool. It's fairly new-
Frank Moyer: [crosstalk 00:30:23]-
Nilesh Patel: It's fairly new, but it's interesting. So-
Frank Moyer: I'd like to see how it works with you and how fast you speak.
Nilesh Patel: Yeah, that'd be a good test. If it can recognize what I'm doing as fast as I'm speaking, then it'll pass. It'll be pretty successful, I think.
Frank Moyer: Excellent.
Nilesh Patel: And the other one is Katalon. Katalon is a web and mobile automation tool. It caters to both technical and nontechnical testers. So you have this one tool set; it's really a framework, more than a tool. It's a complete end-to-end automation solution where you can go from test creation all the way out to device management as well, because it has a bunch of different plugins, with Kobiton as well and some other [crosstalk 00:31:01]-
Frank Moyer: [crosstalk 00:31:02]-
Nilesh Patel: As well. It also has record and playback, so it allows people who, again, are not as technical to record (inaudible 00:00:31:09). It's a pretty powerful tool built on Appium and Selenium (inaudible 00:31:17).
Frank Moyer: We've covered what continuous testing is and the different impediments to continuous testing; now let's talk about how mobile is different from web. One of the biggest differences is all the disparate devices you have to work with. I know in the web world, you have to deal with a few browsers.
Nilesh Patel: Yeah. Not as many devices, that's true. (crosstalk 00:31:59).
Frank Moyer: But this is a graph using data from [Mover 00:32:04], who looks at the usage of device types across the industry globally: the number of devices you would need to test on to get different levels of coverage. So for example, to get to 75% coverage on just one operating system, you're talking over 25 devices. If you want to get north of 90%, you're looking at over 300 devices to test on.
Nilesh Patel: And I will tell you, across all the engagements that we have, most people don't even have five devices, to be honest. Devices are pretty expensive. They cost a lot of money, so most companies don't invest in that. They invest more in the simulators and so forth, but obviously simulators aren't a good way to test. But you'd very rarely see a lot of [inaudible 00:32:54] with upwards of 20, 30, 40 devices. Device labs are very, very much on the smaller side of things just because of the cost.
Frank Moyer: And on the Kobiton side, we see this often, where our customers will come to us with a critical issue that one of their users is experiencing. If we don't have a device already, they will ask us to go procure the device they need for testing. We turn that around very rapidly so they can reproduce the problem on Kobiton, and there is no need to buy the 300 devices.
Nilesh Patel: Sure, exactly. It solves budget problems for sure.
Frank Moyer: Yeah, and it's about using the devices in hand. If you still have devices in hand in your organization, you can still use those as well. So if you have 30 devices right now that put you at the 75% but you want to get to 90%, you can plug those 30 into your own environment, connect to Kobiton, and supplement them with the other devices provided by Kobiton.
Frank Moyer: In addition to device types, which have so much disparity, you also have operating systems. Not every operating system operates and behaves the same way. So on Android, you can't guarantee that a Samsung S8 will behave the same way on two different versions of Android.
Nilesh Patel: Yeah, it makes sense. I was wondering, one question came to mind: what about the different types of networks that you use? Because obviously there are different things going on in different countries, different than the US, and I'm sure that has to be tested as well to [crosstalk 00:34:44] the network bandwidth and all the other stuff that goes along with mobile, because you're not always on the same carrier. You have different carriers, and that could introduce a whole slew of different types of problems to solve.
Frank Moyer: Right. We connect a lot of our devices to SIMs. We have SIMs [inaudible 00:34:57] in them, so they're actually connected to different carrier networks, because, like you said, they perform differently-
Nilesh Patel: Differently, right.
Frank Moyer: Not just speed but also how they actually transmit packets.
Nilesh Patel: Yeah. I would say internationalization would be an issue too, or localization, or whatever you want to call it. That would be another complexity of mobile testing as well, just because you can change the fonts and keyboards to different languages and so forth. So that has to be supported as well. I think that would be another variation on the complexity of mobile testing [crosstalk 00:35:32].
Frank Moyer: Yeah. One of our clients, their application only works in Australia. It's a video streaming application, and the only way to use it is to actually appear to be in Australia. So we use a proxy service, Luminati, that makes it appear like the device is in Australia, so they can actually test even though the device is physically located elsewhere.
Nilesh Patel: A cool thing, I think, that you mentioned earlier is record and playback, so I want to know a little bit more about that. What challenges do you guys face with record and playback on mobile? I know there are challenges on the web side of things, but it'd be cool to hear your perspective on it from the mobile side.
Frank Moyer: Crashes don't really happen in the web world-
Nilesh Patel: Not as much, yeah.
Frank Moyer: It's a different animal in mobile. A common complaint from users is that an app is continually crashing, like Twitter, and then there's the cross-device-type testing. The world of record and playback on web has been around for 15-plus years-
Nilesh Patel: Or even more than, yeah. It's been around for a while.
Frank Moyer: The biggest challenge on mobile is that there is no standard for rendering the XML or the HTML. In a browser, there is a W3C standard, a specification that says, "Here is how the HTML should be rendered." Chrome and Safari and Firefox are all using that specification. Microsoft got into trouble over the years because they hadn't followed it too exactly, but now they're back in line and everyone's trying to adhere to this spec. In the mobile world, that spec doesn't exist. It's a free-for-all, and the manufacturers get to render the XML however they deem appropriate. They're using that as a way to get a competitive advantage over their competition: "We're going to render it to give a smoother animation." So they control the rendering, they control the XML.
Frank Moyer: That makes it increasingly difficult to do record and playback, because if you record in one browser, you say, "Here's the element. Here is the selector to get to that element." In the mobile world, that selector is different between a Samsung and an LG, between a Samsung S8 and an S10. So there's just a lot of variability, and it makes that almost ... It's very difficult to do.
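To illustrate the maintenance burden Frank is describing, here is a hypothetical sketch: without a rendering standard, the "same" login button needs a different, hand-maintained selector on each device profile. The device names, XPaths, and app ID below are all invented for illustration.

```python
# One logical element, many device-specific selectors. Each entry must be
# discovered manually by inspecting the element tree on that device.
LOGIN_BUTTON_SELECTORS = {
    "samsung-s8":  '//android.widget.Button[@resource-id="com.app:id/login"]',
    "samsung-s10": '//android.widget.TextView[@text="Log in"]',
    "lg-g7":       '//android.view.ViewGroup/android.widget.Button[2]',
    "iphone-8":    '//XCUIElementTypeButton[@name="Log in"]',
}

def selector_for(device):
    """Look up the hand-maintained selector for a device profile."""
    try:
        return LOGIN_BUTTON_SELECTORS[device]
    except KeyError:
        raise ValueError(f"No selector recorded for {device}; "
                         "a tester must inspect that device first.")
```

With N devices and M tests, this table grows to N×M entries, which is exactly the "50 devices times 100 tests" problem discussed below.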
Nilesh Patel: Yeah. I can see the pain for the automation engineers trying to automate that, because in a true sense, they would have to automate on all the different devices, or at least the different OSes that are out there, to make it work. So they'd have to do a script on Android and a script on iOS and so forth.
Frank Moyer: Yeah. So the typical workflow, like with Perfecto, is you go and record a script on one device. Then you, as a tester, go into the other device types and get the selectors that you plug into your script. So if you're scripting across 50 devices, you're going into 50 different devices-
Nilesh Patel: Wow. That can be ... and that could slow down their-
Frank Moyer: Times 100 tests, yeah.
Nilesh Patel: That could slow down your release, slow down your automation, and slow down getting things out as fast as we want to. So I can see that could definitely be a pain point.
Frank Moyer: So we launched, on January 6, the ability to do that: the ability to record on one device and play it across any other device type. We use a combination of machine learning and statistical techniques to identify the element across device types. That allows the tester, right before a release, to run a manual test on one device and then replay it across hundreds of devices if they [inaudible 00:39:24].
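Kobiton's actual matching algorithm is not public, but as a toy illustration of the kind of "statistical technique" Frank mentions, one approach is to score each candidate element on the target device by attribute similarity to the element recorded on the source device, then pick the best match. All field names and weights here are invented.

```python
# Score a candidate element's similarity to the recorded element using
# text, widget class, and relative screen position (which tolerates
# different resolutions and aspect ratios).

def similarity(recorded, candidate):
    score = 0.0
    if recorded.get("text") and recorded["text"] == candidate.get("text"):
        score += 3.0                      # visible text is a strong signal
    if recorded.get("class") == candidate.get("class"):
        score += 1.0                      # widget class is a weaker signal
    dx = abs(recorded["rel_x"] - candidate["rel_x"])
    dy = abs(recorded["rel_y"] - candidate["rel_y"])
    score += max(0.0, 1.0 - (dx + dy))    # reward nearby relative position
    return score

def best_match(recorded, candidates):
    """Pick the candidate on the target device most like the recorded one."""
    return max(candidates, key=lambda c: similarity(recorded, c))
```

A production system would add more signals (accessibility IDs, tree position, visual features) and likely learned weights, but the shape of the problem is the same: fuzzy matching instead of exact selectors.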
Nilesh Patel: I think that's a pretty cool thing and a pretty big game changer, if you ask me, because, again, it gets to that whole point of delivering faster and being able to do more. I'm not waiting on the automation guys to sit there and script for the 50 different devices. I'm doing it one time and it's ready to run on all of them, which is a pretty cool thing if you ask me.
Frank Moyer: Yeah. We set out to empower the manual tester to give them the same capabilities that an automation engineer would have.
Nilesh Patel: Other than that, it just saves the tester time to work on other things, and they can test more important things than sitting there worrying, "Man, it's going to take me four days just to run this one script for everything." Now they can do it in a matter of minutes or however [inaudible 00:40:01].
Frank Moyer: That's right again.
Nilesh Patel: I think that's pretty cool. Just to say, I used to be a tester in a past life. For me, saving time was the best thing. That's why I liked automation: I didn't have to do things manually. I wanted to save some of my time and use it on more important things I needed to work on. So I see a lot of testers being definitely willing to jump on this if they could have it available to them.
Frank Moyer: Yeah, and our existing customers have enjoyed it up to this point. We are opening it up to all of our trial customers on February 17th, along with the first of what we're calling assertion libraries: our crash detection library. You can run a manual test on an iPhone 8, rerun those same tests, the same commands, across 100 different devices, Android and iOS, and you will get a report of any crashes that occurred during that test.
Nilesh Patel: That's pretty cool for sure.
Frank Moyer: We will be expanding that throughout the year to include many more assertions. We have about six different assertion libraries that will look at different aspects of the application. So by the end of it, you will get a holistic view of an application in terms of visual performance, how it uses CPU and memory, and obviously crashes. We've got those lined up quarter over quarter, over the next few quarters, and we look forward, by the end of the year, to where a tester can execute a manual test on one device and rerun it across any other device type. We'll automatically detect any performance issues, UI issues, crashes. Then when you get a new application release, instead of having to rerun all of those manual tests again, we have auto-healing in place that will allow those to be automatically updated.
Nilesh Patel: That's pretty cool. Is that all going to be part of, I guess, the out-of-the-box solution for Kobiton, or do I need to configure it or do something separate with it?
Frank Moyer: It's just like you do today with running a manual test, but you'll be given the ability, at the end of the manual test, to expand that to hundreds of devices.
Nilesh Patel: Makes sense. It's pretty cool. Sounds interesting.
Frank Moyer: So thank you very much to the audience here. Thank you for your time, and I think I'm going to hand it back to Beth.
Beth Romanek: Okay, thanks very much. So as you can see, the Q and A is now in session. We've already got some great questions here, but everyone, feel free to continue asking the speakers your questions by typing in that questions and answers panel on the left and then clicking submit. Okay. So first question for all of you, this person wants to know, will I be able to invoke the automated script generated by the intelligent test automation product?
Frank Moyer: Yes. There is an API that will allow you to launch any of your tests, so you can integrate it into your CI/CD. The workflow is: you run a manual test, you decide on the devices you want to run it against, and then from the API in your CI/CD pipeline, you will be able to call that automation ID and specify the devices you want to run it against.
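As a rough illustration of wiring a recorded test into a CI/CD pipeline the way Frank describes, here is a hedged sketch that builds such an API request. The endpoint URL, payload fields, and header format are all invented placeholders; the real Kobiton API will differ, so consult its documentation.

```python
import json
import urllib.request

def build_run_request(automation_id, devices, api_key):
    """Build a POST request that asks a (hypothetical) test-run endpoint
    to replay the recorded test with this automation ID on the given devices."""
    payload = {"automationId": automation_id, "devices": devices}
    return urllib.request.Request(
        "https://api.example.com/v1/test-runs",   # placeholder URL
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Key {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# In a pipeline step you would then send it, e.g.:
#   urllib.request.urlopen(build_run_request("run-42", ["Galaxy S8"], key))
```

A CI job would call this after the build stage and fail the pipeline if the returned report contains crashes or assertion failures.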
Drew: Okay, awesome. Let's see here. So I'm an existing Kobiton customer. Is ITA currently included in my plan?
Frank Moyer: So right now, the navigation part of ITA, which is the ability to replay across a multitude of devices, will be included in your plan. You will still have to pay for the device minutes used by the execution across those devices.
Drew: Awesome. We have 30 in-house devices today that we perform manual testing on. Can I run ITA against these devices?
Frank Moyer: You can. So you can use a combination of in-house devices as well as our cloud devices and it's just a matter of pointing to those devices that you have in-house. What we do is we'll look at the device type and we will provide preference to anything that you have in-house or private and then fall back to a public cloud device if it's not in your private list.
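The preference logic Frank describes can be sketched in a few lines: prefer an in-house (private) device of the requested model, and fall back to the public cloud pool only if the private list can't satisfy the request. The function and field names below are illustrative, not Kobiton's actual API.

```python
# Prefer private devices; fall back to the public cloud pool.

def pick_device(model, private_pool, public_pool):
    """Return an available device of the given model, private first."""
    for pool in (private_pool, public_pool):
        for device in pool:
            if device["model"] == model and device["available"]:
                return device
    return None  # model not available anywhere
```

The same lookup runs once per target device in a cross-device replay, so a 30-device in-house lab absorbs most of the executions and the cloud only fills the coverage gap.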
Drew: And how many devices does a Kobiton license cover and how many devices can we test on simultaneously?
Frank Moyer: So the license structure is based on the number of slots, or concurrent testers. If you want to run your tests faster, you can increase the number of concurrent or simultaneous tests; that is how it's structured. You can have 50 devices, and if you want to run five concurrent tests, you can do that.
Drew: So I've used Perfecto's record and playback at my last job, and I still had to go into every device that I wanted to test on, determine the XPath selectors, and then add those to the generated script. Quite a process. So will I need to do that with ITA as well?
Frank Moyer: Yeah. So I covered this earlier. No, you won't. Our machine learning will actually identify the selector across device types. We'll also provide you the ability to export the selectors so that you can run those in your raw Appium scripts, instead of having to go, like you do today in Perfecto, into every single device type. You can export that and run ... If you're an automation engineer, don't focus on going into every device and every element and finding out the selector. That's no fun. You want to code the logic. You want to break the system, not break your fingers figuring out all of the-
Drew: The [crosstalk 00:47:10], yeah.
Frank Moyer: Yeah. So we're making this really, really easy, and hopefully we're making the automation engineers', the Appium automation engineers', lives a lot more pleasant.
Drew: Awesome. Okay, next question. I'm familiar with Mabl and its ability to compare UI across different versions. Will your ITA product be able to do that as well?
Frank Moyer: Yes. In Q2 of this year, we'll start releasing our UI comparison libraries, our assertion libraries, that will do that comparison. One thing I'd like to point out on this question: unlike web, where the viewport is a browser and can be resized, with mobile apps you're looking through a viewport that has a set of standard aspect ratios, dimensions, and operating systems. We're going to use the intelligence from the metadata contained in those devices to make it a much more pleasant experience than you get anywhere else.
Drew: Awesome. Okay. So between Mabl, Functionize and Katalon, do you have any suggestions on which one I should use for browser testing?
Frank Moyer: No, there's not one for all. It depends on what your company or you want to focus on. I think they all have great benefits, each one of them. Mabl is very good for browser UI testing. Again, if you're looking for more API testing and so forth, then I would say Katalon. But if it's just browser only ... I think all three of them have their benefits, but it depends on how you want to expand your automation. Mabl doesn't support API testing well just yet. So if you want to do functional and API testing, then I would say Katalon.
Frank Moyer: Functionize is still fairly new, in my opinion, but they have a lot of skills there too. So it depends on how much in-depth coverage of testing you want to do; then you can select the tool of your choice. If you're focused on UI only, then I would say go with Mabl and don't worry about Katalon and Functionize. If you care about API testing and mobile testing and all that other stuff, then I would say, "Hey, maybe you want to look at Katalon instead." So you have to look at your needs first before you decide which tool to use. They all do things really well; what matters is whether they can do what you need for your testing needs.
Drew: Sure. How do you choose the tests to run for continuous testing, from a best-practice standpoint?
Frank Moyer: So my opinion there is you've got to choose tests that are very stable, because these are tests that you're planning to run daily, if not a couple of times a day, maybe multiple times a day. So you have to choose tests that are very stable, that are going to run very quickly, and that are not very UI-intensive, in my opinion. For me, API tests are very good tests to choose for your continuous testing because those are quick. If your API is broken, then nine times out of 10 your UI is going to be broken too.
Frank Moyer: So you want to pick tests that are robust and clean and very stable. Then you have to create test suites that actually make sense. It depends on the type of release you have. If it's a quick release, then you can maybe get away with running 5 or 10 tests. But if it's a complete platform modernization, you might need to run the whole end-to-end suite. So you need to figure out ... You've got to select test suites that logically fit your release, basically. That's how I would say to choose the right tests to run.
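The suite-selection advice above can be sketched as a simple mapping from release type to test suites, favoring fast, stable API tests for frequent runs and reserving the full end-to-end suite for major releases. The release types and suite names are invented for illustration.

```python
# Map release types to the test suites a pipeline should run,
# ordered roughly fastest-first.
SUITES_BY_RELEASE = {
    "hotfix": ["smoke", "api-critical"],
    "weekly": ["smoke", "api-full", "ui-core"],
    "major":  ["smoke", "api-full", "ui-core", "ui-extended", "end-to-end"],
}

def suites_for(release_type):
    """Return the suites for a release type, defaulting to a smoke run."""
    return SUITES_BY_RELEASE.get(release_type, ["smoke"])
```

Keeping this mapping in pipeline configuration, rather than in the tests themselves, lets the team tune what "fits the release" without touching test code.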
Drew: Cool. Will I be able to invoke the automation scripts generated? That's great. That's it. Keep going.
Frank Moyer: It looks like somebody ... You may have read that one already? [Inaudible 00:50:49]?
Drew: Yeah. Oh yeah, I think we did. All right. Sorry about that. I think that's it, actually. Okay, awesome. Yeah, I think that covers all the questions. If anyone has any follow-up, feel free to email us at email@example.com and I'll be sure to share those questions with Frank and Nilesh and try to get you an answer turned around quickly. Other than that, thank you everyone for showing up today. We appreciate it. Thank you to the Techwell team for hosting us, and thank you Frank and Nilesh.
Nilesh Patel: You're welcome.
Frank Moyer: Thank you.
Drew: Thank you.
Beth Romanek: All right. Thank you guys, and that will end our event for today. As Drew said, if you think of any other questions you'd like to ask the speakers, please email marketing@kobiton.com. I would like to thank Frank and Nilesh for their time and Kobiton for sponsoring this web seminar, and of course all of you in the audience for spending this time with us today. So everybody have a great day, and we hope to see you at a future event.