Abstract
Automation is no longer optional when it comes to testing. So why are so many organizations struggling with moving from manual to automated testing, especially with mobile apps? Join our podcast-style interview where Emily Thomas, an automation expert, reveals her recipe for successfully helping organizations accelerate their move from manual to automated testing.
Accelerating Automation: How to Fast-Track Mobile Testing Maturity
Don’t miss the chance to gain expert insights on securing executive support, managing change, and driving automation maturity in your organization.
Video Transcript
0:00 | Matt Klassen
Good evening. Welcome to this session at the Mobile Testing and Experience Summit. My name is Matt Klassen. I’m one of the leaders at Kobiton, and I’m really passionate about helping customers deliver better apps, specifically mobile apps, and to do that rapidly. So I’m really excited about this session, because my guest Emily Thomas is a test automation expert. Her career is all about helping organizations deliver applications and improve automation, and to do that efficiently and quickly. So thank you for being here, Emily. If you wouldn’t mind, tell us a little bit about yourself?
0:35 | Emily Thomas
Yeah, thanks for having me. So I am a test automation engineer and a consultant. I have been doing test automation for about seven years now in the industry, across three to four different apps depending on how you measure. And I have been mainly self-taught; I’ve moved from not knowing anything to knowing everything I know now largely through self-teaching and through reaching out to our tech community. I am currently working on a point of sale app for a well-known national food brand, and I have worked on some billing apps in the past. So I love working on new things, working on really complex things and breaking them down into their component pieces.
1:15 | Matt Klassen
Yeah, that’s awesome. We’ll dig in a little bit more, maybe into some of the details or lessons learned from your current project. But as we were prepping for this, I found that your overall approach would be extremely valuable for people to understand, given that you’ve worked on numerous projects with the goal of moving from little or no automation to significant automation quickly. So maybe share a little bit about your approach as you move into a new project. What do those steps look like?
1:43 | Emily Thomas
Yeah, 100 percent. So a lot of my history is with brand new apps that have zero automation and zero testing already in place. So a lot of what you start with is talking to people. I know we as engineers aren’t always keen on having long, in-depth conversations, but a huge part of doing quality and doing testing is having these conversations and researching whether there’s existing documentation, whether we have existing test suites, whether we have previous apps we can build on. Both of the apps that I’ve worked on most recently have been replacements for older legacy apps that needed an overhaul, where updating them was more onerous than just building from scratch. So you work from that previous knowledge and say, okay, here’s how it’s worked for users in the past, here’s how it’s worked for developers in the past; how do we bring that understanding and awareness into this new application? Those conversations, where you start with your initial research, check in with your developers, your project manager, your product owner, and do that iterative cycle of building knowledge before you can actually build your test suite, and importantly build your automation off that test suite, are, I think, one of the most crucial initial steps in getting started on a project.
2:47 | Matt Klassen
Yeah. I’m really interested, because I have a little bit of background in QA from the past. As you approach things from project to project, what is the difference between building a manual suite of tests and an automated suite of tests? Where do those diverge from an automation perspective?
3:07 | Emily Thomas
When you’re thinking about it holistically, there’s always going to be some set of things that simply can’t be automated. I think you can always build an automated test off of a manual suite, but you can’t always build a holistic manual suite off of an automated suite, because automated suites are more often than not limited by the hardware, or by what race conditions you may have going on at a technical level. So I think the biggest difference is the approach and mindset of evaluating the practical return on investment of a specific test case or a specific test function.
3:39 | Matt Klassen
Yep. Okay. And we will touch on some of the metrics later. So as you think about this overall process, a couple of other things caught my ear, if you will, as we were preparing. One of them was the importance of a common repository. Can you talk a little bit about that? How does that work, and why is it important?
4:00 | Emily Thomas
Yeah. So using GitHub or Maven as a common repository, especially when you’re building an automation suite or even just your test plan, is a good way to make sure the whole team has access to it. So if you have questions, you can link to it and say, hey, I’m looking at this test case, I’m running into these questions; do we think these are valid questions? But especially when you are sharing that space with manual testers and with other quality engineers who are building automation, having the ability to build off each other’s knowledge and to build together is crucial to developing any project successfully. On my current project, we have a shared repository that holds the framework we’ve built. In that framework we have the ability to add new test cases, and we have the documentation, which is incredibly important, that tells us how to add new test cases and how to use those test cases to our greatest advantage. That enables us as a team to move forward together.
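For readers picturing what such a shared repository might contain, here is a hypothetical layout in the spirit of what Emily describes; the names are illustrative, not her client’s actual structure:

```
mobile-automation/
├── README.md            # how to set up and run the suite
├── docs/
│   └── adding-tests.md  # the "how to add new test cases" guide
├── framework/
│   ├── drivers.py       # Appium session setup per platform
│   └── pages/           # page objects shared across tests
├── tests/
│   ├── test_login.py
│   └── test_checkout.py
└── locators/
    ├── android.json     # per-platform identifier maps
    └── ios.json
```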
4:53 | Matt Klassen
Yep. Yeah, that’s cool. I think that’s really critical; it’s one of the things that allows people to scale efficiently, and there’s value in maintaining your test cases and suite as well. And the one thing I always find interesting is that developers typically aren’t big on requirements. But what you said is that it’s pretty much impossible, or at least much more difficult, to actually build verification and validation, if you will, of an application if you’re not sure what it’s required to do. So having a common understanding of requirements becomes extremely important from a testing perspective, and probably important in that repository as well; in some ways it should link back to those requirements, I’m guessing?
5:37 | Emily Thomas
Yeah. And with a lot of developers, it’s not that they’re not big on requirements; it’s that they don’t want to write requirements down, because, in my personal experience, a lot of developers feel beholden to requirements as soon as they’re written down. But whether you write them down or not, there are always going to be requirements. They’re just going to be either implicit or explicit. Making those requirements explicit gives you a metric you can hold your standard to. If they’re implicit, that goalpost can move constantly, and you can have things fail that were working before with nothing changing. So I think it’s very important to document those requirements specifically, so that you can have that measurable success rate as a team.
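As a concrete illustration of this point (not taken from the interview): once an implicit expectation is written down as an executable check, the goalpost stops moving. A minimal sketch, runnable with pytest; the requirement ID and helper function are hypothetical:

```python
def receipt_total(line_items):
    """Sum line items into the displayed receipt total, in dollars."""
    return round(sum(line_items), 2)

def test_receipt_total_matches_line_items():
    # REQ-042 (hypothetical): the receipt total must equal the sum of
    # all line items. Once written down, this is a fixed, measurable
    # standard the suite can verify on every run.
    assert receipt_total([3.50, 2.25, 0.99]) == 6.74
```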
6:15 | Matt Klassen
Yep. So let’s go back to that overview we were giving. I’m just curious, as you’ve gone through this process several times over the years, what are some of the biggest challenges you typically face? The things that can chew up a lot of time, or that make this a difficult journey?
6:36 | Emily Thomas
Probably one of the biggest ones is just finding the correct stakeholders and finding the correct answers. And I know we talked about this briefly: there are occasionally challenges with frameworks, because certain mobile operating systems are a little more complicated to work on than others, given the required technologies involved in getting connected to the operating system and into the infrastructure in order to run your test cases. But I find that the biggest challenge most days is knowing who to talk to, and knowing the right people to talk to in order to get the right answers. That’s why that iterative communication process is so important to the testing cycle.
7:11 | Matt Klassen
Yeah. So once again that’s a good lesson here, which is talking to people. In this digital world we live in, we tend to think, oh, we can work remotely, we can do all these things very efficiently without actually interacting with people. But especially when we know not all requirements are going to be explicit and written down, and there are going to be implicit ones we need to dig into, having those conversations and collaborating is extremely important. So that’s a great tip. People might be surprised we’re talking about this as much as we are when they’re coming to an automation session, but it’s really important. So let’s dig in a little bit on something you mentioned: you alluded to mobile being more difficult than web. Why is testing mobile apps more difficult? Is it the real devices? Is it something about the automation frameworks? Where does that difficulty and challenge lie?
8:06 | Emily Thomas
Yeah. I think a lot of the challenge comes from the frameworks themselves and from the hardware itself. For iOS, you have Xcode and you have XCUITest, and even if you’re using Appium, you’re going to be dependent on those tools. If those tools are having issues, or if you have a versioning issue, like a version of Xcode that’s incompatible with the version of iOS you’re trying to use, you’re going to run into a lot more problems that way. There are also challenges related to hardware, with timing issues. I know we talked about the fact that a button can load and your framework might not detect it. So you can run into a lot of issues with timing. You can run into issues like, I decided I wanted to press these two buttons in a row, but the second button hasn’t loaded yet. That all comes from the actual hardware and the operating system, and from trying to tie these automation frameworks in with the operating system frameworks themselves.
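A common way to handle the “second button hasn’t loaded yet” problem Emily describes is an explicit wait. A minimal sketch, assuming the Appium Python client and an already-created `driver` session; the accessibility IDs are illustrative:

```python
from appium.webdriver.common.appiumby import AppiumBy
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def tap_in_sequence(driver, first_id, second_id, timeout=10):
    """Tap two buttons in a row, waiting for each to be tappable first."""
    wait = WebDriverWait(driver, timeout)
    for accessibility_id in (first_id, second_id):
        # Poll until the element exists AND is enabled, instead of
        # assuming it rendered as soon as the previous tap returned.
        button = wait.until(
            EC.element_to_be_clickable((AppiumBy.ACCESSIBILITY_ID, accessibility_id))
        )
        button.click()

# Usage (illustrative IDs):
# tap_in_sequence(driver, "add_to_cart", "checkout")
```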
8:59 | Matt Klassen
Some of the complexity you just talked about goes back to layers of technologies and frameworks. Specifically, I think one of the things a lot of organizations struggle with is that Appium is similar to Selenium, or built on Selenium, but its purpose is also to allow organizations to create automation that theoretically could run against the same app whether it’s on iOS or Android, or at least the framework is compatible. I think that adds a layer of complexity to some degree, and maybe creates other issues, like performance issues. So talk a little bit about your choices there; I think you said you’re using Appium on your current project, with Python as the language. How is that working?
9:53 | Emily Thomas
Yeah. So I’ve been using Python for as long as I’ve been automating. It’s a language I’m very comfortable with, and Python has a lot of tooling, like Pandas and PyTest, that lets you tie in automation very easily. That’s part of what led me to choosing Python and Appium. And I will say, on my current project I’m actually supporting both the iOS and the Android versions of the point of sale app. The test suite is currently capable of running both of those versions. But you are right that it adds a layer of complexity, because when you’re running Appium, the best practice right now is to use accessibility identifiers to find objects on the page, and as a result you have to have the same accessibility identifiers in both the Android app and the iOS app in order to run it that way. One of the bigger hurdles there is that iOS and Android are functionally different. They do not have the same code bases, and they do not have the same behaviors on certain things, so there are going to be limitations on that point. Where that can be overcome, we’re currently looking at adding configuration files that let us feed in those configs separately, so we can say, hey, grab the accessibility identifier for this button on Android or iOS based on whichever config we need to run. That’s one option for overcoming that complexity: separate configs that support each app individually while still running the same sequence of buttons in your mobile app testing. As for performance, the test suite itself doesn’t necessarily run slower, but iOS and Android have different performance levels and can load things at different rates. That can add some complexity when you start looking at the metrics: okay, it took 0.35 of a second here versus 0.75 of a second there; is this a failure of code, or a failure of structural timing?
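Here is a minimal sketch of the separate-configs idea Emily describes, assuming one JSON locator file per platform; the file names and keys are illustrative, not her project’s actual values:

```python
import json
from pathlib import Path

def load_locators(platform: str) -> dict:
    """Load the accessibility-identifier map for 'android' or 'ios'.

    Each file maps a logical name the tests use to that platform's
    actual identifier, so one suite can drive both apps even when
    the identifiers diverge between code bases.
    """
    path = Path("locators") / f"{platform.lower()}.json"
    return json.loads(path.read_text())

# locators/android.json might contain: {"pay_button": "btn_pay"}
# locators/ios.json might contain:     {"pay_button": "payButton"}
#
# locators = load_locators("android")
# driver.find_element(AppiumBy.ACCESSIBILITY_ID, locators["pay_button"]).click()
```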
11:40 | Matt Klassen
Yep. So on that, let’s dig a little into using real devices versus virtual devices. I think that also varies a little by OS, because from what I’ve heard, Android emulators are a little more advanced than iOS simulators, or at least the ones that are easily acquired, open source, or otherwise available. At the end of the day, I think most organizations that are pretty serious about their apps will test on real devices at some point. But maybe talk a little about your approach on the current project: where do virtual devices begin and end, and when do you move to real devices?
12:26 | Emily Thomas
Yeah. For me, probably the biggest limitation of virtual devices, both simulators and emulators, is that there are simply certain things they can’t do. On the iOS simulators, for example, you can’t actually use the camera in any way; you can’t even feed a fake image to the simulator to get it to do certain behaviors. So there are tests where we have to rely on physical devices in order to get the best results. On top of that, there are certain things in iOS specifically where a UI frame will not load the same in the simulator as on a hardware device. I’m not 100 percent an iOS expert, so I don’t know why it behaves that way, but I can say that from a testing perspective, especially if you’re more dependent on XPaths, which is not best practice, but everyone has to do it at some point, the differences between the UI interpretations on a simulator versus a hardware device are going to throw off your tests as well. And again, it is best practice to use accessibility identifiers or some other identifier rather than XPaths, but everyone has to use XPaths for one test or another that doesn’t have an identifier yet, especially when you’re working on a brand new app that you’re first starting to build the automation for.
13:33 | Matt Klassen
Yep. I think that’s a great explanation. I was going to ask you to explain identifiers, and you’ve covered part of that, but maybe we should go back and, for people who are newer to automation in the mobile world, talk a little bit about identifiers: why they’re absolutely critical from an automation perspective, and how the way you use them will make or break, if you will, the maintainability and resilience of your scripts.
14:02 | Emily Thomas
Yeah, yeah. So best practice, like I mentioned, is to use identifiers, and the reason is this. XPath has a very broad definition, because you can actually use an XPath to find identifiers as well. But when you’re using an XPath based on a div in a website (again, going back to that Selenium framework), those locations on the page can change if you have any UI changes. So using XPaths is generally accepted to be a more brittle testing practice. With identifiers, especially if those identifiers are stable, even if the position of a button changes, even if the wording on a button changes, you should be able to find it regardless, because of the way Appium searches for what are called elements on the page, which are the objects from an object-oriented programming perspective. When it’s finding those elements by identifiers, it is just searching the page for an identifier that matches: a name that equals something, or an accessibility ID, which is the most accepted identifier, that equals something, and it finds that element. And then, whether you’re clicking it, searching within it, or going to its parent object to click its parent, what have you, accessibility identifiers are just more stable from a testing perspective, so you won’t end up with so many false negatives in your test suite.
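To make the contrast concrete, a small sketch with the Appium Python client; both locators are illustrative:

```python
from appium.webdriver.common.appiumby import AppiumBy

def find_submit_button(driver):
    # Brittle: a positional XPath tied to the current view hierarchy.
    # A new banner or a reordered toolbar silently changes which button
    # is "the second one" and breaks the test:
    #   driver.find_element(AppiumBy.XPATH, "(//android.widget.Button)[2]")

    # Stable: Appium just searches the page for the matching
    # accessibility ID, so the element survives layout moves,
    # restyles, and label edits.
    return driver.find_element(AppiumBy.ACCESSIBILITY_ID, "submit_order")
```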
15:21 | Matt Klassen
Yep. Which actually brings me to another point, which is that the best, highest quality applications begin with developers who begin with the end in mind, right? They know a little bit about QA, or a QA organization is influencing how the applications are actually built, because there is a difference: how you build an application influences how you can test it, and specifically how you can automate the testing of it, and makes that easier, better, more maintainable, more achievable, more efficient, or less so, which in the end will of course impact quality. So, you’re obviously passionate about automation. Why should companies pursue automation for apps in general, and mobile apps in particular, even though there are some difficulties along the way? What’s the payback? What’s the value? Why should an organization pay that cost?
16:23 | Emily Thomas
It’s going to sound trite, but automation is the future. Automation enables companies to move faster, and it enables testers to move faster. For example, we can depend on our automation to test some of our most basic P1, or priority one, high priority regressions, and because we can depend on that, we can go into the more niche exploratory testing. One of our more complex bugs: we were doing a field test, and the app kept crashing on a QR code scanner. It was bringing up the camera, and every once in a while it would crash. It only happened two to three times in the field out of, you know, five hours of testing, so we were struggling to understand it. Because we have that automation, I was able to play around with it and eventually figured out there was a thread that was getting hung on the camera. We wouldn’t have had the time to do that if we were depending fully on manual testing as our only fallback for mobile testing; we simply wouldn’t have had the freedom or the time. But because we were able to do that, we were able to fix this niche, barely-ever-occurring bug that was going to be impacting users of the app, and we were able to move forward past it.
17:35 | Matt Klassen
Yep. I love that example, and I think it’s also a perfect lead-in to something else we talked about, which is: what should the goal for automation be? Because I think when people catch the automation bug, they think, 100 percent, we’re going to automate 100 percent of our test cases. But what should the metric be? What should the approach be there? What does good look like?
17:57 | Emily Thomas
I always joke that my personal goal is 100 percent. I would love to automate 100 percent. It’s not a realistic goal; it’s not something that’ll happen in reality, and I think it’s going to vary from project to project. It depends on what peripherals you might have attached to your app, what images you might need to input, what things you might need to inject into the camera. Aiming for, you know, 80 percent is beautiful, it’s wonderful, we all want to do it. But honestly, I think aiming for that first one percent of tests, getting that framework up and running, and getting that first one percent done is going to be your biggest achievement. Because once you have that one percent, that first couple of tests, that first piece of your framework, expanding on it is going to be a lot easier. There’s a saying our team uses a lot: the first 80 percent of the work takes 20 percent of the time, and the last 20 percent of the work takes 80 percent of the time. So it’s important that you just jump in and start building. I think once you’re building, you’ll be able to get a better idea of how much of your app you can automate.
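For a sense of scale, here is a hedged sketch of what that “first one percent” might look like: a single smoke test that starts an Appium session and verifies the app reached its home screen. The capabilities, file path, and identifier are placeholders:

```python
import pytest
from appium import webdriver
from appium.options.android import UiAutomator2Options
from appium.webdriver.common.appiumby import AppiumBy

@pytest.fixture
def driver():
    options = UiAutomator2Options()
    options.app = "/path/to/app-under-test.apk"  # placeholder path
    drv = webdriver.Remote("http://localhost:4723", options=options)
    yield drv
    drv.quit()

def test_app_launches_to_home(driver):
    # The smallest useful assertion: the app started, and the home
    # screen's root element is present.
    assert driver.find_element(AppiumBy.ACCESSIBILITY_ID, "home_screen")
```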
18:57 | Matt Klassen
Yeah, that’s good. And I think you also have some ideas about how you pick which tests should be automated versus those that maybe shouldn’t be. What’s that strategy? How do you know where to start? What’s the one percent you should target first?
19:14 | Emily Thomas
Your most critical regressions, obviously, right? If you’ve got something that has to work 100 percent of the time, automating that should be your highest priority. I think return on investment is a big thing. If you’ve got an edge case where this one user, in one instance of the app, on one set of hardware, is having an issue that nobody else is having, there’s not much value in automating that. But if you’ve got critical regressions, if you’ve got the things that, to be crass, are going to be making your app money, those are the things you should probably focus on getting automated first.
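One lightweight way to encode that prioritization is with pytest markers; the marker names here are conventions I am assuming for illustration, not a standard:

```python
# pytest.ini (register the custom markers):
#   [pytest]
#   markers =
#       p1: revenue-critical regression
#       p3: rare edge case, low automation ROI

import pytest

@pytest.mark.p1
def test_complete_purchase():
    """Critical path: this makes the app money; automate it first."""
    ...

@pytest.mark.p3
def test_one_device_edge_case():
    """One user, one device, one instance: low return on investment."""
    ...

# Run just the critical set:  pytest -m p1
```

Running `pytest -m p1` keeps the frequent regression run focused on the money paths first.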
19:48 | Matt Klassen
Yep, good advice. All right, so we’re going to close out with two final topics. The first one is AI, and then we’ll talk about takeaways from the session. As you think about AI, there’s obviously a huge craze. I heard that at the recent STAREAST conference, which is a big QA conference, AI was in almost every session title. So what is your take on AI in testing and test automation? What’s going to be interesting? What’s available today, or what would you like to be available?
20:20 | Emily Thomas
I think AI is a really interesting tool, and a lot of people really want to leverage it. For quality, I think most of our leveraging of AI will become available in about three to five years, once we have large language models, or machine learning, or whatever we’re calling it by then, with the ability to take input from a system like Bugsnag that captures the breadcrumbs of what users are tapping. You could feed that information about how users move across your app into a training model, tell it, this is how users are using the app, and that AI could then leverage your framework to run tests for you. I don’t think we’re going to reach a point where native apps in particular are truly, purely AI-automatable for a good while yet. I think we’ll get there eventually, right? AI will eventually expand its capabilities, just like any tool in the past, just like the wheel, just like glasses. But I think that’s probably three to five years out on the horizon. In the next 12 months, it’s probably what we were talking about earlier: identifiers. I know there are some tools out there, like Kobiton, that have a little bit of AI baked in to try and find similar identifiers. But what I’d love to see is the ability to feed the key presses that have happened in the field to an AI and say, hey, here’s how users are using it; can you try and tell me how they’re going to use it in the future?
21:39 | Matt Klassen
Yeah, that’s good, and I think that is where we need to go. You mentioned Kobiton, and there are a lot of similar capabilities out there. But one area I find interesting is that some of the people listening today maybe aren’t automation experts, so they don’t think in terms of code. What they can grasp is: I know testing, I know how to create a set of test cases, and I can execute a manual test. I know you are evaluating some of that capability in our tool suite as well, in terms of the ability to do scriptless automation, or to generate Appium scripts from a manual test session. I think that’s interesting and unique, and we’re working to make that scale. I know we don’t support Python right now, so that’s one of the blockers for you; we do Java, JavaScript, and .NET. But would that be an interim step that would be of value before we get to the full AI you were discussing, either scriptless or the ability to generate Appium scripts?
22:49 | Emily Thomas
It wouldn’t for me. But I know some of our manual testers who have an interest in automation have been playing around with the framework and teaching themselves how to do automation in it. So I think it could be useful for manual testers who are trying to learn. Kind of like having, for lack of a better word, a Stack Overflow that can personalize its answers, rather than going to a generic question like, how do I get this try/except loop to work? I will say, while Kobiton may not support Python script generation, I have not found any Python-related issues so far with running my Python scripts there.
23:23 | Matt Klassen
Yep, and I’ll clarify: we don’t generate Appium scripts in the Python language. When we generate Appium scripts, it’s Java, JavaScript, or .NET. But we did announce that we open sourced that script generation capability, and I think there are third parties actually looking at and working on Python and others. What you said is really important, though, and I’m going to come back to it: not for me, but maybe for others. This is the key. When we think about automation, most organizations don’t have the luxury of having 10 Emilys on their team, and yet they want to do things more efficiently, more quickly, and in a more automated fashion. So I think what that also means is that it’s not one size fits all, right? There are different approaches based on skill sets, on the operating systems you’re using, and on the environment you have available. I think that’s probably an important takeaway as well.
24:24 | Emily Thomas
I think we’ve touched on that a lot: especially where quality is concerned, there’s no one-size-fits-all solution. There’s whatever works for you, whatever helps your app grow, succeed, and remain stable. The best solution is going to be the one that’s best for you. What I’m doing at our client works for them; it may not work for another company. So it’s a balance; you have to find what works for you. And part of where that comes in is that I’m a person who’s not afraid to fail. I actually love failure, because I learn from it. Anyone on my team can tell you this: when I hit a new error code, it’s like, oh sweet, a new error code. That means I’ve made progress.
25:05 | Matt Klassen
Yep. Well, hey, you’re made for this world of test automation, breaking things and trying to solve problems. That’s awesome. So, to close today, what’s the most important takeaway you would want to leave our listeners with?
25:18 | Emily Thomas
Probably just: don’t be afraid of the big bad Appium bear; don’t be afraid of the big bad automation bear with mobile. If you’re hesitant or tentative about getting into it, or you’re thinking, oh, this is going to be a ton of overhead, a ton of work, the easiest thing you can do to break through that wall is to just get in there and start doing some of it. Once you’re in there and you have that first little bit set up, it’s going to be so much easier to do the rest.
25:47 | Matt Klassen
Yeah, that’s great. Thank you so much for your time today; I really appreciate it. I think everyone is going to find this session extremely useful and valuable. And with that, thanks, everybody, for listening today, and we will close out our session. Thank you.
26:16 | Matt Klassen
All right. Thanks, Emily. That was a great interview. And as you can see, I’m wearing a different shirt, so that was pre-recorded. But we have live Q&A, and we have Emily here with us. Thanks for joining us, Emily.
26:29 | Emily Thomas
Yeah, thanks for having me. And like I said, you were a great interviewer; it’s easy to bounce a conversation off you.
26:34 | Matt Klassen
Oh, thank you. Yeah, it was great; I think it flowed extremely well, and you had a ton of great information. So, a couple of questions came in that I’m going to go through. They aren’t in the Q&A tab but in the chat, so we’ll look at both. One was: do you test on the production platform, or do you use a test environment?
26:58 | Emily Thomas
I have always been lucky to have a test environment that I can test off of. I know not every organization is so lucky, but I have always had a test environment and a pre-production environment where I can run my automation, so I don’t have to run automation in prod. I have been involved in some smoke test automation in prod in the past, and it can have mixed results, because if you have a smoke test that suddenly decides to go haywire and it messes up your production environment, the return on investment suddenly crashes into the ground.
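A small sketch of one way to keep risky automation out of prod by default, using an environment variable to select the target; the variable and marker names are illustrative:

```python
import os
import pytest

# Default to the safe environment; prod must be requested explicitly.
TARGET_ENV = os.environ.get("TARGET_ENV", "test")

requires_non_prod = pytest.mark.skipif(
    TARGET_ENV == "prod",
    reason="Destructive suites never run against production",
)

@requires_non_prod
def test_full_checkout_regression():
    ...  # safe only in test / pre-production environments
```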
27:34 | Matt Klassen
Yep, that makes sense. That’s good. There’s another question; I think it came in before we talked about AI, but I don’t know that we answered it anyway. The question is from Terry: have you utilized an AI-powered framework or tools to help you in your mobile automation, and how effective was that, or is it?
27:56 | Emily Thomas
Yeah. So we’ve been working on a proof of concept that leverages Kobiton, which does have some AI involved in keyword matching. As of yet, I don’t believe our code has actually exercised the keyword-matching AI tool; we haven’t had keywords that have changed and needed to be updated. But it hasn’t negatively impacted our tests; I’ll put it that way.
28:24 | Matt Klassen
So.
28:24 | Emily Thomas
We have the capability in Kobiton at this time; I just don’t think our scripts have run into that situation in Kobiton yet.
28:32 | Matt Klassen
Yep. Okay, I think that’s fair; it’s early days. That’s what we would often call self-healing: you execute a script, something has changed, and the question is whether the script can deal with those changes in some graceful manner. Like you said, it’s early days for that. There is some of that capability available, but it’s not 100 percent either; I don’t think any tool is. But it can definitely be helpful. Okay, and then another question: how critical is it to constantly test at different screen resolutions?
29:09 | Emily Thomas
I think that’s going to depend on your app. For the app I’m currently working on, the client is controlling the hardware, so it’s going to be one standard resolution. But if you are going to be supporting multiple resolutions, especially if you’re doing visual testing like other speakers have talked about earlier today, I think it’s important to cover at least a range of resolutions. You know, if you’re going to support both a 16:9 and a 4:3, you’re going to want to know that everything’s in the right place. But if you’re only supporting a 4:3, and you’re only ever going to support a 4:3, there’s not really a necessity to test multiple resolutions.
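If you do need to cover a range of resolutions, one hedged sketch is to parametrize the suite over device profiles; the device names and capability values are placeholders:

```python
import pytest

# Hypothetical device matrix covering the aspect ratios under support.
DEVICE_PROFILES = [
    {"deviceName": "Pixel 7", "platformName": "Android"},  # tall 20:9 phone
    {"deviceName": "Nexus 7", "platformName": "Android"},  # 16:10 tablet
]

@pytest.mark.parametrize("caps", DEVICE_PROFILES, ids=lambda c: c["deviceName"])
def test_home_screen_layout(caps):
    # In a real suite this would open an Appium session with `caps`
    # and assert the key elements are visible and positioned correctly;
    # here we only sketch the shape of the parametrization.
    assert "deviceName" in caps
```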
29:48 | Matt Klassen
Yeah, that makes sense. Okay, great timing: our session has come to an end. Thank you so much, everyone. Thank you.