
Watch this on-demand session as Li Rajaraman, Sr. Manager of QA at WeightWatchers, joins Kobiton’s Matt Klassen to discuss how to successfully shift left in QA—even with a globally dispersed team.
Learn how Li’s team overcame the unique challenges of mobile app development, such as accessing real devices, to deliver higher-quality apps faster. Discover practical strategies for fostering collaboration, optimizing workflows, and achieving excellence in QA, no matter where your team is located.
Shifting Left from Anywhere: Testing Mobile Apps Faster with a Dispersed Team
Discover how a globally dispersed QA team can successfully shift left in mobile app development, delivering higher quality apps faster, with insights from Li Rajaraman and Matt Klassen.
0:00 | Matt Klassen
Super excited for this session, Shifting Left from Anywhere. So again, I'm Matt Klassen. I run marketing at Kobiton, and we've had a great lineup today, including this session. We're going to learn a lot about WeightWatchers' progression and their journey around mobile testing and test automation. I think it's going to be really interesting because they've had a shift as an organization. WeightWatchers has had a shift in business strategy that has created not only an opportunity but a criticality around mobile for their business. In fact, it's shifting the demographic of their customers and consumers, and I think we're going to dig in on that. So before we get into the meat of the discussion, Li, could you introduce yourself and your role at WeightWatchers?
1:07 | Li Rajaraman
For sure. Thanks, Matt, for that intro. I'm Li, and I'm part of the quality engineering org at WeightWatchers. I own quality for a lot of the member experiences in our applications, mobile and web, and the coaching experience, which is part of a service our members consume. I lead a team of highly technical, full-stack QEs who work on both the front end and the back end: UI testing, integration testing, and all of that good stuff. So, yeah, that's my role at WeightWatchers.
1:52 | Matt Klassen
Yeah, thank you for that intro. You said something that really interests me, and I didn't catch it when we were preparing for this. Today's event is called the Mobile Testing and Experience Summit, and when you talked about what you own, you mentioned the member experience and the coaching experience. Talk a little bit about what that word, experience, means to you from a QA perspective at WeightWatchers. I'm curious, so I'm going to dig in on that a little bit.
2:36 | Li Rajaraman
My understanding is that the experience is what our members go through in our applications. It's mostly a black box for them, right? They don't know what happens behind the scenes. What they want is to use the app day to day as part of their journey towards better health and fitness. They're trying to keep track of their activity, their food, their weight, and all of those areas of their life, and they want a good experience tracking that and moving towards their personal goals. That's what our mobile applications do: provide features to keep track of those things and move towards their goals. The coaching experience is a lot of internal tools. We have workshops, real-life and virtual, and our coaches use these platforms to create the events, see the members who will be joining their workshops, and so on. So it's a lot of internal applications: the members interface with us from the mobile applications, and the coaches respond to those requests and events from web experiences.
4:10 | Matt Klassen
That's two worlds, yeah. So talk a little bit about how mobile is at the heart of WeightWatchers' transformation. I think you called it a digital transformation, and we're getting a picture of your constituents, your users: you have coaches and you have members. But what is the shift that's occurring digitally, and how is mobile part of that?
4:38 | Li Rajaraman
A lot of our new experiences and features are built for mobile. When a product owner is thinking about building something for a member, they think of building it in the mobile applications, iOS or Android depending on where the demography is. The first choice has become mobile for WeightWatchers. And from my understanding, engagement with the WeightWatchers experience happens through mobile. You can sign up and enter the program through the web, but most of the engagement and retention happens in the mobile experiences.
5:29 | Matt Klassen
Is that also partly due to the demographic? WeightWatchers has had an image that I think you're shifting, but has the demographic shifted too, in terms of the age of the participants or members? Is that something you're seeing?
5:51 | Li Rajaraman
Yes, some of it is the demographic, and some of it is ease of use, right? Most people today have an iPhone or an Android phone, and it's easy to get on the program and experience it while you're in between other things, instead of going to a laptop and doing it on the web. So I think it's ease of use: you can set reminders, you get notifications and act on those notifications, and all of those things. So yeah, it's a combination of moving towards a demography and ease of use.
6:30 | Matt Klassen
Okay, that's awesome, that makes sense. So how crucial is mobile app testing to ensuring the success of your digital initiatives as you shift to this new experience through mobile?
6:48 | Li Rajaraman
Like I said, we think of mobile as our first choice whenever we are building something new, so it's definitely important for us to deliver a high-quality experience: high usability, easy-to-access features, and all of that. We are also moving towards an A/B testing approach, making small tweaks to existing features or launching new features as an A/B test. So it's even more important for us to ensure we are not breaking the control experience, and that the variant, if it succeeds, performs well and doesn't cause any issues. That's how quality comes into play. We are always working with two different experiences within a feature set, so that means more eyes on quality and more hands-on QE.
7:55 | Matt Klassen
Yeah, okay, that makes a lot of sense. I think one of the things a lot of organizations struggle with as they try to scale QA, especially for critical mobile apps, is what the right approach is around automation and shifting left. There are a lot of decisions: which methodologies, which frameworks. So can you describe a little bit about how WeightWatchers approaches mobile app testing today?
8:34 | Li Rajaraman
We are getting to a phase where we are moving more towards a shift-left approach. We have been formalizing and standardizing shift left since the beginning of the year. In a way it was always shift left, in that QE would start early in the cycle, but now more functions, engineering, project management, product owners, also contribute to QE, especially the engineering org, which is much more involved in the QE process. They test their changes before they even get merged. We use a combination of manual and automated testing. Like I said, with A/B testing we don't want to invest in writing automation for variants that may not succeed, so those are primarily tested manually, and we have good automation coverage across the board, whether it's back end or front end, in all the different areas. We follow the test pyramid approach: test more at the component level early on, and do more end-to-end testing towards the end of the cycle when features are stable and integrated. So the approach we're moving towards is more automation at the component level, plus UI tests and integration tests. And we keep very low-maintenance UI tests that only capture business-critical use cases, so they find issues for us in the mission-critical, business-critical flows.
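To make Li's point about low-maintenance UI tests that only cover business-critical use cases concrete, here is a minimal sketch of what one such check could look like with Appium's Python client. The accessibility IDs, bundle ID, device, and OS version are hypothetical placeholders rather than WeightWatchers' actual implementation, and the setup assumes Appium Python Client 2.x with the XCUITest driver and a locally running Appium server.

```python
# Sketch of a business-critical (P0) UI smoke test; all identifiers are hypothetical.
# Assumes: local Appium server, Appium-Python-Client 2.x, XCUITest driver.
from appium import webdriver
from appium.options.ios import XCUITestOptions
from appium.webdriver.common.appiumby import AppiumBy


def build_driver() -> webdriver.Remote:
    options = XCUITestOptions()
    options.platform_version = "17.0"          # example OS version
    options.device_name = "iPhone 15"          # example device
    options.bundle_id = "com.example.tracker"  # hypothetical bundle ID
    return webdriver.Remote("http://127.0.0.1:4723", options=options)


def test_member_can_log_food():
    """P0 flow: a member searches for a food item and tracks it."""
    driver = build_driver()
    try:
        driver.find_element(AppiumBy.ACCESSIBILITY_ID, "log_food_button").click()
        search = driver.find_element(AppiumBy.ACCESSIBILITY_ID, "food_search_field")
        search.send_keys("oatmeal")
        driver.find_element(AppiumBy.ACCESSIBILITY_ID, "first_search_result").click()
        driver.find_element(AppiumBy.ACCESSIBILITY_ID, "track_button").click()
        # Assert only on the business outcome, which keeps the test low-maintenance.
        assert driver.find_element(
            AppiumBy.ACCESSIBILITY_ID, "tracked_confirmation"
        ).is_displayed()
    finally:
        driver.quit()
```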
10:26 | Matt Klassen
Okay, so there are a couple of things I want to touch on. We had talked earlier about potentially doing a poll; just FYI, we're not going to do it, so we don't interrupt the flow. The audience didn't know we had planned a poll, but Li, we had talked about it, so we're going to skip that. Specifically, there are two things I want to touch on, and the first is shift left. You mentioned it and talked a little bit about it, and as we were planning this session there were a lot of aspects we could have put into the spotlight in the title, but we selected shift left because I think it's a really important part of your story. So what does that really mean? Dig in a little bit more on what the shift-left approach is, and maybe get into the impact of it as well.
11:32 | Li Rajaraman
We were in a phase where you had to test everything, always, right? Regression testing was needed because the mobile experiences were evolving and we were trying to group them under pillars. So manual regression testing was the case until we got to this phase. Now that we are a more stable, evolved technical organization, the framework has come to be that developers also start testing their code changes. Quality is not just the QE org's ownership; everybody should worry about quality, and that's also part of the engineer's work stream. When you make a code change, you make sure you don't break anything while you're merging it into the big picture. So developers test their code at the feature level, targeted testing to ensure nothing is broken, and it's verified through the code review and merge process. Once the code is merged and the change is available, QE goes in and performs surgical manual testing, again understanding what the impact is and where QE is needed. In most organizations the QE org is much smaller than the engineering org, so there is only so much bandwidth you can spread across the areas you want to cover. Being very intentional and conscious about where that effort is spent is what our shift-left approach is: write tests that matter, tests that will find P0 issues and reduce the manual testing hours spent on something. Make an impact. If you're writing code, even a line of code, it has to make an impact. If you're performing manual testing, the same applies: use your time to perform testing only where it matters, not just because we as QEs learned that regression testing and exploratory testing mean you have to test everything all the time, and that if something breaks you'll be blamed. That's not the case anymore. Use your time well and make an impact. That's what our shift-left approach has been, and...
14:22 | Matt Klassen
Continue, sorry, continue.
14:24 | Li Rajaraman
We leverage our automated tests towards the same goal, whether it's a code change made by an engineer, where we run the mission-critical use cases, or whether it's QE, where we use our automated tests to do the same work for us.
14:41 | Matt Klassen
The other part I want to talk about is the automation. Can you talk a little bit more about what those automated tests look like? Anything you can share around frameworks, and around executing those against simulators and emulators versus real devices? What does that environment look like, and how have you scaled automation?
15:18 | Li Rajaraman
Right. I want to say it was sometime in 2020 or 2021 when we started looking at how we leverage our automation frameworks, the iOS and Android mobile automation frameworks. We went through a sort of transformation in how we build the framework, who is going to use it, and who is going to contribute to it. Even before writing tests, we took a look at what that framework should be, how it should serve us, and where and when we run our tests. We usually write automation against physical, local devices, but when we execute tests in our nightly runs or for app release sign-offs, we leverage Kobiton for iOS and another platform for Android. We use device farms to expand our device and OS matrix and find the issues that are unique to OS versions or device form factors. The automation frameworks are integrated within the dev repositories, so both engineering and QE have access to both the automation frameworks and the code repositories. They don't reside separately; that's something we consciously decided. We will not write a framework that sits outside the repository, where engineering has no visibility into why something is broken or how to root-cause an issue. So we are integrated within the code repositories, and we run our tests in the CI/CD pipeline. We started that early on as we built the framework; it's one of the things we took care of. We use GitHub Actions, and we set up our tests to run against the nightly builds. And when we have a release candidate, we execute our automated tests against that release candidate as a final seal to say we have confidence in this build to release it to the App Store and Play Store.
17:45 | Li Rajaraman
We were very conscious about how we built the framework and about where and when we run it, to give us maximum gains.
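As a rough illustration of pointing the same suite at a cloud device farm for nightly or release-candidate runs, the sketch below switches between a local Appium server and a remote device-lab endpoint via environment variables. The variable names, the endpoint, and the artifact path are hypothetical placeholders, not Kobiton's actual API; a real setup would use the vendor's documented capabilities and credentials, typically injected as CI secrets.

```python
# Sketch: one driver factory that targets either a local simulator or a cloud device
# farm, chosen by environment variables set in the CI job. Values are placeholders.
import os

from appium import webdriver
from appium.options.ios import XCUITestOptions


def build_driver() -> webdriver.Remote:
    options = XCUITestOptions()
    options.device_name = os.getenv("DEVICE_NAME", "iPhone 15")
    options.platform_version = os.getenv("PLATFORM_VERSION", "17.0")
    # Nightly build artifact produced by the pipeline (hypothetical path).
    options.app = os.getenv("APP_UNDER_TEST", "./build/MemberApp.ipa")

    # CLOUD_HUB_URL would be the device lab's Appium endpoint, including credentials,
    # injected as a CI secret; when unset, fall back to a local Appium server.
    hub = os.getenv("CLOUD_HUB_URL", "http://127.0.0.1:4723")
    return webdriver.Remote(hub, options=options)


if __name__ == "__main__":
    driver = build_driver()
    print("Session started on:", driver.capabilities.get("deviceName"))
    driver.quit()
```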
17:56 | Matt Klassen
Yep, okay, that's good, that makes sense. So when you talk about the device farm, how did you figure out which device configurations, and how many, hit the sweet spot for your organization? Let's focus primarily on iOS; you can talk about Android too.
18:26 | Li Rajaraman
Yeah. We leverage our member usage data to make decisions on what to test with. Our data tells us to run our tests against the most used, most popular device and OS combination; against the lowest supported version, only because it is still supported and we do have users on it; and against something in between, maybe a problematic OS version or the second most used OS version. We look at analytics data every single time. Whenever we are talking with Kobiton about upgrading an OS version on a device, we are looking at that data and deciding whether a version's adoption will increase or has decreased, and we make a call on updating our devices. So it's data-driven.
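The selection rule Li describes, the most used version, the lowest version still supported, and one in between, can be written down as a small function. The usage numbers and OS versions below are invented purely for illustration, not real WeightWatchers analytics.

```python
# Sketch: derive a small iOS test matrix from usage analytics (illustrative data only).
from typing import Dict, List


def pick_test_matrix(usage_share: Dict[str, float], lowest_supported: str) -> List[str]:
    """Keep the most used OS version, the lowest still-supported version,
    and the next most used version in between."""
    ranked = sorted(usage_share, key=usage_share.get, reverse=True)
    matrix = [ranked[0], lowest_supported]      # most popular + oldest supported
    for version in ranked[1:]:
        if version not in matrix:               # runner-up, if distinct
            matrix.append(version)
            break
    return matrix


# Hypothetical analytics snapshot: share of member sessions per iOS version.
usage = {"17.4": 0.52, "17.3": 0.21, "16.7": 0.12, "15.8": 0.04}
print(pick_test_matrix(usage, lowest_supported="15.8"))
# -> ['17.4', '15.8', '17.3']
```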
19:33 | Matt Klassen
I want to go back to shift left a little, because it sounds like you've been successful there. It sounds like from the very beginning you knew that your approach was only going to be successful if the quality organization and dev worked together, and you've made some purposeful choices there, which obviously seem to be working for you. But when we talked, you mentioned there is still skepticism here and there in the dev organization, whether it's "why am I doing testing?" or doubt that it will make an impact. Talk a little bit about your experience there. How did you overcome that and work with dev to move past it?
20:44 | Li Rajaraman
Two things came up. As part of the shift-left discussion we piloted the approach with a combination of crews, a crew on mobile and a crew on web and backend, to see where it was working well and what changes we should make before we formalized it more widely. As part of shift left, we wanted to integrate validation tests, a small subset of our test suites, into the merge-queue process. They run against any PR that is getting merged, and if the tests fail, the PR is kicked out of the queue and the author has to look at why that happened. The question from the mobile engineering team was: why should we do that? How do I even know that the failure matters? When we started running our nightly tests regularly, we started tagging issues that we found from the failures raised by the automated tests. We deliberately didn't automate the creation of those issues, and I think we'll continue that way; someone on the QE team looks at a failure and says, okay, this is a genuine failure, I'm going to log a defect against this function, this area. So we had the data going into this discussion to show that, look, here are all the issues found over six months that we found and fixed, each with an associated code change. The framework is working; it will find issues for you if you integrate it with your merge process, and if the tests fail, there is a real possibility that your code change caused an issue somewhere else. That data helped us have that conversation and get buy-in for integrating our tests and moving testing earlier in the cycle, even before code gets merged into our feature and develop branches. So automation, and our process of tagging defects, drove that conversation.
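One common way to carve out the small validation subset Li describes is to tag the mission-critical tests and have the merge-queue job run only that tag. The marker names and the test bodies below are hypothetical and assume a plain pytest setup; they show the mechanism, not WeightWatchers' actual suite.

```python
# Sketch: mark the business-critical subset so the merge queue runs just those tests.
# Assumes the "p0" and "nightly" markers are registered in pytest.ini.
import pytest


@pytest.mark.p0  # validation subset: runs against every PR in the merge queue
def test_daily_budget_never_goes_negative():
    daily_budget = 23          # hypothetical member budget
    tracked_today = [5, 7, 4]  # hypothetical tracked items
    assert daily_budget - sum(tracked_today) >= 0


@pytest.mark.nightly  # broader suite: runs only in the nightly device-farm job
def test_full_tracking_flow():
    ...


# A merge-queue CI step could then run only the tagged subset, for example:
#   pytest -m p0 --maxfail=1
# A non-zero exit status kicks the PR out of the queue for the author to investigate.
```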
23:24 | Matt Klassen
Yeah, that's awesome. Data is key; I think we all know that, but data speaks for itself most of the time. I'm going to move us forward a little because we have about five minutes left and there are a couple of questions. So let's net it out: what are some of the key KPIs? How do you know you're moving in the right direction? You don't have to tell us the actual numbers, but what do you measure, and what kind of impact are you making as you progress through this transition, in terms of those metrics?
24:13 | Li Rajaraman
In what case, while using our frameworks?
24:19 | Matt Klassen
Yeah. If you were just to look at your organization, how do you measure your organization's success? Let's put it that way, really simply.
24:28 | Li Rajaraman
The QA org, yes?
24:30 | Matt Klassen
Yes, exactly.
24:32 | Li Rajaraman
Okay. I want to start from our automation frameworks. The KPIs that matter to us also fall under the tech org's KPIs. We want to be agile, nimble, and flexible: if we release something and it fails, we fail fast, roll back, and repeat. That's our approach to what we deliver, and if we want to do that, then the QE and quality processes should follow the same principles. We are not too rigid about processes when using the frameworks, or about how we use the data we get. What we write should make an impact. If our frameworks are stable and reliable, they run every single day for any code change that happens, and then we are only responding to genuine failures, which rolls up into how engineering responds to the issues we find. We are on a two-week release cadence, so we have to keep moving fast: every other week we are releasing, so our sign-off process has to be quick. We need to make sure we are not in any way disrupting engagement in the member experience with what we deliver; that is key to us, and our tests should go towards protecting the normal P0 flows. So those are the metrics, I would say: stable, reliable, move fast, and write code for what matters.
26:38 | Matt Klassen
Yep, I think that's a really good summary of best practices. So again, I'm going to pause for a minute here; if you have any questions, go ahead and put them in the Q&A section. There was a question, but it was answered in the Q&A discussion or the chat, so I don't think we need to ask that specific one. I will ask a more general one in the same area: what security considerations do you have as you think about mobile quality engineering? Are there particular frameworks or compliance requirements that are especially important in how you address security?
27:34 | Li Rajaraman
I want to say we don't use our frameworks for security testing as such, but we do take precautions. For example, if we are using the public devices in the Kobiton platform, we make sure we don't leave any test-environment data behind; that's something we are very conscious about. We are also conscious about production data: we don't run our tests against production data or anything that would consume production data, PII, and so on. So when we set up our frameworks, we are conscious about what environments we test against and what is left out there when we use external platforms to extend our test execution capabilities. Does that answer it?
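One lightweight way to honor the rule of leaving nothing behind on shared cloud devices is to make cleanup part of every session teardown. The fixture below is a hedged sketch: the bundle ID is a hypothetical placeholder, `build_driver` refers to the session factory sketched earlier, and the app-management calls are the standard Appium Python client commands.

```python
# Sketch: pytest fixture that uninstalls the app, and with it any test data,
# from a shared cloud device after every session. Bundle ID is a placeholder.
import pytest


BUNDLE_ID = "com.example.tracker"


@pytest.fixture
def driver():
    drv = build_driver()  # session factory from the earlier sketch
    try:
        yield drv
    finally:
        try:
            drv.terminate_app(BUNDLE_ID)  # stop the app under test
            drv.remove_app(BUNDLE_ID)     # uninstall so no test data stays on the device
        finally:
            drv.quit()
```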
28:35 | Matt Klassen
That's a good answer. I made the question semi-generic; I took one that was more specific and generalized it, and those are good considerations. Here's another question; again, you can choose to answer it if you can. Is there any particular reason you use separate device services for iOS and Android?
28:58 | Li Rajaraman
The short answer is that the Android engineering team wanted to stay within the Google Android ecosystem. When we did the tech selection for device farms, we wanted to be on the same platform for both, but our process showed us that each platform worked really well with a different service. What we got out of test execution, reporting, and dashboarding showed us it was okay to be on different platforms, so we chose what worked best for each platform, iOS and Android.
29:38 | Matt Klassen
Okay. And then the last question I have, and this isn't necessarily specific to Android versus iOS: is there a case to be made for testing against real devices versus just emulators and simulators?
29:58 | Li Rajaraman
As a QE, I would say yes, because that's where our users are going to be. They are not going to use the application on simulators and emulators. How the experience is implemented is how we decide, right? But the QE in me wants to say: even if it's hosted in the cloud, test with real devices. So, yes.
30:30 | Matt Klassen
Okay, good. All right, great. Hey, thank you very much. That's all the time we have, and people are transitioning to the next session, but I super appreciate your time. I think it was a very good session and educational for everyone. So, thank you.
30:46 | Li Rajaraman
Thank you, Matt. Thanks a lot for the conversation.