Webinar

Mobile Excellence at Uber

Abstract

Tune in to this on-demand podcast-style interview with Nanda Kishore, Uber’s Director of Program Management, as he reveals insights from Uber’s transformative journey toward Mobile Excellence. Learn key strategies for creating seamless, magical user experiences that make your mobile app stand out in today’s competitive market. Gain valuable perspectives from a leader driving innovation at scale—watch now and elevate your mobile strategy!

Mobile Excellence at Uber

Discover key strategies for achieving Mobile Excellence as Nanda Kishore from Uber shares insights on delivering seamless, standout user experiences in today’s competitive app landscape.

Video Transcript

0:00 | Cara Suarez
Welcome to the Mobile Testing and Experience Summit, MTS. I'm Cara Suarez, Head of Marketing at Kobiton. For this session, I have the opportunity to sit down with Nanda Kishore, Uber's Director of Program Management. The mobile testing industry highly regards Uber as a global leader in advanced mobile testing and delivery methodologies, known for its rigorous practices in continuous integration and deployment. Uber delivers mobile excellence at a scale and complexity at the forefront of mobile innovation. Today, Nanda will help us understand Uber's journey to mobile excellence. So, Nanda, tell us about your experience as a quality leader at Uber and your background.

0:47 | Nanda Kishore
Thank you so much for having me, Cara. I lead the quality org within Uber and I'm part of a team called GSS. GSS stands for Global Scale Solutions. We are a product and tech arm helping product and engineering teams scale programs and projects globally. So that's a quick intro about me. As far as Uber's mobile testing experience is concerned, the Uber app in general, if you notice very closely, has a global footprint operating in 100 plus countries. We have a very agile release cycle, which means we ship close to four apps every week across mobile operating system versions. This in turn leads us to testing more than 40,000 test cases every week, not including any of the hotfixes or the ad hoc testing that we do, like performance testing, accessibility testing, integration testing, and so on, with the highest level of quality, right? So that's the big focus, while also not losing track of our efficiency by automating as many tests as possible. We work on test case optimization. We look to proactively shift our bug discovery to the left as much as possible. We tap into the social media channels to understand bug trends. So there's been a lot of effort going on for the last few years. So that sort of explains the complexity with which the Uber app is rolled out globally into the production environment. So that's been a phenomenal journey. We are kickstarting several programs and projects, right, to create an excellent mobile experience. And what I mean by excellent mobile experience is we are trying to ensure that our app is absolutely magical to use. It's seamless. It's intuitive to use for everyone everywhere. It's visually very appealing. It's super fast in terms of performance and has smooth navigation. So this is what we call the magical user experience for everyone everywhere. That's a big pillar that we are trying to focus on as a quality function. And the thinking here is we are very clear in terms of our strategy, and we are looking to own the end to end user experience by seamlessly integrating performance, compatibility, accessibility, and usability as part of our core testing fundamentals. So I hope that gives you a background of how we approach mobile excellence in the context of Uber.

3:38 | Cara Suarez
I think something that is really striking is how having such a clear definition of mobile excellence has really shaped your mobile strategy. Can you share a little bit about how that definition came to evolve?

3:58 | Nanda Kishore
Absolutely. In fact, to quickly give you a bit of background context, when we set up our mobile testing processes about eight years ago, Uber was a rapidly evolving organization. There is a natural bias from the engineers who are building the features to deliver and launch them as fast as possible. So naturally, quality was taking a hit, because of which we sometimes ran into situations where, you know, there were global outages. There is friction in the experience. You're trying to open the Uber app to book a ride at a critical moment and the app crashes, or you're trying to select a product, select a screen, and you're unable to do it. You're in the midst of an emergency, you're trying to get to a location, and you're not able to do it. So even when the feature exists, we've not been able to test it comprehensively. Another challenge back then was there was no dedicated, defined testing team within Uber, and a lot of our initial test strategy was based on vendor outsourcing with a bit of ongoing testing in the US, and that cost us quite a bit in terms of investment. So, therefore, this whole pillar around creating a magical user experience, making it seamless, sort of evolved, and that's when we started setting up the practice from the ground up in house to deliver that experience for our users.

5:30 | Cara Suarez
So, what were some of the earliest challenges that you remember coming across when you were first setting up that in house practice?

5:41 | Nanda Kishore
I think one of the earliest challenges was, you know, the value of testing in itself was not clearly established. Like, what does testing mean? While we are talking about, you know, a seamless experience for everyone everywhere, what does it mean? How do we test? Do we test for one city? Do we test for 100 plus cities? So we had to first establish what the value of testing is. We established metrics like defect leakage. We established metrics like test coverage. We started aligning leadership teams across the board, top down, as well as aligning the teams within GSS. So that was one of the first things that we did, which is to define what success means in this context. The next thing that we did was within our own organization, we had two different teams. One team was focused largely on centrally triaging bugs and minimizing the noise that was going to the engineering teams. Another team was entirely focused on doing end to end testing. And these two teams were in two different organizations and operating in a siloed manner. So we felt that bringing these teams together under one bug-and-testing integration strategy was the right thing to do. So we did that. It took a while for the structure to fully take shape, in that the bugs team was entirely focused only on bug creation. They were not looking at it from an end to end lifecycle standpoint. Likewise, the testing team was purely focused on creating a test plan and a test cycle and a test strategy, but not really looking at defects leaking into production. So we had to bring these two teams together and put together a structure with a vertical and a horizontal structure, where the horizontal team is focused on scaled operations, running repeatable playbooks, testing at scale, looking at bugs, and so on, while the vertical team is focused on ensuring the feature onboarding is proper, our experience from an engineering standpoint is seamless, the handshake happens logically, and so on and so forth. So, I think there was an org change. There was a definition that we had to put together. There were very clear success criteria that we had to define. And then that sort of helped us move the needle forward in terms of, you know, mobile excellence.

7:59 | Cara Suarez
Yeah. Would you say that organizational structure greatly contributes to, you know, kind of this speed at which you’re able to deliver and tackle the numerous releases that you’re bringing to market on a weekly basis?

8:15 | Nanda Kishore
Absolutely. I think it’s important to have everyone aligned on a common purpose and an objective. Like, you can’t be testing features in isolation while you’re not having that conversation with engineering on the comprehensiveness of those tests, right? There is no clear definition of the test plan and so on. So, I think the structures make a huge difference, in that the end to end structure has to be seamless. Everyone signs up. There is a very clear objective that we are all aligned to. And then there is a very clear timeline in terms of delivering results. To me, I think those org structures really do matter to ensure that all of us are on the same page on what mobile excellence means, what magical user experience for everyone everywhere means, right? And therefore, what does it translate to in terms of metrics, and how do we drive it collectively as a team to make it happen? I think that’s the essence of what I mean when I say the org structure is super important.

9:15 | Cara Suarez
Yeah. You know, speaking of metrics, what KPIs does the organization use? What do you find most valuable for really assessing, you know, performance and reliability, or even other internal KPIs and metrics that are used to kind of manage the overall effectiveness of the organization?

9:39 | Nanda Kishore
The two key KPIs that we look at from a testing standpoint, if you were to ask me: the first is test coverage. How comprehensively am I testing the application across regions, across cities? Am I accounting for performance testing? Am I accounting for accessibility testing? Am I doing enough negative testing, and so on? So the first KPI that I generally look at is, what's my functional test coverage when I test an application? Am I looking at every user journey, every mobile screen, every single feature and the interplay of such features? And am I comprehensively testing those? I think that's the first definition. The second definition that we are all aligned on is defect leakage, the number of bugs or defects moving to production that we couldn't catch in testing, right? Like, for example, you're trying to test a commercial credit card being used in the US, but you're not testing for a card that is being used in South Korea, where Uber has an operating marketplace, right? So, stuff like that. How do you test for regional nuances? So those two really matter in ensuring that the end experience doesn't take a hit because you've not tested it, right? So we've recently launched a program where we are comprehensively doing end to end payments testing as well, right? We test for nine different payment methods in 10 different countries, right? We test for Google Pay, Apple Pay. So how do we ensure that there is a comprehensive test suite and there are success metrics that we are clearly testing for in every release cycle? And we deep dive into what the issues are as we run into challenges.
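
As an illustration of the two KPIs described here, below is a minimal sketch of how test coverage and defect leakage might be computed for one release cycle. The data structure, field names, and numbers are hypothetical and are not Uber's actual tooling or figures.

    from dataclasses import dataclass

    @dataclass
    class ReleaseCycle:
        """Hypothetical per-release test metrics."""
        cases_planned: int              # test cases defined for the features in scope
        cases_executed: int             # test cases actually run this cycle
        bugs_found_in_testing: int
        bugs_found_in_production: int   # defects that leaked past testing

    def test_coverage(cycle: ReleaseCycle) -> float:
        """Share of planned test cases that were actually executed."""
        return cycle.cases_executed / cycle.cases_planned

    def defect_leakage(cycle: ReleaseCycle) -> float:
        """Share of all known defects that were only caught in production."""
        total = cycle.bugs_found_in_testing + cycle.bugs_found_in_production
        return cycle.bugs_found_in_production / total if total else 0.0

    # Example with made-up numbers for one weekly release
    cycle = ReleaseCycle(cases_planned=40_000, cases_executed=38_400,
                         bugs_found_in_testing=240, bugs_found_in_production=12)
    print(f"coverage: {test_coverage(cycle):.1%}")   # 96.0%
    print(f"leakage:  {defect_leakage(cycle):.1%}")  # 4.8%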

11:29 | Cara Suarez
Yeah, no, that absolutely makes sense. You know, I feel like that is something that clearly had to evolve over time. And you mentioned, you know, bugs occasionally getting out into production. I'd love to ask about user feedback and how it has influenced your approach to mobile quality, and also about those iterations of how you've changed your processes.

11:58 | Nanda Kishore
No, absolutely. I think initially, a couple of years ago, we did listen to user feedback, but we didn't have a structured, robust program around it. So one of the first things that we did was to resurrect the beta program, where we started providing white glove service to our top users. We tried to put together a cohort of users that we want to focus on: people who have spent enough time in the system, who have taken a good number of trips on the platform, who can definitely make time to deliver feedback to us on the mobile experience. So we put together this cohort, and we started sending invites out to them to have them sign up to the program. We looped in our legal team and our corporate communications team to make sure the messaging is consistent in terms of what we are attempting to do here. And then once the program was set up, we ensured that this cohort of users is getting a first class user experience when they report bugs. So what we did is, as soon as a bug is reported, we ensure that it's triaged to the relevant engineering team in less than four hours, right? What we also did is we look at specific screens which would potentially contribute to the most P0 and P1 bugs and prioritize them first as compared to some of the lower priority bugs. We look at customer reported bugs to ensure those are prioritized ahead of internally reported bugs. We also ensure that we send regular status updates to people on how the feedback that has been submitted is being comprehensively used, right? Month on month, we look at what the total number of bugs submitted is, how many active beta users are on the platform, and what the validity rate of the bugs that are submitted is. We also, interestingly, looked at feature requests that were coming from this population, and we routed them to the relevant product teams so they can go into the product roadmap for Uber. So I think that's how we kept reinforcing our processes by listening to user feedback. And that's exactly why I said shifting left and discovering bugs early in the software development cycle is going to be super important.
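
To make the triage flow described here concrete, below is a minimal, hypothetical sketch of how beta-reported bugs might be ordered and checked against a four-hour routing SLA. The field names, priorities, and threshold are illustrative assumptions, not Uber's actual process tooling.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    TRIAGE_SLA = timedelta(hours=4)  # route each report to the owning eng team within 4 hours

    @dataclass
    class BugReport:
        reported_at: datetime
        priority: int               # 0 = P0 (most severe) ... 3 = P3
        customer_reported: bool     # beta/customer reports outrank internal ones
        screen: str                 # e.g. "ride_request", "payment_selection"

    def triage_order(bugs: list[BugReport]) -> list[BugReport]:
        """Sort by priority (P0 first), customer-reported before internal at equal priority, oldest first."""
        return sorted(bugs, key=lambda b: (b.priority, not b.customer_reported, b.reported_at))

    def sla_breached(bug: BugReport, now: datetime) -> bool:
        """True if the report has waited longer than the routing SLA."""
        return now - bug.reported_at > TRIAGE_SLA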

14:25 | Cara Suarez
You know, as you were telling that story, you talked about how you had to work cross functionally with corporate communications, and legal, and all of these other departments to set up a really effective program. What advice would you give to other organizations, you know, that are squarely in tech or testing or program operations, about the importance of working cross functionally and how to do it effectively? Because not everyone understands what you're talking about on a technology level, right? So tell me a little bit about how you align people cross functionally.

15:06 | Nanda Kishore
Yeah. I think my approach to aligning people cross functionally is to focus on the user experience rather than the tech. It doesn't matter what the underlying tech stack powering the platform is. But essentially, when I talk to my engineering teams or my product teams, my initial pitch is, hey, we are looking to create a friction free experience for our users, right? And that's exactly what we've all aligned on globally: what does a magical experience mean? So, I think keeping the narrative really simple, keeping a narrative that resonates well with everyone irrespective of the regions and teams they belong to, I think that really stands out. And then it's about translating that experience into objective metrics that each of these teams can contribute to, right? Like, if you're talking about automation, right, in the context of mobile test automation, the KPI that they have to focus on is the reliability of our mobile testing platform. How reliable is our mobile testing platform? Am I getting an instant feedback loop? Am I able to run my mobile test suite across 100 plus cities, right? When it comes to GSS, am I triaging bugs? Am I shifting left? Am I identifying patterns in bugs? Am I proactively adding test cases based on the bug trends that I'm identifying? Am I reducing defect leakage? Is there a clear test coverage definition? Those are my KPIs, right? So when it comes to a technical program management team, they start breaking this down to, how is my release cycle doing? How many hotfixes are going into production? Am I able to fix them in time? So, I think everyone is working towards one common goal, which is to create this frictionless experience, and translating that to specific KPIs for those individual teams and then cross collaborating is the magic that I sort of felt worked across the board when you work with different teams, right? Like, you can't go and tell a marketing team about the underlying tech stack and root cause analysis, and what bug invalidation and deduplication are, and so on, right? I think the pitch there is, hey, I'm trying to create a friction free experience. I want my app to behave seamlessly, intuitively, fast in every market where Uber operates. I think that's the background context. And therefore, you know, it's important to set up this beta program to get first hand feedback from our beta users and be able to action that feedback, not just setting up the program but also actioning the feedback. I think that really stood out when we made the pitch.

17:48 | Cara Suarez
You know, I think it is such great advice to shape that message to be about the magical user experience versus bugs, which somebody might not really understand or feel inspired by, right? So while you were talking, you mentioned a little bit about automation. I'd love to dig into that a little bit more: what role does automation play in your testing strategy, and how do you balance that with manual testing?

18:23 | Nanda Kishore
That’s a great question. Automation plays a very significant role in improving efficiency, enabling faster feedback loops, and increasing test coverage. Today, if you look at manual testing, one of the challenges that we run into is we do very limited runs of manual testing, which is not the case with automation. With automation, you can run testing across pipelines multiple times, even within a day. I think that’s one. We have a very robust mobile and web automation framework today. We also have a lot of backend automation coverage, where we look at the API endpoints that these services communicate with. So if an endpoint goes down and the service is unable to communicate, then that can potentially lead to a lot of incidents. So we are constantly looking at automation as a key lever to ensure we have sufficient coverage. We run several tests within a short span of time across several cities. That gives you the scale that you’re looking for. However, having said that, while you have a robust automation strategy, it’s also equally important to balance it with manual testing, because with manual testing, you have an edge where you can test for every edge case, you can test for every negative scenario, you can test for every regional event, right? Whereas automation testing is more broad and shallow, manual testing solves for a lot of those deeper scenarios. So I think having that right balance is extremely important. And then you look to continuously shift left on both, right? You get to a point where, when a mobile software engineer is attempting to land a diff, you are immediately blocking the diff if that code diff is leading to any kind of conflict in the larger app. I think that’s where you’ve got to find the right balance between the mobile test automation, the web test automation, the backend integration testing, and the manual testing.
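
As a small illustration of the backend automation mentioned here, below is a hedged sketch of an endpoint health check that could run in a pipeline. The URLs are placeholders, and it assumes the standard Python requests and pytest libraries rather than any Uber-internal framework.

    import pytest
    import requests

    # Hypothetical service endpoints a mobile client depends on
    ENDPOINTS = [
        "https://api.example.com/v1/riders/health",
        "https://api.example.com/v1/payments/health",
    ]

    @pytest.mark.parametrize("url", ENDPOINTS)
    def test_endpoint_is_reachable(url):
        """Fail the pipeline early if a dependent endpoint is down or slow."""
        response = requests.get(url, timeout=5)
        assert response.status_code == 200, f"{url} returned {response.status_code}"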

20:23 | Cara Suarez
Yeah. Wow. I mean, it sounds really incredible to have that view about kind of broad and shallow versus the depth in the edge cases. I feel like that is the kind of advice or mantra that anyone can really use in their organization, especially if they're trying to make a business case to invest in both methodologies. I actually wanted to ask you a little bit about some of the tools and technology that Uber's adopted for mobile testing, and how that has evolved over time.

21:02 | Nanda Kishore
In fact, in our very initial stages, we had a lot of manual operators running test cases on their personal devices when we started off testing. Like, I have a device and I start running tests on the device, right? And this is the device that I use. And we used to track the results in spreadsheets, Google spreadsheets, and share the results more broadly, which led to a lot of discrepancy. Gradually, what we did was we started investing in a fully functional device lab with simulators and 1,000 plus devices across brands and OS versions. That's exactly where Kobiton comes in, right? Kobiton played a pretty significant role in terms of establishing our on prem device lab. And today, we are in a much better position in terms of some of those tools, processes, and technologies that we've been able to adopt. We were able to quickly move to Jira for project management. We today use Zephyr for test case management, while also investing heavily in Google Data Studio for anything with respect to dashboarding some of these results. I think with a greater understanding of the product, we also broke down the entire application into components and capabilities. We essentially test every single block of the app. So imagine that: look at the shift from personal mobile devices and spreadsheets to a Kobiton device lab at scale, 3,000 plus devices and emulators running test cases. We have automated dashboards now. We have innovative operating models. There's a test case management system, and so on. So that's the shift that we're talking about over a period of time.

22:45 | Cara Suarez
Even though it was over a period of time, it seems like you evolved really rapidly. How did you get that internal alignment to, you know, have finance approve investment in the technology, and also investment in people, right? That's something that I think a lot of quality organizations struggle with, being a little bit under resourced. So tell us how you overcame that.

23:13 | Nanda Kishore
So generally, what we do is we identify the pain points that we want to be able to solve, right? Like, for example, in the earlier conversation, I spoke of using Google Spreadsheets and how we were manually tracking results with them. We felt this was absolutely suboptimal, because if someone is not in the office, we don't have access to the report that this individual has put together based on a test, and we don't have access to what exactly happened. So we had to quickly move to a test case management platform, right? Likewise for mobile devices, it was extremely hard for us to keep track of the number of physical devices, who they were assigned to, and the scale at which we are operating. And therefore, it made total sense to invest in a setup like an on prem device lab in partnership with Kobiton, where we have so many different devices and versions that we could absolutely seamlessly test on, irrespective of where we are in the world, right? I can be operating from home, from a remote office, or from my office location itself. The experience is super seamless. And the beauty of this entire setup is, because a lot of my testers are today based in India, right, while we are doing a lot of on road testing globally, we do comprehensive testing out of India as well. And with the device lab being based in India, there's less latency, there's near real time access. The entire experience is pretty seamless, right? So, I think it's extremely important to focus on what the pain points are and surface them at the right level for budget investments. Likewise, when it comes to components and capabilities, we started doubling down on the test coverage definition that I spoke of at the very beginning of this conversation, right? So we started breaking down the application into various streams or surfaces. Then within each of the surfaces and screens, we started breaking down the application into capabilities and components. And that's where we started measuring objectively what test coverage looks like, right? So, initially, you know, it was a bit difficult to get buy in. But once we established objectively what test coverage for a mobile release means, and once we aligned engineering teams across the world and ran pilots to show results and the end impact, it becomes so much easier to scale these programs with other teams who had not volunteered initially for the pilots. I think that's a journey, a slightly iterative cycle. But you eventually get buy in when you have objective numbers to share and you have end results that clearly show that the experience has improved quite a bit. Like, we are talking about a 99.5 percent, 99.8 percent uptime of our mobile release apps. How do you make that happen, right? A year and a half ago, if I were to take a trip and, in the midst of a trip, I'm trying to switch my ride or switch my payment method, you know, an error pops up and the error says, oops, something went wrong. It doesn't tell me as a user what I have to do to get out of the situation, right? But we are testing for each of those scenarios today comprehensively and ensuring that users need not think about the nature of the underlying issue, but instead focus on the experience, which is to click a button, book a ride, and get to the destination.
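
As a rough sketch of the surfaces-to-capabilities-to-components breakdown described here, below is one hypothetical way per-surface coverage could be rolled up objectively. The structure, surface names, and counts are purely illustrative assumptions.

    # Hypothetical breakdown: surface -> component -> (cases executed, cases planned)
    APP_BREAKDOWN = {
        "ride_request": {
            "pickup_selection":  (120, 130),
            "product_selection": (95, 100),
        },
        "payments": {
            "add_card":        (60, 80),
            "switch_payment":  (45, 60),
        },
    }

    def surface_coverage(surface: dict[str, tuple[int, int]]) -> float:
        """Executed / planned test cases across all components of one surface."""
        executed = sum(e for e, _ in surface.values())
        planned = sum(p for _, p in surface.values())
        return executed / planned

    for name, components in APP_BREAKDOWN.items():
        print(f"{name}: {surface_coverage(components):.0%} coverage")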

26:25 | Cara Suarez
Yeah, I mean, it's really incredible, when you look at it through that lens, what you've been able to accomplish in such a short time. So I'd say we're coming up to the end of our session. I would love to hear some closing thoughts, and perhaps, you know, some advice for any mobile testing organization that might be where Uber was eight years ago, or five years ago. What advice would you give them?

26:58 | Nanda Kishore
I would say a big disruption, or adoption if I may say so, that's coming our way is the adoption of GenAI for mobile testing. I see AI playing a very key role in shaping the future of mobile testing, right? We ourselves attempted several pilots where you have a GenAI model, we have our ERDs and PRDs on features, we just give those as input to the model, and the model itself, you know, helps write a test plan, a strategic test plan, for each of the features. It can design comprehensive test cases, which include edge cases and negative cases. So, I think a big adoption of AI as we do mobile testing at scale is something that I would definitely advise teams to look at as they think about mobile testing and the evolving scenario of mobile testing globally, especially with the advancements that are happening in the industry today. I think that's a clear call out. I think somewhere down the line, we are getting to a point where it looks like GenAI can even analyze the code written by engineering teams and make a call on whether it will impact the user experience in an adverse way, right? So this is really disrupting what we did, but in a positive way. So, I think adopting GenAI, bringing GenAI expertise into the SDLC, the software development lifecycle, as part of your creation process, is going to make a huge difference. And I think that's something that, if teams could double down on as they adopt mobile testing strategies, will be hugely beneficial for everyone, of course.
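
As a hedged illustration of the kind of GenAI pilot described here, below is a minimal sketch of prompting an LLM to draft test cases from a PRD excerpt. It assumes the OpenAI Python SDK purely as an example; the model name, prompt, and feature text are placeholders, not anything Uber actually uses.

    from openai import OpenAI  # assumes the openai package is installed and an API key is configured

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prd_excerpt = """
    Feature: allow riders to switch payment method mid-trip.
    Supported methods: credit card, Google Pay, Apple Pay.
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a QA engineer. Draft functional, negative, and edge-case "
                        "test cases for the feature described, one per line."},
            {"role": "user", "content": prd_excerpt},
        ],
    )

    print(response.choices[0].message.content)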

28:38 | Cara Suarez
I think that is excellent advice and hopefully actionable. And our audience will certainly take that to heart. So again, Nanda, it was an absolute pleasure getting to speak with you.
