Championing Automation: How Quality Leaders Can Drive Organizational Support
Abstract
Tune in to this on-demand podcast-style interview with Nanda Kishore, Uber’s Director of Program Management, as he reveals insights from Uber’s transformative journey toward Mobile Excellence. Learn key strategies for creating seamless, magical user experiences that make your mobile app stand out in today’s competitive market. Gain valuable perspectives from a leader driving innovation at scale—watch now and elevate your mobile strategy!
Mobile Excellence at Uber
Discover key strategies for achieving Mobile Excellence as Nanda Kishore from Uber shares insights on delivering seamless, standout user experiences in today’s competitive app landscape.
Speakers
Cara Suarez, Head of Marketing, Kobiton
Nanda Kishore, Director of Program Management, Uber
Video Transcript
Cara Suarez: Welcome to the Mobile Testing and Experience Summit, MTES. I'm Cara Suarez, Head of Marketing at Kobiton. For this session I have the opportunity to sit down with Nanda Kishore, Director of Program Management at Uber. The mobile testing industry highly regards Uber as a global leader in advanced mobile testing and delivery methodologies; known for its rigorous practices in continuous integration and deployment, Uber delivers mobile excellence at a scale and complexity at the forefront of mobile innovation. Today, Nanda will help us understand Uber's journey to mobile excellence. So Nanda, tell us about your experience as a quality leader at Uber and your background.

Nanda Kishore: Thank you so much for having me, Cara. I lead quality within Uber, and I'm part of a team called GSS. GSS stands for Global Scale Solutions; we sit within product and tech, helping product and engineering teams scale programs and projects globally. So that's a quick intro about me.

As far as Uber's mobile testing experience is concerned: the Uber app, if you look very closely, has a global footprint, operating in 100-plus countries. We have a very agile release cycle, which means we ship close to four app releases every week across mobile operating system versions. This in turn leads us to running more than 40,000 test cases every week, not including any of the hot fixes or the ad hoc testing that we do, like performance testing, accessibility testing, integration testing, and so on, all at the highest level of quality. That's the big focus, while also not losing track of efficiency by automating as many tests as possible. We work on test case optimization, we look to proactively shift our bug discovery to the left as much as possible, and we tap into social media channels to understand bug trends. There has been a lot of effort going on for the last few years, and that explains the complexity with which the Uber app is rolled out globally into the production environment. It has been a phenomenal journey.

We are kickstarting several programs and projects to create an excellent mobile experience. By excellent mobile experience I mean we are trying to ensure our app is absolutely magical to use: seamless and intuitive for everyone, everywhere, visually appealing, super fast in terms of performance, and smooth to navigate. This is what we call the magical user experience for everyone, everywhere, and it is a big pillar we are focusing on as a quality function. We are very clear in terms of our strategy: we are looking to own the end-to-end user experience by seamlessly integrating performance, compatibility, accessibility, and usability into our core testing fundamentals. I hope that gives you a background on how we approach mobile excellence in the context of Uber.

Cara Suarez: I think something that is really striking is how having such a clear definition of mobile excellence has shaped your mobile strategy. Can you share a little about how that definition came to evolve?

Nanda Kishore: Absolutely. To quickly give you a bit of background context: when we set up our mobile testing processes about eight years ago, Uber was a rapidly evolving organization, and there was a natural bias from the engineers building features to deliver and launch them as fast as possible.
So naturally, quality was sometimes taking a hit, because of which we occasionally ran into situations like global outages or friction in the experience. You're trying to open the Uber app to book a ride at a critical moment and the app crashes. Or you're trying to select a product or a screen and you're unable to; you're in the midst of an emergency, trying to get to a location, and you can't. So while the feature existed, we had not been able to test it comprehensively. Another challenge back then was that there was no dedicated testing team within Uber, and a lot of our initial test strategy was based on vendor outsourcing, with a bit of in-house testing in the US, which cost us quite a bit in terms of investment. So this whole pillar around creating a magical, seamless user experience evolved from there, and that's when we started setting up the practice from the ground up, in-house, to deliver that experience.

Cara Suarez: What were some of the earliest challenges you remember coming across when you were first setting up that in-house practice?

Nanda Kishore: I think one of the earliest challenges was that the value of testing itself was not clearly established. What does testing mean while we are talking about a seamless experience for everyone, everywhere? How do we test? Do we test in one city, or in 100-plus cities? So we first had to establish the value of testing. We established metrics like defect leakage and test coverage, and we started aligning leadership teams across the board, top down, as well as aligning the teams within GSS. That was one of the first things we did: define what success means in the context of a magical user experience.

The next thing we did was within our own organization. We had two different teams: one largely focused on centrally triaging bugs and minimizing the noise going to engineering teams, and another entirely focused on end-to-end testing. These two teams sat in two different organizations and operated in a siloed manner, so we felt that bringing them together under one bug triage and testing strategy was the right thing to do, and we did that. It took a while for the structure to fully take shape, in that the bug team was entirely focused on bug creation and was not looking at the end-to-end lifecycle, while the testing team was purely focused on creating a test plan, a test cycle, and a test strategy, but not really looking at defects leaking into production. So we brought these two teams together and put together a structure with vertical and horizontal components: the horizontal team focuses on scaled operations, running repeatable playbooks, testing at scale, looking at bugs, and so on, while the vertical team focuses on ensuring that feature onboarding is proper, the experience from an engineering standpoint is seamless, the handshake happens logically, and so forth. So there was an org change, a definition we had to put together, and very clear success criteria we had to define, and that helped us move the needle forward on mobile excellence.
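To make the two metrics Nanda describes concrete, here is a minimal sketch of how defect leakage and test coverage might be computed per release. The data model, formulas, and numbers are illustrative assumptions, not Uber's internal definitions.

```python
# Illustrative sketch only: field names and formulas are assumptions,
# not Uber's internal KPI definitions.
from dataclasses import dataclass

@dataclass
class ReleaseStats:
    bugs_caught_in_testing: int    # defects found before the release shipped
    bugs_found_in_production: int  # defects that leaked past testing
    test_cases_executed: int       # test cases actually run for the release
    test_cases_planned: int        # full suite defined for the release

def defect_leakage(s: ReleaseStats) -> float:
    """Fraction of all known defects that escaped to production."""
    total = s.bugs_caught_in_testing + s.bugs_found_in_production
    return s.bugs_found_in_production / total if total else 0.0

def test_coverage(s: ReleaseStats) -> float:
    """Fraction of the planned suite that was actually executed."""
    return s.test_cases_executed / s.test_cases_planned

# Hypothetical weekly release, loosely scaled to the numbers in the interview.
week = ReleaseStats(380, 20, 38_000, 40_000)
print(f"defect leakage: {defect_leakage(week):.1%}")  # 5.0%
print(f"test coverage:  {test_coverage(week):.1%}")   # 95.0%
```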
Cara Suarez: Would you say that organizational structure greatly contributes to the speed at which you're able to deliver and tackle the numerous releases that you're bringing to market on a weekly basis?

Nanda Kishore: Absolutely. It's important to have everyone aligned on a common purpose and objective. You can't be testing features in isolation while not having the conversation with engineering about the comprehensiveness of those tests, with no clear definition of the test plan and so on. So structures make a huge difference, in that the end-to-end structure has to be seamless: everyone signs up, there is a very clear objective we are all aligned to, and there is a very clear timeline for delivering results. Those org structures really do matter in ensuring that all of us are on the same page about what mobile excellence means, what a magical user experience for everyone, everywhere means, and therefore what it translates to in terms of metrics and how we drive it collectively as a team. That's the essence of why I say the org structure is super important.

Cara Suarez: Speaking of metrics, what KPIs does the organization use? What do you find most valuable for assessing performance and reliability, or even other internal KPIs and metrics used to manage the overall effectiveness of the organization?

Nanda Kishore: The two key KPIs we look at from a testing standpoint, if you were to ask me, are these. The first is test coverage: how comprehensively am I testing the application across regions and cities? Am I accounting for performance testing? For accessibility testing? Am I doing enough negative testing? So the first KPI I generally look at is functional test coverage: when I test the application, am I looking at every user journey, every mobile screen, every single feature and the interplay of those features, and am I testing them comprehensively? The second KPI we are all aligned on is defect leakage: the number of bugs or defects that move to production because we couldn't catch them in testing. For example, you're testing a commercial credit card used in the US, but you're not testing a card used in South Korea, where Uber operates a marketplace. How do we test for regional nuances like that? Those really matter in ensuring the end experience doesn't take a hit because you haven't tested for it. We've also recently launched a program where we comprehensively do end-to-end payments testing: we test nine different payment methods in ten different countries, including Google Pay and Apple Pay. We make sure there is a comprehensive test suite with success metrics we clearly test for in every release cycle, and we deep-dive into the issues as they arise.
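As an illustration of what a nine-method, ten-country payments matrix can look like in practice, here is a hedged pytest sketch. Apart from Google Pay and Apple Pay, the method and country lists are hypothetical, and charge_test_ride is a placeholder stub; the interview does not describe Uber's actual matrix or tooling.

```python
# Hypothetical sketch of a parameterized payments matrix; most payment
# methods, all countries, and the checkout stub below are illustrative only.
import itertools
from types import SimpleNamespace

import pytest

PAYMENT_METHODS = ["google_pay", "apple_pay", "visa", "mastercard", "amex",
                   "paypal", "cash", "uber_cash", "upi"]               # 9 methods
COUNTRIES = ["US", "KR", "IN", "BR", "MX", "GB", "FR", "DE", "JP", "AU"]  # 10

def charge_test_ride(method: str, country: str, amount_cents: int):
    """Stub standing in for a real end-to-end checkout call."""
    return SimpleNamespace(status="authorized", error=None)

# 9 x 10 = 90 generated test cases, one per (method, country) pair.
@pytest.mark.parametrize("method,country",
                         itertools.product(PAYMENT_METHODS, COUNTRIES))
def test_checkout_authorizes(method, country):
    result = charge_test_ride(method, country, amount_cents=500)
    assert result.status == "authorized", \
        f"{method} failed in {country}: {result.error}"
```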
Cara Suarez: That absolutely makes sense. I feel like that is something that clearly had to evolve over time, and you mentioned bugs occasionally getting out into production. I'd love to ask about user feedback: how has it influenced your approach to mobile quality, and how have you iterated on your processes?

Nanda Kishore: Absolutely. A couple of years ago we did listen to user feedback, but we didn't have a structured, robust program around it. So one of the first things we did was resurrect our beta program, where we started providing white-glove service to our top earners. We put together a cohort of earners we wanted to focus on: people who have spent enough time in the system, who have taken a good number of trips on the platform, and who can make time to give us feedback on the mobile experience. We started sending out invites to have them sign up for the program, and we looped in our legal team and our corporate communications team to make sure the messaging was consistent about what we were attempting to do.

Once the program was set up, we ensured this cohort of users gets a first-class experience when they report bugs. As soon as a bug is reported, we make sure it is triaged to the relevant engineering team in less than four hours. We also look at the specific screens that could potentially contribute the most P0 and P1 bugs and prioritize those first, ahead of lower-priority bugs, and we look at customer-reported bugs to ensure they are prioritized ahead of internally reported ones. We also send regular status updates on how the submitted feedback is being used, month on month: the total number of bugs submitted, how many active beta users are on the platform, and the validity rate of the bugs submitted. Interestingly, we also looked at feature requests coming from this population and routed them to the relevant product teams so they could go into the product roadmap for Uber. That's how we kept reinforcing our processes by listening to user feedback, and that's exactly why I said shifting bug discovery left, early in the software development and release cycle, is going to be super important.
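A minimal sketch of the triage rules described above: a four-hour routing SLA, customer-reported bugs ahead of internal ones, and higher-priority issues first. The screen-to-team mapping and the Bug fields are hypothetical, not Uber's schema.

```python
# Hedged sketch of beta-bug triage; team names, screens, and fields
# are invented for illustration.
from dataclasses import dataclass
from datetime import datetime, timedelta

TRIAGE_SLA = timedelta(hours=4)  # route to an owning team within 4 hours
SCREEN_OWNERS = {"checkout": "payments-eng", "ride_request": "mobility-eng"}

@dataclass
class Bug:
    screen: str
    priority: int              # 0 = P0 (most severe), 1 = P1, ...
    customer_reported: bool
    reported_at: datetime

def route(bug: Bug) -> tuple[str, datetime]:
    """Pick an owning team and compute the SLA deadline for first triage."""
    team = SCREEN_OWNERS.get(bug.screen, "mobile-triage")
    return team, bug.reported_at + TRIAGE_SLA

def triage_order(bugs: list[Bug]) -> list[Bug]:
    """Customer-reported first, then by priority, then oldest first."""
    return sorted(bugs, key=lambda b: (not b.customer_reported,
                                       b.priority, b.reported_at))
```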
Cara Suarez: As you were telling that story, you talked about how you had to work cross-functionally with corporate communications, legal, and other departments to set up a really effective program. What advice would you give to other organizations squarely in tech, testing, or program operations about the importance of working cross-functionally, and how to do it effectively? Because not everyone understands what you're talking about on a technology level. Tell me a little about how you align people cross-functionally.

Nanda Kishore: My approach to aligning people cross-functionally is to focus on the user experience rather than the tech. It doesn't matter what the underlying tech stack powering the platform is; when I talk to my engineering or product teams, my initial pitch is: hey, we're looking to create a friction-free experience for our users. That's exactly what we've all aligned on at an org level: what a magical experience means. Keeping the narrative really simple, and making it resonate with everyone irrespective of the regions and teams they belong to, really stands out. Then it's about translating that experience into objective metrics that each of these teams can contribute to. In the context of mobile test automation, the KPI to focus on is the reliability of our mobile testing platform: how reliable is it, am I getting an instant feedback loop, am I able to run my mobile test suite across 100-plus cities? When it comes to GSS: am I triaging bugs, am I shifting left, am I identifying patterns in bugs, am I proactively adding test cases based on the bug trends I identify, am I reducing defect leakage, and is there a clear test coverage definition? Those are my KPIs. When it comes to a technical program management team, they break this down into: how is my release cycle, how many hot fixes are going into production, am I able to fix them in time? So everyone is working toward one common goal, which is to create this frictionless experience, and translating that into specific KPIs for individual teams and then cross-collaborating is the magic that I felt worked across the board. When you work with different teams, you can't go and tell a marketing team about the underlying tech stack, root cause analysis, bug validation, de-duplication, and so on. The pitch there is: hey, I'm trying to create a friction-free experience; I want my app to behave seamlessly, intuitively, and fast in every market where Uber operates. That's the background context, and therefore it's important to set up this beta program to get firsthand feedback from our beta users and to be able to act on that feedback, not just set up the program. I think that really stood out when we made the pitch.

Cara Suarez: I think that is such great advice: shape the message to be about the magical user experience rather than about bugs, which somebody might not really understand or feel inspired by. While you were talking, you also mentioned automation. I'd love to dig into that a little more: what role does automation play in your testing strategy, and how do you balance it with manual testing?

Nanda Kishore: That's a great question. Automation plays a very significant role in improving efficiency, enabling faster feedback loops, and increasing test coverage. With manual testing, one of the challenges we run into is that we can do only a limited number of runs, which is not the case with automation; with automation you can run tests across pipelines multiple times, even within a day. We have a very robust mobile and web automation framework today. We also have a lot of backend automation coverage, where we look at the API endpoints these services communicate with; if an endpoint goes down and the service is unable to communicate, that can potentially lead to a lot of L4/L5 incidents. So we constantly look at automation as a key lever to ensure we have sufficient coverage: we run several tests within a short span of time, across several cities, and that gives you the scale you're looking for.
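To illustrate the kind of backend endpoint check Nanda alludes to, here is a small sketch that probes a service's critical dependencies so a dead endpoint surfaces before it cascades into an incident. The URLs are placeholders, and this is a generic pattern rather than Uber's framework.

```python
# Generic endpoint health probe; URLs and thresholds are placeholders.
import requests

CRITICAL_ENDPOINTS = [
    "https://api.example.com/v1/rides/health",
    "https://api.example.com/v1/payments/health",
]

def failing_endpoints(urls, timeout_s: float = 2.0):
    """Return (url, reason) for every endpoint not answering HTTP 200."""
    failures = []
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout_s)
            if resp.status_code != 200:
                failures.append((url, f"HTTP {resp.status_code}"))
        except requests.RequestException as exc:
            failures.append((url, type(exc).__name__))
    return failures

if __name__ == "__main__":
    for url, reason in failing_endpoints(CRITICAL_ENDPOINTS):
        print(f"ALERT: {url} unhealthy ({reason})")  # page the owning team
```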
Nanda Kishore: However, having said that, while you have a robust automation strategy, it's equally important to balance it with manual testing, because with manual testing you have an edge: you can test for every edge case, every negative scenario, every regional nuance. Automation testing is more broad and shallow, whereas manual testing solves for a lot of the deeper scenarios. Having that right balance is extremely important, and then you look to continuously shift left on both. You get to a point where, when a mobile software engineer is attempting to land a diff, you immediately block that diff if the code change would lead to any kind of conflict in the larger app. That's where you have to find the right balance between mobile test automation, web test automation, backend integration testing, and manual testing.
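The diff-blocking idea Nanda mentions can be sketched as a pre-merge gate: run a fast smoke suite against the candidate change and refuse the diff on any failure. The pytest invocation and the 'smoke' marker are illustrative assumptions, not Uber's CI.

```python
# Hypothetical pre-merge gate: block a diff when the smoke suite fails.
import subprocess
import sys

def smoke_suite_passes() -> bool:
    """Run a fast subset of tests against the candidate build."""
    # '-m smoke' assumes tests tagged with a hypothetical 'smoke' marker.
    result = subprocess.run([sys.executable, "-m", "pytest",
                             "-m", "smoke", "--maxfail=1", "-q"])
    return result.returncode == 0

if __name__ == "__main__":
    if not smoke_suite_passes():
        print("Blocking diff: smoke tests failed against the candidate app.")
        sys.exit(1)  # a nonzero exit fails the CI check and blocks the merge
    print("Smoke suite green; diff may land.")
```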
Cara Suarez: Wow, it sounds really incredible to have that view of broad-and-shallow versus depth in the edge cases. I feel like that is advice, or a mantra, anyone can use in their organization, especially if they're trying to make a business case to invest in both methodologies. I actually wanted to ask you about some of the tools and technology Uber has adopted for mobile testing, and how that has evolved over time.

Nanda Kishore: In our very initial stages, we had a lot of manual operators running test cases on their personal devices: I have a device, I start running tests on it, and this is the device that I use. We used to track the results in Google spreadsheets and share them more broadly, which led to a lot of discrepancy. Gradually, we started investing in a fully functional device lab with simulators and a thousand-plus devices across brands and OS versions. That's exactly where Kobiton comes in: Kobiton played a pretty significant role in establishing our on-prem device lab, and today we are in a much better position in terms of the tools, processes, and technologies we've been able to adopt. We were able to quickly move to a modern project management tool, we use a dedicated test case management system today, and we also invest heavily in Google Data Studio for anything related to dashboarding these results. With a greater understanding of the product, we also broke down the entire application into components and capabilities so we can test every single block of the app. So imagine the shift from personal mobile devices and spreadsheets to a Kobiton device lab at scale: 3,000-plus devices and emulators running test cases, automated dashboards, innovative operating models, a test case management system, and so on. That's the shift you're talking about over that period of time.

Cara Suarez: Even though it was over a period of time, it seems like you evolved really rapidly. How did you get the internal alignment to have finance approve investment in the technology, and also investment in people? That's something I think a lot of quality organizations struggle with: being a little under-resourced. Tell us how you overcame that.

Nanda Kishore: Generally, we identify the pain points we want to solve. For example, earlier I spoke of manually tracking results in Google spreadsheets. We felt this was absolutely suboptimal, because if someone is not in the office, we don't have access to the report that individual put together from a test; we don't know what exactly happened. So we had to quickly move to a test case management platform. Likewise, for mobile devices, it was extremely hard to keep track of the number of physical devices and who they were assigned to at the scale at which we operate, so it made total sense to invest in a setup like an on-prem device lab in partnership with Kobiton, where we have so many different devices and versions that we can test seamlessly irrespective of where we are in the world. I can be operating from home, from a remote office, or from my own office location, and the experience is super seamless. The beauty of this setup is that a lot of my testers today are based in India; while we do a lot of on-road testing globally, we also do comprehensive testing out of India, and with the device lab based in India there is lower latency and near-real-time access, so the entire experience is pretty seamless. It's extremely important to focus on what the pain points are and surface them at the right level for budget investments.

Likewise, when it comes to components and capabilities, we doubled down on the test coverage definition I spoke of at the very beginning of this conversation. We broke the application down into various streams or surfaces; then, within each of those surfaces and screens, we broke it down further into capabilities and components, and that's where we started measuring objectively what test coverage looks like. Initially it was a bit difficult to get buy-in, but once we had objectively established what test coverage means for a mobile app release, aligned engineering teams across the board, and run pilots to show results and end impact, it became much easier to scale these programs with teams that had not initially volunteered for the pilots. It's a journey, a slightly iterative cycle, but you eventually get buy-in when you have objective numbers to share and end results that clearly show the experience has improved. We are talking about 99.5% to 99.8% uptime of our mobile releases; how do you make that happen? A year and a half ago, if I took a trip and, in the midst of it, tried to switch my ride or my payment method, an error would pop up saying "Oops, something went wrong," without telling me as a user what to do to get out of the situation. Today we test each of those scenarios comprehensively, ensuring users don't have to think about the nature of the underlying issue and can instead focus on the experience: click a button, book a ride, and get to the destination.
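Here is a hedged sketch of the surface, capability, and component breakdown Nanda describes, with coverage measured objectively at each level. The hierarchy, names, and counts are invented to show the structure, not Uber's real breakdown.

```python
# Illustrative coverage model; surfaces, capabilities, and counts are made up.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    cases_defined: int
    cases_executed: int

    @property
    def coverage(self) -> float:
        return self.cases_executed / self.cases_defined

APP = {  # surface -> capability -> components
    "rider": {
        "payments": [Component("add_card", 40, 38),
                     Component("switch_method", 25, 25)],
        "booking":  [Component("fare_estimate", 30, 27)],
    },
}

def capability_coverage(components: list[Component]) -> float:
    """Roll component-level coverage up to the capability level."""
    executed = sum(c.cases_executed for c in components)
    defined = sum(c.cases_defined for c in components)
    return executed / defined

for surface, capabilities in APP.items():
    for capability, comps in capabilities.items():
        print(f"{surface}/{capability}: {capability_coverage(comps):.0%}")
```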
Cara Suarez: It's really incredible, when you look at it through that lens, what you've been able to accomplish in such a short time. We're coming up to the end of our session, so I would love to hear some closing thoughts, and perhaps some advice for any mobile testing organization that might be where Uber was eight years ago, or five years ago. What advice would you give them?

Nanda Kishore: I would say a big disruption, or adoption if I may say so, that's coming our way is the adoption of GenAI for mobile testing, and I see AI playing a very key role in shaping the future of mobile testing. We ourselves have attempted several pilots where you have a GenAI model, we have our ERDs and PRDs for the features, we give those as input to the model, and the model itself helps write a test plan, the strategic test plan for each of the features. It can design comprehensive test cases, including edge cases and negative cases. So a big adoption of AI as we do mobile testing at scale is something I would definitely advise teams to look at as they think about mobile testing and its evolving landscape globally, especially with the advancements happening in the industry today. That's a clear call-out. Somewhere down the line, we are getting to a point where GenAI looks like it can even analyze the code written by engineering teams and make a call on whether it will adversely impact the user experience. So this is really disrupting what we did, but in a positive way. Adopting GenAI, and bringing GenAI expertise into the software development lifecycle and your quality engineering and assurance process, is going to make a huge difference, and I feel that if teams adopting mobile testing strategies could double down on it, it would be hugely beneficial for everyone on board.

Cara Suarez: I think that is excellent advice, and hopefully actionable; our audience will certainly take that to heart. So again, Nanda, it was an absolute pleasure getting to speak with you.
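As a closing illustration of the GenAI-assisted test design Nanda describes, here is a minimal sketch that feeds a PRD to a language model and asks for a test plan with edge and negative cases. The OpenAI SDK, model name, and prompt are illustrative choices; the interview does not say which model or tooling Uber piloted.

```python
# Hedged sketch: draft test cases from a PRD with an LLM. The model name,
# prompt, and use of the OpenAI SDK are illustrative, not Uber's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_test_plan(prd_text: str) -> str:
    """Ask the model for a structured test plan, including negative cases."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a senior mobile QA engineer. Produce a "
                        "test plan with functional, edge, and negative "
                        "cases, each with steps and expected results."},
            {"role": "user", "content": prd_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    prd = "PRD: riders can switch payment method mid-trip without rebooking."
    print(draft_test_plan(prd))
```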