Top 5 Things I Learned From Odyssey


Quality Assurance/Engineering is a rapidly growing field aimed at achieving a high-quality standard across the software development life cycle. It incorporates not only functional and non-functional testing and automation, but also the management, development, operation, and maintenance of systems and architecture.

To put it bluntly, QEs do a lot, and the know-how and ‘best practices’ aren’t always clear or well-defined, leaving testers struggling in their efforts and falling short of their full potential. With this in mind, Kobiton had the pleasure of engaging with the testing community through its virtual, interactive conference, Odyssey.

Odyssey Test Conference took thousands of attendees on a journey through insightful presentations from testing thought-leaders, automation gurus, and QA rockstars, educating and inspiring testers and anyone else who prioritizes quality. Odyssey presenters also equipped QEs with the tools and knowledge needed to improve their own processes. The conference was jam-packed with outstanding, quality-focused sessions; here are the top 5 things I learned from Odyssey.


1. Key Testing Pillars: Building your QA Toolkit, Defining your Strategy, and Reporting Your Findings

Whether you’re a seasoned quality engineer or just starting out in testing, it’s important to have your foundational knowledge set, aka your QA toolkit, as well as a defined strategy, or plan of action, before executing. And, of course, you’ll need to report your findings to your team and to relevant individuals in your organization. Together, these pillars help achieve a seamless testing experience.

So, what exactly is your QA toolkit? Julia Pottinger gave a great presentation at Odyssey to help us understand what we have, and what we need, in our QA toolkit. Julia explained that your QA toolkit is a personal set of resources, abilities, and skills used for a particular purpose, such as testing. She walked us through 5 ways to build your QA toolkit so you can get started and feel more confident in your testing:

  1. Create a list of objectives: What do you currently know and what do you want to learn? By listing out objectives, you provide a clear plan of action to research and learn on relevant topics, like learning the different software development methodologies or learning a new language or automation framework.
  2. Set realistic expectations: Go easy on yourself and set a pace that realistically works best for you.
  3. Dedicate time to learn: Quality engineers already do so much, so it’s hard to find the time to learn new tools. Ask your team to allocate time for you to learn, or budget your time after work hours. Either way, you are strengthening a toolkit that will add value back into your work.
  4. Use resources and mentors: You are not alone in your endeavors, and there are fantastic communities available to help you – use them! Sign up for webinars and testing conferences, like Odyssey, and join online communities like TestGuild, Ministry of Testing, and MobileDevTestOps.
  5. Gain multiple project experiences: Don’t feel like you need to learn everything at once! Focus on what is needed at hand, and over time your QA toolkit will grow as you gain experience across different projects.

Building your QA toolkit will leave you prepared for the other pillars of quality engineering: defining your testing strategy and reporting your findings to your team. At Odyssey, in her presentation ‘Get Your Test Strategy Set’, Varuna Srivastava recommended defining a strategy that encompasses all the testing types for your project, and discussing that strategy with your team to agree upon the execution process and timeline of your testing. Once your strategy is defined and executed, it’s time to report. In his presentation ‘The Quality Dashboard’, Gil Zilberfeld shared his ingredients for successful reporting:

  • Listen to management and create useful reports based on open communication.
  • Maximize developer effectiveness, as a goal, by reporting useful information to developers.
  • Focus on the things that matter, and show results early and often.
  • Be transparent in your reporting, and become part of the system you are testing.

2. Bring Security Testing into Your Test Strategy

To be completely honest, security testing had never popped up on my radar as something to include in my QA toolkit & strategy. I always assumed there was a designated individual or team who covered security testing. To my surprise, Adam Satterfield explained that this is not always the case. In fact, in his presentation ‘Security Testing is Everyone’s Responsibility!’, he went on to illustrate how there is actually a shortage of security testers and an overall lack of understanding of how to perform security testing. This leads to delayed delivery times and leaves your application and user data vulnerable. Adam continued his presentation by giving wonderful insight into how planning and executing security testing is no different from planning and executing functional testing, UI testing, unit testing, etc. Thus, security testing can easily be integrated into your QA strategy.

First off, what is security testing anyway? Adam explained that security testing aims to uncover vulnerabilities, threats, and risks, imposed by malicious attacks from intruders, that could expose data and user information. Think brute-force attacks, password hijacking, SQL and JavaScript injections – scenarios where hackers are ready to take advantage of your system’s vulnerabilities; your system must be ready to counter those attacks and keep valuable data safe and protected. Adam went on to describe two types of security testing: penetration testing and application security testing. Penetration testing, or pentesting/ethical hacking, is an authorized, simulated cyber attack on a system to evaluate its security. Application security testing involves measures taken to improve the security of an application, often by finding, fixing, and preventing security vulnerabilities. Adam also mentioned OWASP (the Open Web Application Security Project) as a useful resource for referencing types of attacks and how to test for them.

Adam outlined 7 steps in security testing, and how these steps are not far off from the usual quality assurance testing strategy:

  1. Pre-engagement – Security testing, like all realms of testing, should be given a dedicated amount of time; open the conversation early in the development process. This can happen during planning, backlog refinement, or any relevant requirements phase.
  2. Intelligence Gathering – Gather as much useful information as you can to help define your strategy. What kind of application is this? What kinds of users? Will there be different logins, permissions, etc.? You’ll already have all this information in mind when preparing for quality testing – just bring security testing into the same process!
  3. Threat Modeling – Gather relevant documentation and identify and categorize your primary and secondary assets. Then identify and categorize threats and threat communities, and map threat communities to those assets to get a better understanding of what to expect. For example, if my primary asset is a front-end web application, I could expect brute-force attacks or SQL/JavaScript injections and map them to the API layer and DB layer of the system.
  4. Vulnerability Analysis – More research! What are all the attacks that could be performed? This is very similar to the exploratory testing already defined in your strategy.
  5. System Exploitation – Time to execute your security testing! As mentioned before, OWASP and the internet are excellent resources to help you learn, grow, and execute security testing.
  6. Post Exploitation – Clean up after your testing, for example by re-imaging VMs, to keep yourself organized.
  7. Reporting – Explicitly outline your testing efforts in a concise report to help you and your team pinpoint system weaknesses.
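As a rough illustration of the threat-modeling step above, the asset-to-threat mapping could be sketched as a plain lookup table. The specific assets and threats here are hypothetical examples, not taken from Adam’s talk:

```javascript
// Minimal sketch of threat modeling (step 3): identify assets,
// then map each threat community to the assets it could reach.
// All names below are illustrative assumptions.
const assets = {
  primary: ['front-end web application'],
  secondary: ['API layer', 'DB layer'],
};

// Map each threat to the assets it threatens.
const threatMap = {
  'brute force attack': ['front-end web application'],
  'SQL injection': ['API layer', 'DB layer'],
  'JavaScript injection': ['front-end web application', 'API layer'],
};

// List the threats that touch a given asset, to prioritize testing.
function threatsFor(asset) {
  return Object.keys(threatMap).filter((t) => threatMap[t].includes(asset));
}

console.log(threatsFor('API layer'));
// ['SQL injection', 'JavaScript injection']
```

Even a tiny table like this makes it obvious which layers of the system need which security tests before you start executing.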

Adam strove to drive home that security test planning and execution really are no different from regular test planning and execution. The two can be combined and integrated into your core testing suites for optimal coverage, and automation can even be leveraged to support security testing. By learning and understanding security testing, you can open the conversation and build relationships around security testing that benefit your QA toolkit and your team. Adam also provided great resources for getting started in security testing, including joining communities, like Hacker101 and The Cyber Mentor, engaging and participating in bug bounty programs, like HackerOne, and getting started with online learning and practice.

3. The Importance of Shift-Right Testing in Achieving Continuous Testing

As a quality engineer, it has always felt taboo to test in production. We live and play in isolated dev/test environments, after all. But with that narrow mindset, quality engineers end up passing on valuable information only the production environment can provide. In his presentation, ‘Improving Quality with Shift-Right Testing’, Coty Rosenblath explained that the production environment involves the most complex integrated systems and the most complete set of dependencies, and thus provides rich information: data and user-transaction events that would otherwise be extremely hard to simulate or reproduce in other environments. At that scale, the production environment holds a lot of value for testing.

So how is shift-right testing different from shift-left testing? Coty did a phenomenal job of defining the two. Shift-left testing is what most quality engineers are familiar with. It is preventive testing performed to validate the functionality of a system, done early and often within every stage of development prior to releasing to production. Shift-right testing, or testing in production, involves a more detective approach. It leverages the rich production environment to reveal new and unexpected scenarios that might not have been detected in other environments. This new information can help improve the application’s performance and user experience. It can also help improve shift-left testing, producing more efficient testing and, thus, faster delivery times. Shift-right testing can also drive feature modifications based on user feedback, creating a fine-tuned system that meets the needs of its users. With shift-left and shift-right testing powers combined, continuous testing is achieved, delivering optimal coverage.

Coty continued his presentation by shining light on some shift-right testing practices. He explained two deployment methods:

  1. Canary Release – Release a new version of your application to a small subset of users in production. Think ‘beta’ environment. This is often beneficial when transitioning to new infrastructure or testing back-end features. Production functionality testing, along with user feedback, will tell you whether the system is running as intended, without impacting the entire user base.
  2. Dark Launches – Similar to a canary release, a dark launch also releases to a subset of users, but focuses on specific new application features gated behind feature flags. Quality engineers have more control over testing new features in production, and based on user feedback, your team can update features accordingly. However, dark launches come at a cost: feature flags are tedious to maintain and can clutter up codebases.
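As a rough sketch of how a dark launch might gate a feature, here is a minimal percentage-based feature flag. All names and the rollout scheme are illustrative assumptions, not from Coty’s talk:

```javascript
// Minimal sketch of a feature flag for a dark launch: expose the new
// feature to a fixed percentage of users, deterministically per user.
const ROLLOUT_PERCENT = 10; // roughly 10% of users see the new feature

// Hash a user id into a stable 0-99 bucket, so the same user always
// gets the same experience across sessions.
function bucket(userId) {
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

function isFeatureEnabled(userId) {
  return bucket(userId) < ROLLOUT_PERCENT;
}

// The application branches on the flag; old behavior stays the default.
function search(userId, query) {
  return isFeatureEnabled(userId)
    ? `new-search:${query}`
    : `old-search:${query}`;
}
```

The maintenance cost mentioned above shows up here too: every flag adds a branch like the one in `search`, which has to be cleaned up once the feature fully ships.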

Along with these deployment methods, shift-right testing lets you monitor user experience – to pay attention to, and learn from, your users. By instrumenting your system to monitor how users use it, you can reconstruct those same behaviors and states. Why did this user abandon the app or cart? What went wrong? Shift-right testing exposes those scenarios to help you optimize your system. But how do you go about instrumenting your system to do just that? There are commercial and open-source solutions available, but you can also implement an event monitoring system yourself – another thing I learned at Odyssey.

4. Automating Analytics

Pragati Sharma continued the conversation on the importance of learning and understanding your users’ experience to better your application in her presentation ‘Automating Analytics Data Layer’. She emphasized that user-experience insights help you understand how each feature is functioning and how users are responding to your application. This can expose potential bottlenecks within your system and signify performance issues, like product search load time, that lead to app/cart abandonment. As mentioned before, there are commercial and open-source solutions available to facilitate user-experience insights, but Pragati shared her own way of capturing a ‘user journey’ through JavaScript’s ‘Event’ interface and automation.

Pragati first illustrated how to capture user actions within a user journey. These actions are the step-by-step moments a user experiences when traversing an application – steps quality engineers are already familiar with:

  1. Load page
  2. Search for product
  3. ‘Click’ search
  4. Load search results
  5. ‘Click’ a product
  6. Load product details page

Pragati showed us how you can capture these user actions by binding the relevant data of each step to an ‘event’ and pushing events to a JavaScript object she defined as DigitalData. These pushed events create an analytics data layer comprising each step, which can then be analyzed to generate analytics metrics and reports. Pushed events are not limited to ‘search’, ‘click’, ‘type’, etc. – you can programmatically bind any and all relevant data to be pushed to the data layer. Pragati explained how she does this by organizing her event data using nodes within the DigitalData object.
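A minimal sketch of what such a data layer could look like follows. The DigitalData object name comes from the talk, but the event shape and helper function here are my own assumptions:

```javascript
// Minimal sketch of an analytics data layer. `DigitalData` is the object
// named in Pragati's talk; the event shape below is an assumption.
const DigitalData = { events: [] };

// Push one user action, binding any relevant data to the event.
function trackEvent(action, detail = {}) {
  DigitalData.events.push({
    action,                // e.g. 'pageLoad', 'search', 'click'
    detail,                // arbitrary context for this step
    timestamp: Date.now(), // when it happened
  });
}

// In the browser, DOM event handlers would call trackEvent; here we
// simulate the first steps of the user journey listed above.
trackEvent('pageLoad', { page: 'home' });
trackEvent('search', { query: 'sneakers' });
trackEvent('click', { element: 'search-button' });
trackEvent('pageLoad', { page: 'search-results' });

console.log(DigitalData.events.map((e) => e.action));
// ['pageLoad', 'search', 'click', 'pageLoad']
```

Because each event carries its own `detail` payload, any step-specific data (the search query, the clicked element, the product id) rides along into the analytics layer.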

But wait, there’s more! Pragati continued her presentation by diving into how she uses the Cypress automation tool to automate and validate that the relevant data is getting passed on to downstream systems. By automating the analytics data layer, you combine analytics with your existing functional tests and get faster feedback, creating more robust suites. Above all else, you are able to simulate the user journey for capturing analytics.
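The kind of check such a test makes against the data layer can be sketched in a framework-agnostic way; in Cypress it would run against the page’s DigitalData object, but the event names and validation helper here are illustrative assumptions:

```javascript
// Framework-agnostic sketch of the assertion an automated test might
// make against the data layer: verify the captured journey matches the
// expected sequence of user actions. Event names are assumptions.
function validateJourney(dataLayer, expectedActions) {
  const actual = dataLayer.events.map((e) => e.action);
  return (
    actual.length === expectedActions.length &&
    actual.every((a, i) => a === expectedActions[i])
  );
}

// Example: a data layer captured during a simulated search journey.
const captured = {
  events: [
    { action: 'pageLoad' },
    { action: 'search' },
    { action: 'click' },
  ],
};

console.log(validateJourney(captured, ['pageLoad', 'search', 'click'])); // true
console.log(validateJourney(captured, ['pageLoad', 'click'])); // false
```

Wired into an existing functional suite, a check like this fails fast whenever a code change silently drops or reorders analytics events.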

5. Make Yourself Heard

One of the biggest takeaways from Odyssey, brought up in numerous presentations, was the supportive encouragement to engage and communicate with your fellow quality engineers, developers, team members, and colleagues – to spark the relevant conversations not just in planning, but within every process of the development life cycle, and not just with direct team members, but with external individuals as well. This was driven home beautifully in Erika Chestnut’s presentation ‘Gatekeeper to Enabler’, where Erika used the ‘80s cult classic ‘The Neverending Story’ to illustrate 5 strategies for how “quality can transform through collaboration and enablement into a mystical, magical being that encourages, enables, and empowers others”:

  • Strategy #1: Define Quality – What is acceptable and what is not? What standard of quality do you uphold? When we collaborate on these measures of success and align them with core business values, we enable and empower everyone in the organization to infuse quality into their area of the business in a way that everybody understands and supports. We create a guiding light where we can seek answers from others and get help. We are able to support one another toward common goals when quality is defined.
  • Strategy #2: Build Relationships and Trust, the Foundation of Teamwork – A team without trust results in poor quality and project failure. Therefore, when defining quality, strive to build your team up, not to tear it down, shame, or call out. This positive interaction will be extremely beneficial to you and your team in working toward common goals.
  • Strategy #3: Leverage Influence & Be the Pathmaker for Quality – You are not limited to just testing. You have experience (and an outstanding QA toolkit) and a voice that can influence the culture of quality in your organization. Make your voice heard! Speak up about changing practices, when necessary, to maintain quality standards. Speak less about QA as a silo in the delivery process, and more about how to infuse and engineer quality throughout the delivery lifecycle by driving those standards as well as providing testing. And elevate the expectation, creating opportunities and space at the table for other voices to be heard. Voices hold value and influence.
  • Strategy #4: Enable Others to Lead – As space at the table is being made for other voices to be heard, empower others to leverage quality practices and standards to be leaders of quality. This can be done by simply empowering others with the knowledge to act on the story they are already a part of; that you all are a part of together.
  • Strategy #5: The 3 C’s: Collaborate, Communicate, & Coordinate – Through collaboration, communication, and coordination, any tension that surfaces as we engage in spaces where we are not traditionally vocal can be alleviated. Take ownership of the conversation to collaborate and engineer a cohesive test strategy across the test pyramid. Communicate to influence others from the top down, and coordinate the execution of your defined strategy through training, rollouts, and the monitoring of success metrics.

Erika continued by saying that by believing and trusting in yourself, you can help others recognize the value of quality and drive quality forward. Just as the characters in ‘The Neverending Story’ supported, encouraged, and influenced one another to save Fantasia, you are also capable of encouraging and empowering others to help cultivate the space and opportunity to open conversations and influence quality. As Erika concluded her presentation, she remarked, “Quality really is a never-ending story.”

An extra takeaway: Kobiton has some exciting things planned for the future!

In case you missed Odyssey, Kobiton hosted our first-ever “Innovation Panel” where our leadership team shared some insight into where Kobiton is looking to take the Testing industry over the next year. There were some really exciting roadmap items shared, but I’m most excited about the Impact Analysis feature coming out later this year.

Kobiton’s Impact Analysis capability will leverage our existing NOVA-driven Visual Testing solution to scan for visual differences between a new version of an app and an older version that has already been tested using Kobiton. Because Kobiton stores your test session history, NOVA will be able to see exactly where app changes have an impact, identify the test cases affected, and then let you remediate those changes within the Kobiton remediation interface. Maintaining tests can be infuriating, so I see this as something that is going to completely change the Mobile Testing game!

Wrapping Up

Within the fast-paced DevOps world, it can be overwhelming to keep up with new processes, methods, tools, and frameworks. It’s important to establish the key pillars of testing, build the QA toolkit that suits you, and find the practices and strategies that work for you and your team. Luckily, there are amazing resources, communities, and conferences, like Odyssey, that help you learn about new and existing trends, methods, and tools, so that you never miss a beat. And, bonus! You get to meet and connect with pretty stellar, like-minded individuals, and you have the power to encourage and enable others in the same pursuit.

The recordings from Odyssey are available now!
