What We Learned from Odyssey
Adam Creamer
Quality Assurance/Engineering is a rapidly growing field aimed at achieving a high standard of quality throughout the software development life cycle. It incorporates not only functional and non-functional testing and automation, but also the management, development, operation, and maintenance of systems and architecture.
To put it bluntly, QEs do a lot, and the know-how and ‘best practices’ aren’t always clear or well defined, leaving testers struggling in their efforts and falling short of their full potential. With this in mind, Kobiton had the pleasure of engaging with the testing community through its virtual, interactive conference, Odyssey.
Odyssey Test Conference took thousands of attendees on a journey through insightful presentations from testing thought-leaders, automation gurus, and QA rockstars, educating and inspiring testers as well as anyone who prioritizes quality. Odyssey presenters also gave QEs the tools and knowledge needed to improve their own processes. The conference was jam-packed with outstanding, quality-focused sessions; here are the top 5 things I learned from Odyssey.
Whether you’re a seasoned quality engineer or just starting out in testing, it’s important to have your foundational knowledge set, aka your QA toolkit, as well as a defined strategy, or plan of action, before you execute. And, of course, you’ll need to report your findings to your team and to the relevant individuals in your organization. These important pillars help achieve a seamless testing experience.
So, what exactly is your QA toolkit? Julia Pottinger gave a great presentation at Odyssey to help us understand what we have and what we need in our QA toolkit. Julia explained that your QA toolkit is a personal set of resources, abilities, and skills used for a particular purpose, such as testing. She walked us through 5 ways to build your QA toolkit so you can get started and feel more confident in your testing:
Building your QA toolkit prepares you for the other pillars of quality engineering: defining your testing strategy and reporting your findings to your team. At Odyssey, in her presentation ‘Get Your Test Strategy Set’, Varuna Srivastava recommended defining a strategy that encompasses all of the testing types for your project, then discussing that strategy with your team to agree on the execution process and timeline of your testing. Once your strategy is defined and executed, reporting comes next. In his presentation ‘The Quality Dashboard’, Gil Zilberfeld shared the ingredients of successful reporting: listen to management and create useful reports based on open communication; make maximizing developer effectiveness a goal by reporting useful information to developers; focus on the things that matter and show results early and often; be transparent in your reporting and become part of the system you are testing.
To be completely honest, security testing had never popped up on my radar as something to include in my QA toolkit and strategy. I always assumed there was a designated individual or team who covered security testing. To my surprise, Adam Satterfield explained that this is not always the case. In fact, in his presentation ‘Security Testing is Everyone’s Responsibility!’, he illustrated how there is actually a shortage of security testers and an overall lack of understanding of how to perform security testing. This leads to delayed delivery times and leaves your application and user data vulnerable. Adam continued by giving wonderful insight into how planning and executing security testing is no different from planning and executing functional testing, UI testing, unit testing, etc. Thus, security testing can easily be integrated into your QA strategy.
First off, what is security testing anyway? Adam explained that security testing aims to uncover the vulnerabilities, threats, and risks posed by malicious attacks that could expose data and user information. Think brute-force attacks, password hijacking, SQL and JavaScript injections – scenarios where hackers are ready to take advantage of your system’s vulnerabilities, so your system must be ready to counter those attacks and keep valuable data safe and protected. Adam went on to describe two types of security testing: penetration testing and application security testing. Penetration testing, also called a pentest or ethical hacking, is an authorized, simulated cyber attack on a system to evaluate its security. Application security testing involves measures taken to improve the security of an application, often by finding, fixing, and preventing security vulnerabilities. Adam also mentioned OWASP (the Open Web Application Security Project) as a useful resource on types of attacks and how to test for them.
Adam outlined 7 steps in security testing and showed how they are not far off from the usual quality assurance testing strategy:
Adam strove to drive home that security test planning and execution really is no different from regular test planning and execution. The two can be combined and integrated into your core testing suites for optimal coverage, and automation can even be leveraged to support security testing. By learning and understanding security testing, you can open the conversation and build relationships around security that benefit your QA toolkit and your team. Adam also provided great resources for getting started in security testing, including joining communities like Hacker101 and The Cyber Mentor, engaging in bug bounty programs like HackerOne, and getting started with online learning and practice.
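To make this concrete, here’s a minimal, hypothetical sketch of how a basic security check could ride along with an existing UI automation suite. It is not from Adam’s talk: the /login route, the data-test selectors, the payloads, and the expected behavior are all assumptions, written as a Cypress test in TypeScript.

```typescript
// cypress/e2e/login-security.cy.ts
// Illustrative only: the /login route, selectors, and messages are assumptions.
describe('Login form handles hostile input gracefully', () => {
  const payloads = [
    "' OR '1'='1",                    // classic SQL injection probe
    '<script>alert("xss")</script>',  // reflected XSS probe
  ];

  payloads.forEach((payload) => {
    it(`does not break or leak internals for payload: ${payload}`, () => {
      cy.visit('/login');
      cy.get('[data-test="email"]').type(payload, { parseSpecialCharSequences: false });
      cy.get('[data-test="password"]').type('not-a-real-password');
      cy.get('[data-test="submit"]').click();

      // A graceful failure keeps the user on the login page...
      cy.url().should('include', '/login');
      // ...and exposes no raw database errors or stack traces in the UI.
      cy.contains(/sql syntax|stack trace|exception/i).should('not.exist');
    });
  });
});
```

Checks like this don’t replace a proper pentest or a bug bounty program, but they keep the basics covered on every run of your existing suite.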
As a quality engineer, I always felt it was taboo to test in production. We live and play in isolated dev/test environments, after all. But with that narrow mindset, quality engineers end up passing up valuable information that only the production environment can provide. In his presentation ‘Improving Quality with Shift-Right Testing’, Coty Rosenblath explained that the production environment involves the most complex integrated systems and the most complete set of dependencies, and thus provides rich information: data and user-transaction events that would otherwise be extremely hard to simulate or reproduce in other environments. With that scale and realism, the production environment holds a lot of value for testing.
So how is shift-right testing different from shift-left testing? Coty did a phenomenal job of defining the two. Shift-left testing is what most quality engineers are familiar with: preventive testing performed to validate the functionality of a system, done early and often within every stage of development prior to releasing to production. Shift-right testing, or testing in production, takes a more detective approach. It leverages the rich production environment to reveal new and unexpected scenarios that might not be detected in other environments. This new information can help improve application performance and user experience. It can also help improve shift-left testing, producing more efficient tests and, in turn, faster delivery times. Shift-right testing can also drive feature modifications based on user feedback, creating a fine-tuned system that meets the needs of its users. With shift-left and shift-right testing powers combined, you achieve continuous testing and optimal coverage.
Coty continued his presentation by shining a light on some shift-right testing practices. He explained two deployment methods:
Along with different deployment methods, shift-right testing lets you monitor the user experience – to pay attention to and learn from your users. By instrumenting your system to monitor how people use it, you can reconstruct their behaviors and states. Why did this user abandon the app or cart? What went wrong? Shift-right testing exposes those scenarios to help you optimize your system. But how do you go about instrumenting your system to do just that? There are commercial and open-source solutions available, but you can also implement an event monitoring system yourself – another thing I learned at Odyssey.
Pragati Sharma continued the conversation on the importance of learning and understanding your users’ experience to better your application in her presentation ‘Automating Analytics Data Layer’. She emphasized that user experience insights help you understand how each feature is functioning and how users are responding to your application. They can expose potential bottlenecks within your system and flag performance issues, like product-search load time, that lead to app or cart abandonment. As mentioned before, there are commercial and open-source solutions available for gathering user experience insights, but Pragati shared her own way of capturing a ‘user journey’ through JavaScript’s ‘Event’ interface and automation.
Pragati first illustrated how to capture user actions within a user journey. These are the step-by-step moments a user goes through when traversing an application – moments quality engineers are already familiar with:
Pragati showed us how you can capture these user actions by binding the relevant data of each step to an ‘event’ and pushing events to a JavaScript object she defined as DigitalData. The pushed events build an analytics data layer, one entry per step, that can then be analyzed to generate analytics metrics and reports. Pushed events are not limited to ‘search’, ‘click’, ‘type’, etc.; you can programmatically bind any and all relevant data to be pushed to the data layer. Pragati explained how she does this by organizing her event data using JavaScript nodes within the DigitalData object.
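I couldn’t capture Pragati’s implementation verbatim, but the core idea looks roughly like the sketch below: listen for a DOM event, then push a structured record onto a digitalData-style object. The window.digitalData name, the event shape, and the selectors are my own assumptions for illustration, written in TypeScript.

```typescript
// A rough sketch of a digitalData-style analytics data layer.
// The window.digitalData name, event shape, and selectors are assumptions,
// not Pragati's exact implementation.
export {}; // make this file a module so the global augmentation below is valid

interface AnalyticsEvent {
  eventName: string;                   // e.g. 'search', 'click', 'type'
  component: string;                   // which part of the UI fired the event
  detail?: Record<string, unknown>;    // any extra data worth binding to the step
  timestamp: string;
}

declare global {
  interface Window {
    digitalData: { events: AnalyticsEvent[] };
  }
}

// Initialize the data layer once, early in the page lifecycle.
window.digitalData = window.digitalData ?? { events: [] };

// Push one step of the user journey onto the data layer.
function pushEvent(eventName: string, component: string, detail?: Record<string, unknown>): void {
  window.digitalData.events.push({
    eventName,
    component,
    detail,
    timestamp: new Date().toISOString(),
  });
}

// Bind a listener via the DOM Event interface so a search becomes a recorded step.
document.querySelector('#search-button')?.addEventListener('click', () => {
  const input = document.querySelector<HTMLInputElement>('#search-input');
  pushEvent('search', 'product-search', { query: input?.value ?? '' });
});
```

Because every pushed record carries its own detail payload, the same object that feeds downstream analytics also doubles as a readable record of the user journey.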
But wait, there’s more! Pragati continued by diving into how she uses the Cypress automation tool to automate and validate that the relevant data is getting passed on to downstream systems. By automating the analytics data layer, you combine analytics checks with your existing functional tests and get faster feedback, creating more robust suites. Above all else, you are able to simulate the user journey for capturing analytics.
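Carrying over the same assumptions as the previous sketch, a Cypress spec could then assert on the data layer right next to the functional checks. Again, the selectors and the digitalData shape are hypothetical, not Pragati’s actual suite.

```typescript
// cypress/e2e/analytics-data-layer.cy.ts
// Sketch: validate the analytics data layer alongside a functional search test.
// Assumes the window.digitalData shape from the previous sketch; selectors are hypothetical.
describe('Product search: functional result plus analytics event', () => {
  it('records a search event carrying the query the user typed', () => {
    cy.visit('/');
    cy.get('#search-input').type('running shoes');
    cy.get('#search-button').click();

    // Functional assertion: the user sees results.
    cy.get('[data-test="search-results"]').should('be.visible');

    // Analytics assertion: the data layer captured the same action.
    cy.window().its('digitalData.events').should((events: any[]) => {
      const searches = events.filter((e) => e.eventName === 'search');
      expect(searches.length).to.be.greaterThan(0);
      expect(searches[searches.length - 1].detail.query).to.equal('running shoes');
    });
  });
});
```

One functional test now gives two kinds of feedback: whether the feature works, and whether the analytics pipeline saw the same user journey.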
One of the biggest takeaways from Odyssey, brought up in numerous presentations, was the supportive encouragement to engage and communicate with your fellow quality engineers, developers, team members, and colleagues – to spark the relevant conversations not just in planning but within every stage of the development life cycle, and not just with direct team members but with external individuals as well. This was driven home beautifully in Erika Chestnut’s presentation ‘Gatekeeper to Enabler’, where Erika used the ’80s cult classic ‘The Neverending Story’ to illustrate 5 strategies for how “quality can transform through collaboration and enablement into a mystical, magical being that encourages, enables, and empowers others”:
Erika continued by saying that by believing and trusting in yourself, you can help recognize the value of quality and drive quality forward. Just as the characters in ‘The Neverending Story’ supported, encouraged, and influenced one another to save Fantasia, you are also capable of encouraging and empowering others, cultivating the space and the opportunity to open conversations and influence quality. As Erika concluded her presentation, she remarked, “Quality really is a never-ending story”.
In case you missed Odyssey, Kobiton hosted its first-ever “Innovation Panel”, where our leadership team shared some insight into where Kobiton is looking to take the testing industry over the next year. There were some really exciting roadmap items shared, but I’m most excited about the Impact Analysis feature coming out later this year.
Kobiton’s Impact Analysis capability will leverage our existing NOVA-driven Visual Testing solution to scan for visual differences between a new version of an app and an older version that has already been tested using Kobiton. Because Kobiton stores your test session history, NOVA will be able to see exactly where app changes have introduced differences and points of impact, identify the test cases affected, and then let you remediate those changes within the Kobiton remediation interface. Maintaining tests can be infuriating, so I see this as something that is going to completely change the mobile testing game!
Within the fast-paced DevOps world, it can be overwhelming at times to keep up with new processes, methods, tools, and frameworks. It’s important to establish the key pillars of testing, build the QA toolkit that best suits you, and find the practices and strategies that work best for you and your team. Luckily, there are amazing resources, communities, and conferences, like Odyssey, that can help you learn about new and existing trends, methods, and tools so that you never miss a beat. And, bonus! You get to meet and connect with pretty stellar, like-minded individuals, and you have the power to encourage and enable others in the same pursuit.
The recordings from Odyssey are available now!