You tested in three environments, QA and developers communicated throughout the entire SDLC, and a P0 bug still hit production. Is the process to blame? Before you tear apart your workflow, check your cognitive biases.
My organization recently had a high-severity production defect. Our application is complex: as testers, we need to understand the grant process from the perspective of multiple user roles, the intricacy of the system, and the rules for how grants are managed and funded.
With a system this big and complicated, I often catch myself feeling surprised and impressed at how effective our testing is. However, effectiveness does not equal perfection, and occasionally new changes lead to failures in production.
Standard processes only catch standard problems
We’re currently implementing a new wide-reaching feature that gives grant funders an additional layer of management options for their grant recipients. While this is a huge win for our users, it’s a big old bundle of risks for us — conflicting requirements in user roles, complex dependencies, mismatched user settings, and legacy work.
The change went through all our standard processes. The requirement went through story refinement with the whole team. It was tested in the test environment, the stage environment, and even in production, yet the defect was still missed. Several people asked me how the team could possibly miss this after so much testing, but to me the reason was obvious: we forgot to factor in our cognitive biases.
Biases create risky testing gaps
Cognitive biases are an Achilles heel for development teams. Our biases put us in a box of what we already know, so our test coverage is limited to that box as well.
In this case, we were biased by the new state of the application, so we designed our tests around how data would be handled after the change was implemented. We failed to consider how legacy data would be affected by the change. As a result, we only tested two of the three potential states this feature could be in.
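One cheap guard against this kind of anchoring is to enumerate every state a record can be in before designing tests, then check the designed cases against that list. Here is a minimal sketch; the state names and the shape of a test case are hypothetical stand-ins, not our actual test model:

```python
# Hypothetical states a record can be in relative to the new feature.
# "legacy" is the pre-change state that anchoring made us overlook.
DATA_STATES = ["legacy", "migrated", "new"]

def missing_states(test_cases):
    """Return the states that no designed test case exercises.

    A cheap bias check to run before a test suite is finalized.
    """
    covered = {case["data_state"] for case in test_cases}
    return [state for state in DATA_STATES if state not in covered]

# Our original design only covered the post-change states:
designed = [{"data_state": "migrated"}, {"data_state": "new"}]
print(missing_states(designed))  # -> ['legacy']
```

The point is not the code itself but the order of operations: the full list of states exists before test design starts, so the gap surfaces as an explicit check rather than depending on someone remembering to ask.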
A study published by the ACM identified more than two dozen cognitive biases that can cloud our thinking and judgment. Testers are especially at risk of falling prey to authority bias, the anchoring effect, and inattentional blindness. As testers, we often defer to developers, viewing them as an authority on how the system will behave and react to changes. When authority bias is combined with the anchoring effect caused by an overreliance on specific information, we can easily miss what are, in hindsight, obvious points of failure. We can’t completely eliminate biases, but we can develop the ability to identify and overcome them in our thinking.
Battling bias takes continuous practice
How can we fight against our own minds? The short answer is practice. The more meaningful answer is practice, plus tools and resources such as:
Thinking, Fast and Slow – Kahneman helps us understand the two systems our brains use to make decisions. System 1 is fast, emotional, and relies on intuition; we use it for snap decisions like swerving to avoid an accident on the road or picking a brand at the grocery store. System 2 is slow and deliberate, applying logic and rationality to decision making. System 2 is our problem-solving brain, helping us work through complex challenges like identifying gaps in requirements. Both systems are critical for thinking, and understanding which system your brain is using to make a testing decision will help you combat biases.
Get Familiar with Biases – Myroslava Zelenska’s 4-part series on cognitive biases is a great resource. Each part of the series focuses on a job role and the biases that role is most susceptible to. For instance, developers are more likely to follow trends due to the bandwagon effect, while managers overestimate agreement with their opinions, caused in part by the false-consensus effect. By understanding which biases we’re most likely to be fooled by, we’re better able to identify when we’ve put our software at increased risk.
Designated Dissenter – Rachel Kibler introduced me to the concept of a designated dissenter during her keynote at the Romanian Testing Conference in 2024. The designated dissenter is tasked with asking tough questions and challenging the team in meetings and decision making. They ask questions like “What happens when the user takes an alternative path?” and “How is old data handled with this new change?” The dissenter’s job is to spark thinking and proactively look for unknown unknowns.
Build bias-checking into the testing strategy
Perfect software is a myth and occasional failures are inevitable, but that doesn’t mean we shouldn’t do all we can to prevent defects. My team and I are making small changes to prevent a miss like this in the future. Proactive conversations about the various states, data, and settings will be built into our process. When designing test cases, we’ll use pairwise and three-way (triwise) combinations and decision tables to cover more ground. And while we can’t account for every unknown, we can make a bias checklist part of our testing strategy so we’re always forced to confront our biases.
The testing process isn’t always the problem; sometimes it’s how you think about testing. So check your biases, early and often.