In Part 1, I talked about checkbox testing: why it exists, what it catches, and what it misses. The short version is that testing to tick a box creates an illusion of quality without delivering real confidence.
But pointing out problems is easy. The harder question is: what does good testing actually look like?
This is Part 2. Here, I’ll cover the mindset shift, the practical approaches, and some examples from my own experience. The goal isn’t perfection. It’s intentional, focused testing that actually adds value.
The Mindset Shift
The first thing to understand is that good testing isn’t about doing more. It’s about doing the right things.

Checkbox testing assumes that coverage equals confidence. More test cases, more execution, more boxes ticked. But that’s not how risk works. You can have thousands of test cases and still miss the defect that matters most.
Good testing starts with a different question. Not “what can we test?” but “what could go wrong, and what would hurt the most if it did?”
That’s a mindset shift. It moves testing from a checklist activity to a thinking activity. It requires understanding the product, the users, and the business context. It requires making judgement calls about where to focus attention and what to leave alone.
That’s harder than following a script. But it’s also more valuable.
Risk-Based Thinking
I’ll go deeper into risk-based testing in a future article, but here’s the core idea.
Not all parts of your product carry the same risk. Some features are critical. Some are rarely used. Some are stable and haven’t changed in years. Some are brand new and barely understood.
Good testing allocates effort based on risk, not uniformity. You test the critical, volatile, high-impact areas more thoroughly. You test the stable, low-risk areas more lightly, or not at all.
This sounds obvious, but it’s surprisingly hard to do in practice. It requires someone to actually assess risk, which means understanding the product, the users, and the business priorities. It requires the confidence to say “we’re not going to test this area deeply” and the credibility to back that up.
The payoff is focus. Instead of spreading effort thin across everything, you concentrate on what matters. You find more of the defects that would actually hurt, and you waste less time on the ones that wouldn’t.
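To make the idea concrete, here’s a minimal sketch of how risk-based allocation might look if you wrote it down. The feature names, scores, and cut-offs are purely illustrative assumptions on my part, not a prescribed model; the point is simply that effort follows a score of impact times likelihood rather than being spread evenly.

```python
# Hypothetical sketch of risk-based prioritisation: score each area by
# impact x likelihood, then map the score to a testing depth.
# All names and thresholds here are illustrative, not a prescription.

features = [
    # (name, business impact 1-5, likelihood of failure 1-5)
    ("payment processing", 5, 4),
    ("new reporting module", 4, 5),
    ("admin settings", 2, 2),
    ("legacy export, unchanged for years", 3, 1),
]

def risk_score(impact, likelihood):
    """Simple multiplicative risk model: higher score = more attention."""
    return impact * likelihood

def depth(score):
    """Map a risk score onto a testing depth, using illustrative cut-offs."""
    if score >= 15:
        return "thorough"
    if score >= 6:
        return "targeted"
    return "smoke-test only"

# Rank features by risk and report the suggested depth for each.
for name, impact, likelihood in sorted(
    features, key=lambda f: risk_score(f[1], f[2]), reverse=True
):
    score = risk_score(impact, likelihood)
    print(f"{name}: score {score} -> {depth(score)}")
```

In practice the scoring is a conversation, not a spreadsheet formula, but even a rough model like this forces the question the section is really about: which areas get thorough attention, and which are smoke-tested or skipped on purpose.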
Knowing Your Product and Your Users
You can’t test well if you don’t understand what you’re testing.

That sounds obvious too, but I’ve seen plenty of QA teams operate at arm’s length from the product. They receive requirements, write test cases, execute them, and report results. They never really understand why the feature exists, who uses it, or what success looks like.
Good testing requires context. It requires knowing that this workflow spikes during end-of-financial-year processing. That this integration is fragile because the third-party API is unreliable. That this feature matters disproportionately to your biggest customer, who depends on it.
Some of this comes from documentation. Most of it comes from relationships. Talking to product owners, developers, support teams, and actual users. Asking questions. Being curious.
The testers who add the most value are the ones who understand the product as well as anyone. They don’t just find defects. They find the defects that matter.
The Courage to Not Test Everything
Here’s something that doesn’t get said enough: good testing includes knowing what not to test.
There’s always pressure to test more. More coverage, more scenarios, more edge cases. It feels safer. If something goes wrong, at least you can say you tested it.
But testing everything isn’t possible. And trying to test everything means you test nothing well. Effort gets spread so thin that even the critical areas don’t get proper attention.
Good testing requires the courage to make choices. To say “we’re not going to test this because the risk is low and the effort isn’t justified.” To accept that some defects will escape because catching them would cost more than they’re worth.
That’s not negligence. That’s pragmatism. Every organisation has limited time and resources. The question isn’t whether to make trade-offs; it’s whether you make them intentionally or accidentally.
Intentional trade-offs, documented and communicated, are a sign of mature QA. Accidental trade-offs, where things just don’t get tested because there wasn’t time, are a sign of checkbox testing in disguise.
Building Trust Through Transparency
One of the biggest differences between checkbox testing and good testing is how you communicate.
Checkbox testing communicates in absolutes. “We tested it.” “It passed.” “QA signed off.” These statements sound confident, but they don’t mean much. Passed what? Tested how? Signed off on what basis?
Good testing communicates in probabilities and trade-offs. “We’ve tested the core workflows thoroughly and we’re confident there. We’ve done lighter testing on the admin features because they’re lower risk. There are two known issues we’re accepting for this release because the fix would delay launch and the impact is low.”
That’s a different kind of confidence. It’s not “everything is fine.” It’s “here’s what we know, here’s what we don’t know, and here’s why we think we’re ready.”
Stakeholders appreciate this. It gives them real information to make decisions. It builds trust over time because you’re not overselling or hiding uncertainty. When you say you’re confident, they believe you because you’ve been honest about the times you weren’t.
I know it’s not always easy. Some stakeholders will have questions, and those questions can erode the confidence others had and force you into ever more detail. Depending on the stakeholders, the discussion can turn grey and potentially fruitless.
What This Looks Like in Practice
Let me give you a few examples from my own experience.
Prioritising ruthlessly. At one organisation, we had limited time before a major release. Instead of trying to test everything superficially, we identified the three highest-risk areas and tested those thoroughly. Everything else was smoke tested only. We found critical defects in two of the three areas. If we’d spread our effort evenly, we might have missed them.
Saying no to low-value testing. I once inherited a regression suite with over 2,000 test cases. Most were outdated, redundant, or covered areas that hadn’t changed in years; many barely qualified as regression tests at all. We cut the suite down to 400 focused cases. Execution time dropped, but defect detection didn’t. We were testing smarter, not harder.
Being honest about gaps. On a project with aggressive timelines, we couldn’t test an integration layer as thoroughly as I wanted. Instead of pretending we had, I flagged it early as a risk in the release report. We shipped with monitoring in place. A defect did escape, but we caught it quickly and fixed it before users were impacted. Leadership appreciated the transparency, though it still took some detailed explaining.
Investing in understanding. On a complex project, I spent the first few weeks just learning the product. Sitting with users, reading documentation, watching videos, asking questions. That investment paid off throughout the engagement. I knew where the risks were because I understood how the system was actually used, not just how it was supposed to work.
None of these are revolutionary. They’re just examples of treating testing as a discipline rather than a formality.
The Return on Investment
Here’s the thing that often gets missed: good testing doesn’t have to cost more. In many cases, it costs less.
Checkbox testing looks cheap upfront, but the hidden costs add up. Escaped defects, production incidents, rework, support burden, reputational damage. These are real costs, even if they don’t show up in the QA budget.
Good testing catches more of the defects that matter, earlier in the cycle when they’re “cheaper” to fix. It reduces rework and enables better decisions. It builds confidence that reduces the need for last-minute scrambles. It creates a reputation for quality that makes stakeholders trust the team.
The investment isn’t necessarily more testers or more time. It’s smarter allocation of the testers and time you already have. It’s focus, judgement, and the willingness to make intentional choices about where to spend effort.
This isn’t to say it’s all on QA. The entire business needs to buy in and allow these processes to happen, or they become just another checkbox.
The Take
Checkbox testing exists because it’s easy and it satisfies the immediate requirement. But it doesn’t deliver real confidence, and eventually the gaps show.
Taking testing seriously means shifting from “did we test it?” to “are we confident?” It means understanding risk, knowing your product, making deliberate choices, and communicating honestly.
It’s not about testing everything. It’s about testing what matters, with intention and focus. And building the kind of trust where your sign-off actually means something.
That’s what good QA looks like. Not more boxes to tick, but better judgement about which boxes matter.
A Note on Context
Every business and every project is different. What works in one place won’t work in another, and that’s the point.
Nothing here is meant to be a step-by-step prescription. It’s guidance, drawn from my own experiences, and deliberately kept general to avoid pointing fingers anywhere.
Take what’s useful, ignore what isn’t, and adapt it to your own context. Or as Joe Colantonio always says: “Test everything and keep the good.”

