The Ghost in the Machine
In the first article of this series, I talked about AI and where it’s genuinely useful in QA. In the second, I covered reporting and knowing your audience. In the third, metrics and what’s actually worth measuring.

But there’s a thread running through all of it that I haven’t addressed directly: the human element.
We’ve talked about AI doing the “doing.” We’ve talked about metrics doing the “measuring.” But who does the thinking? Who decides what matters? Who sits in the room and says, “I know the numbers look fine, but something feels off”?
That’s the bit that doesn’t fit neatly into a dashboard. And it’s the bit that matters most.
In an industry increasingly obsessed with automation and data, it’s easy to forget that software is built by humans, for humans. The best QA professionals I’ve worked with aren’t just test executors. They’re the bridge between code and the real world, tech and business. They translate, advocate, question, and occasionally push back when no one else will, and I love them for it!
That’s not a skill you automate. That’s a skill you develop with experience.
Empathy as a Technical Skill
Here’s something that doesn’t get said enough: empathy is a technical skill.
AI can check if a button works. It can verify that clicking “Submit” sends the form. What it can’t tell you is whether the workflow makes sense to a stressed-out user at 11pm trying to meet a deadline. It can’t tell you that the error message is technically accurate but emotionally useless. It can’t feel the frustration of a user who’s clicked three times and still doesn’t know if anything happened.
That requires putting yourself in someone else’s shoes. Understanding not just what the system does, but how it feels to use it.
This is especially true when you consider local context. In Australia, that might mean understanding end-of-financial-year stress, when every accounting firm is under pressure and users have zero patience for clunky workflows (I've been that user myself). Or it might mean recognising that regional users are dealing with slower connections and older devices, and your slick metro-tested UI doesn’t perform the same way in Broken Hill.
AI doesn’t know any of that. You do, or you can learn it, far better than AI ever will.
Empathy isn’t about being “nice.” It’s about catching the problems that don’t show up in a test case. The ones that only surface when you ask, “What would this feel like if I were tired, distracted, or under pressure? Would I be able to use it easily, or would I leave feeling frustrated and irritated?”
That’s a skill worth developing. And it’s one that will only become more valuable as AI takes over the mechanical parts of testing.
The Art of Advocacy
There’s an old perception of QA as the team that finds bugs. The people who say “no” at the end of a sprint. The gatekeepers, or the problem child of the team.

That model is outdated, and honestly, it was never that effective.
The modern QA role, the one that actually adds value, is about advocacy. Not just finding problems, but championing quality throughout the entire process and the entire business. Not waiting until the end to raise concerns, but being in the room early, asking the awkward questions before code is even written.
Quality is a culture, not a department. We are all accountable for quality.
That means building relationships. It means being someone that Product Owners and Developers actually want to talk to, not someone they avoid because they know you’ll just point out what’s wrong.
It means learning to negotiate. When time runs short (and it always does), someone has to help the team decide what to cut, what to keep, and what risks are acceptable. That’s not a technical decision. That’s a human one, and it requires trust.
I’ve seen testers who were technically excellent but didn’t or couldn’t influence anyone. And I’ve seen testers with average technical skills who transformed the quality culture of their teams, simply because they knew how to communicate, build trust, and advocate without pissing others off.
The second group had more impact. Every time.
I know it’s not as easy as it sounds. I’ve been in both camps at various times: able to influence easily, and at other times not, no matter what I tried. Sometimes it comes down to the business’s willingness to change, genuinely change, not just talk about it. I’ve had senior people tell me I was right and then do nothing to help. That’s frustrating, and it happens.
But the skill is still worth building. Even when the environment doesn’t cooperate, knowing how to advocate well gives you options. And when you do land somewhere that’s ready to listen, you’ll be ready to lead.
Translation: Tech to Business and Back Again
One of the most underrated QA skills is translation.
A great QA professional speaks multiple languages: Developer, Product Manager, End User and, with increasing need, Technical. They can take a technical defect and explain why it matters in business terms. They can take a vague stakeholder concern and turn it into something a developer can actually investigate. They can take Developer speak, with all its technical jargon and acronyms, and simplify it into terms everyone in the room can understand.
This matters more than people realise, especially now.
I’ve sat in meetings where a critical issue was argued over and ultimately dismissed because it was explained in purely technical terms that leadership didn’t connect with. And I’ve seen minor issues escalated to crisis level because someone framed them in a way that triggered alarm bells.
The facts were the same. The framing (and audience) made the difference.
In previous articles, I talked about metrics and reporting. But metrics are only half the story. The other half is narrative: the ability to sit in a room and say, “The data looks good, but the feel of this release isn’t right yet.” I have been there many times, trying to translate the data to make sense of that uneasy feeling.
That’s not something you can put in a Jira ticket. It’s pattern recognition, intuition built from experience, and the confidence to voice it even when the numbers say otherwise.
AI can give you the data. It can’t give you the narrative.
The Psychology of the Bug Report
Let’s talk about something that sounds trivial but isn’t: how you write a bug report.
A bug report is, at its core, a critique of someone’s work. You’re telling a developer that something they built doesn’t work as expected. That’s a message that can land well or land badly, depending on how it’s delivered.
AI is blunt. It flags failures without context or diplomacy. That’s fine for automated alerts, but it’s not how you build a collaborative relationship with a development team.
The best bug reports I’ve seen do a few things well:
They’re clear without being condescending. They describe the problem, the steps to reproduce, and the expected versus actual behaviour, without implying that the developer is an idiot for missing it. And as we know, the fault isn’t always theirs – missing or vague acceptance criteria, anyone?
They provide context. Why does this matter? Who’s affected? Is this a blocker or a minor annoyance? A developer is more likely to prioritise a fix when they understand the impact.
They assume good intent. “This might be an edge case we didn’t consider” lands better than “This is broken.” Same information, different tone.
The old “us versus them” mentality, QA versus Dev, is a relic, and it needs to end. It belongs in the past alongside waterfall documentation and manual regression suites that take weeks to run.
Modern QA is about shared ownership. You’re on the same team, working toward the same goal. The bug report isn’t a gotcha. It’s a contribution to software quality.
How you say it matters as much as what you say.
Mentorship and the Next Generation
If AI is going to handle 80% of the mundane execution work, what should humans be doing with that freed-up time?
One answer: mentoring.
I’ve spent a good chunk of my career building QA processes or teams, often from scratch. And the thing that makes the biggest difference isn’t the tools or the frameworks. It’s the people, and whether they’ve had someone invest in their development.
Junior testers don’t just need to learn how to write test cases or use Jira. They need to learn how to think. How to ask the right questions. How to look at a feature and instinctively know where the risks are. How to push back respectfully when something doesn’t feel right.
That’s not something you pick up from documentation. It comes from working alongside someone who’s been there, who can explain not just the “what” but the “why.”
If AI gives us back time, we should be spending some of that time on the next generation. Teaching them to think critically. Helping them develop the soft skills that will define their careers. Showing them that QA isn’t just about finding bugs; it’s about advocating for users, building relationships, and asking the questions no one else is asking.
That’s how you build a team that lasts. That’s how you build a discipline that lasts. Better yet, teach Developers and Product Managers the value of our thinking; that helps embed quality into culture and process.
The Take
Here’s how I see it.
AI gives us the time. It handles the repetitive work, the bulk of execution, the pattern matching at scale. That’s valuable, and we should use it.
Metrics give us the map. They show us where we are, where we’ve been, and whether we’re heading in the right direction. Also valuable, when used well.
But human intuition is the compass. It’s what tells you when the map is wrong. When the numbers look fine but the release doesn’t feel ready. When a decision makes sense on paper but will frustrate users in the real world.
The QA role isn’t dying. It’s evolving into what it was always meant to be: a strategic, human-centric discipline. Less about executing tests, more about guiding quality. Less about finding bugs, more about preventing them. Less about saying “no” at the end, more about shaping decisions from the start.
The testers who thrive in this world won’t be the ones who can click the fastest or write the most test cases. They’ll be the ones who can empathise, communicate, advocate, and mentor.
They’ll be the ones who remember that at the end of every deployment, there’s a person trying to get something done.
That’s the human element. And no amount of automation is going to replace it.
A Note on Context
Every business and every project is different. What works in one place won’t work in another, and that’s the point.
Nothing here is meant to be a step-by-step prescription. It’s guidance, drawn from my own experiences, and deliberately kept general to avoid pointing fingers anywhere.
Take what’s useful, ignore what isn’t, and adapt it to your own context. Or as Joe Colantonio always says: “Test everything and keep the good.”

