There’s a conversation happening in QA circles, and it’s hard to ignore. AI is coming for testing. AI automation will replace testers. The QA role is dead…blah blah blah.
I’ve been in QA for over 20 years. I’ve seen hype cycles come and go. I am not saying AI isn’t significant; it most certainly is. But I’ve learned to sit back, observe, and form proper questions and understandings instead of jumping to conclusions.
So here’s the question I keep coming back to: is AI actually replacing QA teams, or is it just shifting what QA teams need to do? And if it is a shift, does it push QA all the way left, so QA sits next to Product Owners and Project Managers rather than underneath them, free to dig deep into planning and strategy, risk assessment, and test design while AI handles most of the execution?
What AI Is Genuinely Good At
First of all, let’s be fair to the power-hungry technology. AI is proving genuinely useful in some areas of testing, as well as in other engineering disciplines.
Test generation is one. AI can look at code, analyse patterns, and suggest test cases faster than a human can write them. I’ve been trying this out on various web applications recently, and it’s scary quick. No, it’s not perfect, but it’s a solid starting point. Think of the 80/20 rule: AI does the legwork on 80% of the work, and QA finesses the remaining 20%.
Data analysis is another. AI can sift through thousands of test results, spot anomalies, and highlight patterns that would take a human hours or days to find.
Pattern recognition more broadly. AI can identify flaky tests, predict where defects are likely to appear based on historical data, and flag areas of risk. It can even catch and mitigate security, accessibility, and performance issues with the proper guidance.
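To make the flaky-test idea concrete, here’s a minimal sketch of flagging flaky tests from pass/fail history. The function name, data shape, and thresholds are my own illustrative choices, not taken from any particular tool; a real system would also weigh recency and environment.

```python
from collections import defaultdict

def find_flaky_tests(runs, min_runs=5, flip_threshold=0.3):
    """Flag tests whose outcome flips often across consecutive runs.

    runs: list of (test_name, passed) tuples, ordered oldest to newest.
    A test is considered flaky when the fraction of consecutive-run
    outcome changes exceeds flip_threshold.
    """
    history = defaultdict(list)
    for name, passed in runs:
        history[name].append(passed)

    flaky = []
    for name, outcomes in history.items():
        if len(outcomes) < min_runs:
            continue  # not enough data to judge this test yet
        # Count how many times the result changed between adjacent runs.
        flips = sum(a != b for a, b in zip(outcomes, outcomes[1:]))
        if flips / (len(outcomes) - 1) > flip_threshold:
            flaky.append(name)
    return flaky
```

A test that alternates pass/fail gets flagged; one that fails consistently does not, because a consistent failure is a real defect, not flakiness. That distinction is exactly the kind of pattern AI-assisted tooling can surface at scale.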
These are genuine strengths. If you’re not exploring how AI can assist in these areas already, then you’re probably missing a ton of value.

What AI Still Can’t Do (yet)
But here’s where it gets more complicated.
AI doesn’t understand context. It doesn’t know that X feature is critical because your biggest customer uses it daily, or that Y workflow matters more during end-of-financial-year processing. It doesn’t know the history of why something was built a certain way, or worse, the politics behind a decision.
AI doesn’t exercise proper judgement. It can tell you what’s different, but it can’t tell you whether that difference matters for every nuanced business case. It can’t weigh up the risk of releasing with a known issue versus delaying a launch. It may guess (read: hallucinate) about these issues, but that still requires a human conversation.
AI can’t communicate with stakeholders. It won’t sit in a room with a product owner and explain why a release isn’t ready (chat screens don’t count). It won’t mentor a junior tester through their first defect triage, or explain why something wasn’t implemented when there’s no documentation saying so. And it definitely won’t negotiate scope with a delivery lead on its own when time runs short.
And perhaps most importantly, AI really doesn’t know what not to test without strategic planning and oversight. Knowing where to focus, where to cut, and where the real risk lives comes from experience, relationships, and understanding the business’s applications and systems. That’s not something you can prompt your way through without a high degree of risk.
I will say this now: one day, like in the movies, AI (and robots) will genuinely be good enough to take practically every job. But once you see through the hype and know AI’s history, its various iterations and shifting meanings, you know that day is still a ways off. Unless, like Skynet, it surprises us all overnight.
The Integration
There’s another angle that doesn’t get talked about enough: how realistic is it to actually implement AI in the way the hype suggests?
In my experience, most teams aren’t starting with a blank slate. They’ve got legacy systems, existing frameworks, technical debt, and a dozen other priorities competing for attention. Bolting AI onto that is not straightforward, period.
Sometimes the effort to integrate AI into an existing test ecosystem outweighs the benefit, at least in the short term. You end up bending over backwards to make it fit.
Which raises a different question: if you were starting fresh, would you go AI-first? Maybe. But most of us aren’t starting fresh. We’re working with what we’ve got, and that changes the calculus.
I’m not saying don’t explore AI. I’m saying be realistic about the investment, the integration effort, and whether it genuinely solves a problem you have. Or is it just a problem the industry says you should have, so you jump in with both feet to say “we use AI”?

The Shift
Here’s what I think is actually happening.
The role of QA isn’t disappearing. It’s shifting, a long way left, maybe all the way.
For years, the core of QA work has been doing testing: writing test cases, executing them, logging defects, running regression, and so on. Hands on the keyboard, working through scenarios, real or perceived.
AI changes that. Not by removing the need for QA, but by changing where QA adds value.
The shift is from doing testing to directing and validating AI-assisted testing. Less time writing every test case manually. More time deciding what to test, reviewing what AI produces, and making judgement calls on risk and quality.
It’s a shift from execution to orchestration.
There’s also a “shift left” angle here. If AI takes on more of the execution, running tests, analysing results, flagging anomalies, then QA’s value moves earlier in the lifecycle. Less time at the end validating builds. More time at the start shaping what gets tested, how, and why, which can more easily feed into regression suites.
That’s not a bad thing. QA has been trying to shift left for years. AI might actually make it happen for real, not by replacing testers, but by freeing them from the repetitive work that kept them anchored to the back end of the sprint.
That’s not a smaller job. It might actually be a harder one. It requires deeper product knowledge, stronger communication skills, AI engineering skills, and the ability to think critically about what AI gets right and what it gets wrong, which means understanding how AI actually works.
The Real Question
So, are we replacing testers? Or are we changing what testers need to be good at?
Spoiler: it’s the latter.
The testers who thrive in an AI-assisted world won’t be the ones who can write the most test cases. They’ll be the ones who can:
- Ask the right questions about risk and coverage
- Evaluate AI-generated tests with a critical eye
- Know when to trust the tooling and when to override it
- Communicate quality status clearly to technical and non-technical stakeholders
- Mentor others through the change
That’s not a diminished role. That’s an evolved, potentially more complex one.
The Take
I am definitely not anti-AI. I’m genuinely curious about where this goes. Over the years I’ve seen tools improve testing, and I expect AI will too…in the right context, with the right guidance and expectations. Every day, I talk and type to AI to get my work done, aiming for it to complete ~80% of the more mundane work, allowing me to focus on the nitty-gritty ~20%. This is the real value, once it truly materialises.
But I’ve also seen hype cycles promise transformation and only deliver incremental improvement. That’s not a criticism; incremental improvement is valuable. But it’s different from “AI will replace your QA team.” Some businesses have already jumped; how long before they jump back, at least to some degree?
QA has evolved before. We moved from waterfall to agile, from manual to automated, from siloed teams to embedded testers. Each shift changed the role, but didn’t eliminate it.
I think AI is the next evolution, not the extinction-level event, yet.
The question isn’t whether QA survives. It’s whether we evolve with it.
A Note on Context
Every business and every project is different. What works in one place won’t work in another, and that’s the point.
Nothing here is meant to be a step-by-step prescription. It’s guidance, drawn from my own experiences, and deliberately kept general to avoid pointing fingers anywhere.
Take what’s useful, ignore what isn’t, and adapt it to your own context. Or as Joe Colantonio always says: “Test everything and keep the good.”
