AI for QA – Simple Workflows That Actually Help
Table of Contents
- The Core Principle: Make AI Explain Itself
- Workflow 1: Code Changes to Test Cases
- Workflow 2: Test Cases -> Automated Scripts
- Workflow 3: API Testing
- Workflow 4: Test Type Coverage
- Workflow 5: Let AI Write Your Prompts
- Building Your Own Process
- What You Don’t Need
- The Catch
- The Take
- A Note on Context
You don’t need to be a prompt engineer. You don’t need to know how to code. You don’t need to understand how large language models work.
You just need to know what you’re trying to achieve and be willing to ask for help.
That’s the thing about AI tools right now. They’re genuinely useful for QA work, but most of the content out there makes it sound complicated. It isn’t. If you can write a Jira comment, you can use AI to make your testing better.
This article is for QA professionals, junior or senior, manual or automated, who want practical ways to use AI in their day-to-day work. No magic prompts. No complex setups. Just workflows that save time and improve quality.
The Core Principle: Make AI Explain Itself
Before we get into specific workflows, there’s one habit that will make everything else work better.
Always ask AI to tell you what it did, how it did it, and why.
This isn’t optional. It’s how you confirm whether the AI actually helped you or wasted your time. If the AI can’t explain its reasoning, or if the reasoning doesn’t make sense, you’ve learned something important: don’t trust that output.
For example, if you ask AI to generate test cases and it gives you a list, follow up with: “Explain why you chose these test cases and what risks they’re designed to catch.”
If the explanation is solid, you’ve got useful test cases and you’ve learned something. If the explanation is weak or generic, you know to dig deeper or try a different approach.
This habit also helps you learn. Over time, you’ll start anticipating what good test coverage looks like because you’ve seen AI explain its reasoning hundreds of times. Still, you should always verify.

Workflow 1: Code Changes to Test Cases
This is probably the quickest win for any QA working alongside developers.
The scenario: A developer has made changes to the codebase. You need to understand what changed and create test cases for it.
The old way: Read through the Jira ticket, try to understand the code changes, mentally map them to functionality, write test cases based on your interpretation. Then discover, once you see the implementation, that it isn’t what you thought.
The AI way:
- Open VS Code with Claude Code (or your preferred AI assistant)
- Point it at the code changes (the diff, the PR, or the changed files)
- Ask: “Review these code changes and create test cases for the Jira ticket [ticket number/description]. Include happy path, unhappy path, edge cases, and any boundary conditions. Explain why each test case is needed.”
That’s it. The AI will analyse the code changes, identify what functionality is affected, and generate test cases with reasoning.
Why this works: The AI can read code faster than you can, and it doesn’t miss things because it got distracted or tired. It also doesn’t have the curse of knowledge that developers sometimes have, where they assume certain things are obvious.
The key question to ask: “What scenarios might this change break that aren’t covered by these test cases?”
This catches the gaps. AI is good at generating obvious test cases. It’s less good at imagining weird edge cases unless you explicitly ask. And you can’t always cram everything into a single prompt; things get lost.
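Pointing the AI at exactly what changed helps here. As a minimal sketch (the file names and diff text are hypothetical), you can pull the changed files out of a `git diff` before handing them over, so the AI reviews the right scope:

```python
# Sketch: list the files touched by a unified diff so you can point the
# AI at exactly what moved. SAMPLE_DIFF stands in for real `git diff` output.
SAMPLE_DIFF = """\
diff --git a/src/checkout.py b/src/checkout.py
--- a/src/checkout.py
+++ b/src/checkout.py
@@ -10,6 +10,9 @@
diff --git a/src/discounts.py b/src/discounts.py
--- a/src/discounts.py
+++ b/src/discounts.py
"""

def changed_files(diff_text):
    """Return the post-change paths from each 'diff --git' header line."""
    return [line.split(" b/")[-1]
            for line in diff_text.splitlines()
            if line.startswith("diff --git")]

print(changed_files(SAMPLE_DIFF))
```

Feeding only the affected files (plus the ticket description) keeps the AI focused and makes its “what might this break?” answer easier to verify.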
Workflow 2: Test Cases -> Automated Scripts
Once you have test cases, you can take it further.
The scenario: You have manual test cases and want to automate them, but writing automation scripts takes time and you’re always playing catch-up with new changes.
The AI way:
- Give the AI your test cases
- Ask: “Convert these test cases into Playwright test scripts. Use [your project’s conventions/patterns]. Explain the structure and any assumptions you’ve made.”
The AI will generate automation code that you can review, adjust, and integrate into your test suite.
Why this matters: The n-1 problem is real. Your automation suite is always one release behind, or worse, because writing and confirming scripts takes time. AI dramatically reduces that time, helping you stay current with changes instead of perpetually catching up.
Important: Don’t just copy and paste the output. Review it. Run it. Understand it. The AI might make assumptions about your application that aren’t correct. But even if you need to fix things, you’re starting from 80% done instead of 0%.
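To make the review step concrete, here’s a hedged sketch of what AI-converted output can look like, with the assumptions it made flagged for you to check. The `login` function is a stand-in for driving the real application (in practice these would be Playwright steps), and the validation messages are exactly the kind of guess the AI makes that you must verify against the actual UI:

```python
# Hypothetical AI-generated tests converted from manual test cases.
# Comments mark the assumptions a reviewer should confirm.

def login(username, password):
    """Stand-in for the application under test (a real suite would
    drive the login page via Playwright)."""
    if not username or not password:
        return "Username and password are required"  # assumed copy -- verify
    if username == "qa_user" and password == "correct-horse":
        return "Welcome"
    return "Invalid credentials"  # assumed copy -- verify

def test_empty_fields_show_validation_error():
    # The AI assumed the exact validation message; check it against the UI.
    assert login("", "") == "Username and password are required"

def test_wrong_password_is_rejected():
    assert login("qa_user", "oops") == "Invalid credentials"

if __name__ == "__main__":
    test_empty_fields_show_validation_error()
    test_wrong_password_is_rejected()
    print("all checks passed")
```

The structure is usually sound; it’s the assumed details (selectors, messages, test data) where the 20% of fixing happens.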

Workflow 3: API Testing
The same approach works for API changes.
The scenario: The API has been updated. You need to test the new or modified endpoints.
The AI way:
- Give the AI the API documentation, Swagger/OpenAPI spec, or even just the endpoint details
- Ask: “Create test cases for this API endpoint covering valid requests, invalid requests, authentication scenarios, error handling, and edge cases. Explain the purpose of each test.”
For automation, follow up with: “Generate these as Playwright API tests” or “Create a Postman collection for these test cases.”
What to watch for: AI is good at generating standard API test patterns (valid input, invalid input, missing fields, wrong types). It’s less good at understanding your specific business logic. Always review the generated tests against what the API is actually supposed to do, not just what the spec says.
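Those standard patterns are mechanical enough that you can see the shape of them in a few lines. A minimal sketch, assuming a hypothetical `POST /users` endpoint with made-up field names, of enumerating “missing required field” cases from a spec fragment:

```python
# Sketch: derive "missing required field" test cases from a (hypothetical)
# OpenAPI-style fragment -- the mechanical enumeration AI does well.
spec = {
    "path": "/users",
    "method": "POST",
    "required": ["email", "password"],
    "optional": ["display_name"],
}

def missing_field_cases(spec):
    """One case per required field: a valid body with that field removed."""
    base = {f: f"<valid {f}>" for f in spec["required"] + spec["optional"]}
    cases = []
    for field in spec["required"]:
        body = {k: v for k, v in base.items() if k != field}
        cases.append({"name": f"missing_{field}", "body": body, "expect": 400})
    return cases

for case in missing_field_cases(spec):
    print(case["name"], "->", case["expect"])
```

What the spec can’t tell you (or the AI) is the business logic: whether a duplicate email should return 400 or 409, say. That’s the part you still have to review.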
Workflow 4: Test Type Coverage
Sometimes you need help thinking through what types of testing are needed.
The scenario: You’re planning testing for a new feature and want to make sure you haven’t missed anything.
The AI way:
Ask: “For this feature [describe feature], what types of testing should be considered? Include functional, security, performance, accessibility, and any other relevant categories. For each type, explain why it’s relevant and suggest specific test approaches.”
The AI will give you a structured breakdown of testing types with reasoning. This is particularly useful for:
- Planning test coverage for new features
- Preparing for test reviews or sign-offs
- Making sure you haven’t forgotten something obvious
- Justifying your test approach to stakeholders
Pro tip: You can also flip this around. Ask: “What are the risks that testing might not catch for this feature?” This surfaces the gaps and limitations, which is valuable information for risk discussions.
Workflow 5: Let AI Write Your Prompts
Here’s something that sounds circular but actually works.
If you’re not sure how to ask AI for what you need, ask AI to help you ask.
Example: “I need to test a login feature. Help me write a prompt that will generate comprehensive test cases covering security, usability, and edge cases.”
The AI will generate a better prompt than you would have written, and you can use that prompt (or refine it) to get better results.
Over time, you’ll build up a collection of prompts that work well for different situations. Save them. Reuse them. Refine them.
Building Your Own Process
The real power comes when you stop treating AI as a one-off tool and start building repeatable workflows.
Here’s what that looks like:
Start simple. Pick one workflow from above and use it consistently for a few weeks.
Save what works. When you find a prompt or approach that gives good results, save it somewhere you can find it again.
Create guide documents. As you learn what context helps AI give better answers, document it. What does AI need to know about your application? Your testing standards? Your automation framework?
Build coverage templates. Create prompt templates for different test types:
- Functional testing prompts
- Security testing prompts
- Performance testing prompts
- Accessibility testing prompts
- API testing prompts
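Once saved prompts accumulate, even a tiny template library pays off. A sketch, where the template text and placeholders are illustrative rather than canonical prompts:

```python
# Sketch of a reusable prompt-template library. Template wording and
# placeholder names are illustrative; refine them as you learn what works.
PROMPTS = {
    "functional": (
        "Create functional test cases for {feature}. Cover happy path, "
        "unhappy path, edge cases, and boundary conditions. Explain why "
        "each test case is needed."
    ),
    "api": (
        "Create test cases for the {method} {endpoint} endpoint covering "
        "valid requests, invalid requests, authentication, error handling, "
        "and edge cases. Explain the purpose of each test."
    ),
}

def build_prompt(kind, **details):
    """Fill a saved template with the specifics of today's task."""
    return PROMPTS[kind].format(**details)

print(build_prompt("functional", feature="saved payment methods"))
```

Even a plain text file of templates works; the point is that a refined prompt becomes a team asset instead of something retyped from memory.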
Share with your team. Once you’ve got workflows that work, share them. This isn’t, and shouldn’t be, secret knowledge. The whole team benefits when everyone can use AI effectively.
What You Don’t Need
You don’t need to understand how AI works internally (though it does help).
You don’t need to learn “prompt engineering” as a separate skill.
You don’t need to memorise special phrases or magic words.
You don’t need to know how to code (though it helps for reviewing automation scripts).
You just need to:
- Know what you’re trying to achieve
- Be specific about what you want
- Ask AI to explain its reasoning
- Review and validate the output
- Keep thinking critically about the inputs and outputs
That’s it. The AI does the heavy lifting. Your job is to guide it and verify the results.
The Catch
AI isn’t magic. It makes mistakes. It hallucinates. It sometimes generates confident-sounding nonsense.
That’s why the “explain yourself” habit matters so much. When AI explains its reasoning, you can spot when something’s off. Sometimes the AI even catches its own hallucination mid-explanation. When it just gives you output without explanation, you’re flying blind.
Also, AI doesn’t know your specific context unless you tell it. The more context you provide (your application, your users, your risks, your standards), the better the results.
And finally, AI is a tool, not a replacement for thinking. It can generate test cases, but it can’t tell you whether those test cases actually matter for your users. That’s still your job.
The Take
AI tools are genuinely useful for QA work right now. Not in some hypothetical future. 100% right now.
You can use them to:
- Turn code changes into test cases
- Generate automation scripts from manual tests
- Create comprehensive API test coverage
- Plan test approaches for new features
- Build prompts that get better results over time
The barrier to entry is lower than most people think. If you can describe what you want in plain English, you can use AI effectively.
Start with one workflow. See how it goes. Build from there. It’s not perfect and there’s much more we can do, but over time you’ll find what works for you.
The goal isn’t to become an AI expert. The goal is to get better testing done in less time. AI is just one more tool that helps you do that, allowing you to focus on critical thinking strategies that can’t be prompted. Yet.
A Note on Context
Every business and every project is different. What works in one place won’t work in another, and that’s the point.
Nothing here is meant to be a step-by-step prescription. It’s guidance, drawn from my own experiences, and deliberately kept general to avoid pointing fingers anywhere.
Take what’s useful, ignore what isn’t, and adapt it to your own context. Or as Joe Colantonio always says: “Test everything and keep the good.”

