AI as an Assistant for Manual Testing

The pace of software development continues to accelerate, and testing teams are expected to keep up without sacrificing quality. But as products grow more complex, traditional QA methods alone often aren’t enough.

While much of the conversation around artificial intelligence (AI) has focused on developers, AI adoption in software testing arguably offers clearer benefits. Developers using AI can sometimes generate excessive or unoptimized code. QA teams, in contrast, can use AI to streamline workflows without compromising precision: generating test cases, reviewing logs, or improving test coverage, all without bloating the product.

According to the latest Stack Overflow Developer Survey, over three-quarters of developers are now using or planning to use AI tools, largely to increase productivity. QA teams can benefit in much the same way, but with less risk and more control.

This blog post explores how AI truly supports QA efforts, what it can’t replace, and how to use these tools, such as TestDevLab’s own Barko Agent, in practical, high-impact ways.

Common QA tasks where AI can help

As a QA specialist, you often deal with repetitive tasks and extensive documentation. AI can be a smart assistant for these day-to-day activities. 

Below is a list of common tasks where AI can help, saving time, reducing errors, and freeing you to focus on critical tasks and quality strategy.

Requirement gathering & clarification

QA engineers typically receive new tasks through tools like Jira or Azure DevOps, often during sprint planning or daily assignments. Once the task is received, the next step is to review the related documentation, such as user stories or product requirement documents (PRDs). If anything is unclear, it’s important to follow up directly with developers or stakeholders to make sure expectations are well understood before testing begins.

During test case review or creation, you might encounter scenarios that seem unclear. Instead of searching the whole PRD yourself, you can ask an AI tool to scan the document and identify whether the behavior is mentioned or not.

Barko Agent prompt sample #1
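
For a quick offline sanity check, the same lookup is also easy to script yourself. A minimal sketch, assuming the PRD is available as a plain text file (the file name and search terms here are hypothetical):

```python
# Quick check: is a behavior mentioned anywhere in the PRD?
# "prd.txt" and the search terms are made-up examples.
from pathlib import Path

prd_lines = Path("prd.txt").read_text(encoding="utf-8").splitlines()

for term in ["password reset", "session timeout", "offline mode"]:
    # Collect every line that mentions the term, case-insensitively.
    hits = [line.strip() for line in prd_lines if term.lower() in line.lower()]
    print(f"{term}: {len(hits)} mention(s)" if hits else f"{term}: not mentioned")
```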

Test case generation

QA engineers typically create test cases by reviewing the Product Requirements Document (PRD), using the described functionality, user flows, and acceptance criteria to define what needs to be tested. This ensures alignment with the product’s intended behavior and provides a reliable foundation for coverage.

Traditionally, this has been a manual process requiring careful analysis of the PRD. However, with advancements in AI, tools like Barko now allow teams to feed the PRD directly into the system and automatically generate test cases. This not only accelerates the process but also helps reduce oversights and ensures consistency in interpreting requirements.

Barko Agent prompt sample #2
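
Barko Agent does this through its own chat interface, but the underlying idea is straightforward. As a rough illustration only, here is the same workflow sketched against a generic LLM API - the openai Python package, the model name, and the "prd.txt" file are all assumptions for this sketch, not Barko's implementation:

```python
# Illustrative only: feed a PRD to a generic LLM API and ask for test cases.
# This is not Barko Agent's API; "prd.txt" and the model name are assumptions.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
prd = Path("prd.txt").read_text(encoding="utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a QA assistant that derives test cases from PRDs."},
        {"role": "user",
         "content": "Generate test cases (title, steps, expected result) "
                    "covering the acceptance criteria in this PRD:\n\n" + prd},
    ],
)
print(response.choices[0].message.content)
```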

Bug report creation 

QA engineers regularly need to document unexpected or incorrect behavior observed after a series of user interactions, for example: “On iOS app version 1.2.3, when the user taps the ‘Login’ button after entering valid credentials, the app crashes instead of signing the user in.” Describe the issue like this and ask an AI tool (such as Barko) to create a bug report, and it can quickly generate a well-structured report that includes all the details you provided.

Barko Agent prompt sample #3
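
Whichever tool writes the report, it helps to know which fields you expect back so you can verify them before filing. A minimal sketch of one possible structure, using the login crash example above (the fields and values are illustrative, not Barko's actual output format):

```python
# One possible bug report structure to sanity-check AI output against.
# Fields and values are illustrative examples only.
from dataclasses import dataclass

@dataclass
class BugReport:
    title: str
    environment: str
    steps: list[str]
    expected: str
    actual: str

report = BugReport(
    title="App crashes on login",
    environment="iOS app version 1.2.3",
    steps=["Launch the app", "Enter valid user credentials", "Tap 'Login'"],
    expected="User is signed in",
    actual="App crashes",
)
print(report)
```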

Exploratory testing ideas (edge cases generation)

Exploratory testing helps uncover unexpected behaviors by testing beyond the obvious, but it relies heavily on human intuition. AI can be a good supporting tool, helping testers explore more efficiently. For example, it can:

  • Suggest edge cases based on known input
  • Propose unusual user paths 
  • Help create a wide range of test data, including edge and limit cases
Barko Agent prompt sample #4
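
Many of these edge cases are also easy to script as a baseline before asking the AI for more exotic ideas. A small sketch for a length-limited text field (the inputs are common examples, not an exhaustive set):

```python
# A hand-rolled starting set of edge-case inputs for a text field.
def edge_case_strings(max_len: int) -> list[str]:
    return [
        "",                           # empty input
        "   ",                        # whitespace only
        "a" * (max_len - 1),          # just under the limit
        "a" * max_len,                # exactly at the limit
        "a" * (max_len + 1),          # just over the limit
        "名前",                        # non-Latin characters
        "O'Brien",                    # apostrophe
        "<script>alert(1)</script>",  # markup/injection attempt
        "\u200b",                     # zero-width space
    ]

for value in edge_case_strings(32):
    print(repr(value))
```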

Test summary or report writing

Test summaries and reports are essential for communicating testing status, coverage, and results to stakeholders. They help ensure transparency, track quality progress, and support decision-making at sprint or release checkpoints.

Below is a prompt idea to use after feeding the AI the test run results file.

Barko Agent prompt sample #5
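
It is also worth cross-checking the AI’s summary against the raw numbers. A minimal sketch, assuming the results file is a CSV named test_run.csv with a status column (both the file name and the column layout are assumptions for illustration):

```python
# Cross-check an AI-written summary against the raw pass/fail counts.
# Assumes "test_run.csv" has a "status" column (passed/failed/skipped).
import csv
from collections import Counter

with open("test_run.csv", newline="", encoding="utf-8") as f:
    counts = Counter(row["status"].lower() for row in csv.DictReader(f))

total = sum(counts.values())
for status, n in counts.most_common():
    print(f"{status}: {n} ({n / total:.0%})")
```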

Priority & severity suggestions

AI can help assess a bug’s impact and urgency by analyzing the described behavior and suggesting appropriate priority and severity levels. For example, say you ran into an issue but are not sure what its priority and severity should be.

Barko Agent prompt sample #6
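
This kind of suggestion usually boils down to an impact-and-frequency heuristic, which you can make explicit to keep the AI’s answers honest. An illustrative sketch (the mapping below is an assumption, not an official scheme):

```python
# Illustrative heuristic: severity from user impact, priority from
# severity combined with how often the issue occurs. Not an official scheme.
SEVERITY = {
    "crash": "critical",
    "data loss": "critical",
    "feature broken": "major",
    "cosmetic": "minor",
}

def suggest_priority(severity: str, frequency: str) -> str:
    if severity == "critical":
        return "P1"
    if severity == "major":
        return "P1" if frequency == "always" else "P2"
    return "P3"

sev = SEVERITY["crash"]                      # e.g. the login crash above
print(sev, suggest_priority(sev, "always"))  # -> critical P1
```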

Test data generation

AI can generate structured test data from field-level requirements, such as length and character type. Imagine you are testing a contact form that contains three required fields: Name, Surname, and Email.

  • “Name” and “Surname” fields must be between 2 and 32 characters, and they must not contain digits or special symbols.
  • The “Email” field must follow a valid format, including a username part, the “@” symbol, and a domain name.
Barko Agent prompt sample #7
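
These rules translate directly into boundary and negative test data, which is also handy for verifying whatever the AI produces. A sketch that encodes the rules above loosely (real email validation is stricter than this regex):

```python
# Boundary and negative data for the contact form rules described above.
import re

NAME_RE = re.compile(r"^[A-Za-z]{2,32}$")             # 2-32 letters only
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # username@domain.tld

name_cases = [
    "Jo", "A" * 32,        # boundary-valid: 2 and 32 characters
    "J", "A" * 33,         # boundary-invalid: 1 and 33 characters
    "J0hn", "Ann-Marie",   # digit / special symbol (invalid per the rules)
]
email_cases = ["user@example.com", "user@", "@example.com", "user example.com"]

for n in name_cases:
    print(f"name  {n!r}: valid={bool(NAME_RE.fullmatch(n))}")
for e in email_cases:
    print(f"email {e!r}: valid={bool(EMAIL_RE.fullmatch(e))}")
```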

Regression scope selection/risk analysis

When new functionality has been implemented or existing functionality has changed, regression testing might be required to make sure the changes did not introduce any new issues. This is where AI can help by quickly identifying which areas might be affected. To get accurate results, feed the AI a list of possible test cases along with a clear description of the changes that were implemented.

Barko Agent prompt sample #8
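
Underneath, this is a mapping from changed areas to the test cases that touch them - something you can also approximate without AI for small suites by tagging cases. A sketch with made-up tags and changes:

```python
# Select regression scope by intersecting test case tags with changed areas.
# The tags and the change list are made-up examples.
test_cases = {
    "TC-01 Login with valid credentials": {"auth", "login"},
    "TC-02 Password reset email":         {"auth", "email"},
    "TC-03 Checkout with saved card":     {"payments", "checkout"},
    "TC-04 Profile picture upload":       {"profile", "media"},
}
changed_areas = {"auth"}  # e.g. the login flow was reworked

in_scope = [name for name, tags in test_cases.items() if tags & changed_areas]
print("Regression scope:", in_scope)
```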

What AI can’t replace

There are still key areas where human testers cannot be replaced by AI. These tasks rely on experience, creativity, and judgment - things that AI can't truly replicate.

Exploratory testing and experience

AI can follow rules and execute test cases, but exploratory testing is deeply human. It requires thinking like a real user, trying unpredictable actions, and applying past knowledge to find hidden bugs. Humans can spot familiar patterns and recall similar issues from previous projects - something AI does not retain in the same way.

Creativity and scenario invention

AI is good at generating ideas based on patterns, but it lacks genuine creativity. Human testers often imagine unusual or unexpected situations - like what happens if the user loses connection during a payment or switches tabs mid-flow.

These kinds of experience-based scenarios go beyond scripted logic. They require curiosity, imagination, and the ability to think outside the box.

Accountability for mistakes

QA is still responsible for what gets tested and what gets missed. It is up to human testers to review, validate, and adapt what the AI provides, whether it is a test plan or generated test cases. 

If a bug slips through and makes it to production, it is not the AI that gets questioned - it is the QA team. That is why critical thinking, judgment, and accountability cannot be outsourced to a machine, no matter how advanced the tool is.

Contextual judgment

Imagine you are testing a new feature in a banking app that allows users to send money to contacts via phone number. From a technical point of view, the feature works: the phone number is validated on the backend, accounts are confirmed, and recipients receive the money.

In this case, AI would most likely sign off on the feature. But as a human tester, you notice something critical: there is no confirmation step before the money is sent. Now imagine a user accidentally sends money to the wrong person, and there is no way to undo the transfer.

This is one example of the “bigger picture” that AI cannot truly evaluate, because it does not understand real-world consequences.

Challenges and considerations

AI can be a huge time-saver for QA, but it’s not something you can just set and forget. It’s a tool, not a tester. Using it responsibly means knowing where it helps and where it still needs human oversight.

1. Don’t skip reading the PRD yourself

Even if an AI tool pulls out test cases or flags key points from the PRD, it doesn’t understand what the product is supposed to do. That’s still on you. Skimming or skipping the PRD and relying only on AI suggestions is risky; things can get missed. Use AI as an assistant, not a substitute for your own understanding.

2. Some features still aren’t there yet

Things like uploading screenshots or pasting in designs straight from your clipboard aren't fully supported yet in Barko Agent. It’s coming soon, but for now, most of the work must be done in text form or by feeding the AI documents. Keep that in mind when planning how you want to use AI in your workflow.

3. Don’t assume the AI nailed it

Yes, the AI might generate a clean test case or a neat bug summary, but that doesn’t mean it’s flawless. You still need to sanity-check everything. Especially when timelines are tight, it’s tempting to take AI output at face value, but that’s exactly when mistakes slip through.

4. Prompts matter more than you think

What you get from the AI is only as good as what you put in. If your prompt is vague, confusing, or missing context, the result will reflect that. Learning how to ask the right questions or provide just enough detail is now part of the job.

5. Be mindful of what you feed it

If you're using Barko Agent internally, you're already on solid ground. It's designed to safely handle internal documents like PRDs, test cases, or spec sheets. That said, common sense still applies: don’t upload passwords, personal user info, or anything that clearly shouldn’t be shared, even in a secure setup. Just because it’s safe doesn’t mean everything belongs there.

Final thoughts

Using AI in QA isn’t about doing less - it’s about working smarter. It helps with the repetitive parts, keeps things moving, and gives you more room to focus on what needs attention.

At the same time, no tool replaces experience. Knowing when something feels off, asking the right questions, and thinking like a real user - that’s still on us. If AI can handle the busywork so we can focus on quality, that’s a win.
