How to write good test cases
Good test cases do more than just validate functionality: they ensure coverage, uncover hidden defects, and capture knowledge for all stakeholders. Acting as a safety net during development, they help identify issues early in the development life cycle, streamline automation efforts, and maintain confidence in the system as it grows.
This page explores the key principles, strategies, and practices for creating test cases that are effective, reusable, scalable, and easy to understand. Whatever your seniority, mastering the art and science of writing good test cases is an invaluable skill.
Understand the requirements
To write good tests, you must first fully understand the requirements of the feature or system you will be testing. Requirements define what a system is supposed to do and serve as the basis for both development and testing.
If any requirements are unclear, discuss them with stakeholders before you start test planning. Ambiguities or gaps in requirements can lead to incomplete or invalid test cases, resulting in missed defects and wasted resources.
Define the test objective
Before you start writing test cases, define the objective, or purpose, of each one. A test objective is a clear statement of what the test case is meant to verify.
The test objective should clearly state the goal and map directly to a specific requirement or user story. This keeps testers focused and ensures that each test stays aligned with the requirement it validates.
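If your test cases live alongside automated checks, the objective can be stated directly in the test itself. Below is a minimal sketch of this idea; the `login` function and the requirement ID REQ-042 are invented for illustration and not part of any real system:

```python
def login(username: str, password: str) -> bool:
    """Hypothetical stand-in for the system under test."""
    return username == "registered_user" and password == "correct_password"


def test_registered_user_can_log_in_with_valid_credentials():
    """Objective: verify that a registered user can log in with valid
    credentials. Maps to requirement REQ-042 (hypothetical)."""
    assert login("registered_user", "correct_password") is True
```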
Follow an established format
To ensure consistency, clarity, and ease of understanding across the testing team, you should use an established format for writing test cases.
If you’re working on a project where a format is already established, use that. But if you have to define a format from scratch, it helps to know what a typical test case template might include (a code sketch follows the list):
- Test case ID - a unique identifier for the test case
- Title - a short descriptive name for the test case
- Description - an overview of the test’s objectives
- Preconditions - any configuration or setup needed before executing the test
- Test steps - sequential steps to execute the test
- Test data - required inputs to execute the test
- Expected results - the expected outcome after each step or for the test as a whole
- Postconditions - state of the system after test execution
- Priority - the relative importance of the test case (e.g. High, Medium, or Low)
- Environment - details of the test environment, such as OS or browser
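For teams that keep test cases as code rather than in a test management tool, the same template can be captured in a simple data structure. The following Python sketch is purely illustrative; the field values are made-up examples:

```python
from dataclasses import dataclass


@dataclass
class TestCase:
    test_case_id: str            # unique identifier
    title: str                   # short descriptive name
    description: str             # overview of the test's objective
    preconditions: list[str]     # setup needed before execution
    steps: list[str]             # sequential actions to perform
    test_data: dict              # required inputs
    expected_results: list[str]  # expected outcome per step or overall
    postconditions: list[str]    # state of the system after execution
    priority: str                # e.g. "High", "Medium", "Low"
    environment: str             # e.g. OS or browser details


example = TestCase(
    test_case_id="TC-001",
    title="Registered user can log in with valid credentials",
    description="Verify that login succeeds and the user lands on the dashboard.",
    preconditions=["User account 'test_user' exists and is active"],
    steps=["Open the login page", "Enter valid credentials", "Click 'Log in'"],
    test_data={"username": "test_user", "password": "correct_password"},
    expected_results=[
        "Login page is displayed",
        "Credentials are accepted",
        "User is redirected to the dashboard",
    ],
    postconditions=["User session exists; no persistent test data was modified"],
    priority="High",
    environment="Chrome on Windows 11",
)
```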
How to write good preconditions and postconditions
Well-written preconditions and postconditions establish the context and the expected state of the system before and after test execution.
Preconditions define the setup required to execute a test case successfully. They describe the initial state of the system, environment configuration, the test data required to execute the test, and any other dependencies.
Document only the information relevant to the test case and avoid unnecessary details. Also call out any other dependencies, such as external systems, network conditions, or database states, so that potential blockers can be resolved before test execution.
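In automated suites, preconditions often become setup fixtures. The pytest sketch below shows the idea; the `create_user` helper and the in-memory user store are hypothetical stand-ins, and in a manual test case the same facts would simply be listed under "Preconditions":

```python
import pytest

_users: dict[str, dict] = {}


def create_user(username: str, active: bool = True) -> dict:
    """Hypothetical helper that provisions a test user."""
    user = {"username": username, "active": active}
    _users[username] = user
    return user


@pytest.fixture
def active_user() -> dict:
    # Precondition: a registered, active user exists before the test runs.
    return create_user("test_user", active=True)


def test_active_user_can_be_looked_up(active_user):
    # The test relies only on the state established by its precondition.
    assert _users[active_user["username"]]["active"] is True
```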
Postconditions define the expected state of the system after the test has been executed, regardless of whether the expected results were achieved.
Well-written postconditions are specific, as measurable as possible, and described in objective terms. Good postconditions also ensure that the environment is restored to a known state and that tests can run in a logical order.
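In automated tests, a postcondition such as "the environment is restored to a known state" is typically enforced with teardown logic that runs whether the test passes or fails. The following pytest sketch illustrates this with a made-up in-memory inventory standing in for a real database:

```python
import pytest

inventory = {"widgets": 10}


@pytest.fixture
def reserved_widget():
    inventory["widgets"] -= 1      # setup: reserve one widget for the test
    yield
    inventory["widgets"] += 1      # postcondition: stock is restored to 10


def test_reserving_a_widget_reduces_stock(reserved_widget):
    # The expected state is specific and measurable, as a postcondition should be.
    assert inventory["widgets"] == 9
```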
How to write good test steps
Writing clear and precise test steps is essential for creating effective, easy-to-understand test cases. Steps should follow a logical sequence and be written in simple language so that anyone can execute them. Keep each step free of unnecessary detail.
Avoid combining multiple actions in a single step, as doing so complicates execution and makes failures harder to debug later on. Likewise, avoid including too many steps in your test case; if there are too many, consider splitting the test case.
Avoid referencing specific styling details in your steps, as these are likely to change later on and result in a lot of maintenance work. If you need to reference elements by their specific names, wrap the names in quotation marks to improve readability. If a test case requires specific test data and that data is not specified separately, mention it in the relevant test step.
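The same principles carry over when steps are automated: one action per step, no styling details, and quoted element names. The sketch below illustrates this with a hypothetical page object; the `Page` class is invented purely for the example:

```python
class Page:
    """Hypothetical minimal page object used only for illustration."""

    def __init__(self):
        self.fields = {}
        self.current_screen = "login"

    def fill(self, field_name: str, value: str):
        self.fields[field_name] = value

    def click(self, button_name: str):
        if button_name == "Log in" and self.fields.get("Username"):
            self.current_screen = "dashboard"


def test_login_with_one_action_per_step():
    page = Page()
    page.fill("Username", "test_user")          # Step 1: enter "Username"
    page.fill("Password", "correct_password")   # Step 2: enter "Password"
    page.click("Log in")                        # Step 3: click "Log in"
    assert page.current_screen == "dashboard"   # Expected result of the steps
```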
Good practice: if your test cases are formatted as “user does something”, use gender-neutral pronouns (they/them) where possible.
How to write good expected results
Writing clear and precise expected results is a crucial part of test planning, as they are the basis for determining whether a test passes or fails during execution.
Good expected results include all intended outcomes, covering both the functional and non-functional aspects of the system. Functional expected results may include validating the correctness of data shown, users being redirected to other screens, updates to a database, etc. Non-functional aspects of the system may include response times, system performance, security checks, etc.
When complicated test steps produce intermediate outcomes, it is wisest to define the expected result after each step, not only the final result after all the steps have been executed.
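When steps are automated, intermediate expected results become assertions placed after each step, so a failure points at the exact step that broke. The sketch below uses a made-up shopping cart purely for illustration:

```python
class Cart:
    """Made-up in-memory cart used only to illustrate per-step assertions."""

    def __init__(self):
        self.items = []

    def add(self, item: str):
        self.items.append(item)


def test_adding_items_is_verified_after_each_step():
    cart = Cart()

    cart.add("book")                       # Step 1
    assert cart.items == ["book"]          # Expected result for step 1

    cart.add("pen")                        # Step 2
    assert cart.items == ["book", "pen"]   # Expected result for step 2
```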
Positive and negative scenarios
Including both positive and negative scenarios in test planning is critical for ensuring test coverage and properly assessing the overall quality of the application.
Positive tests verify that the application behaves as expected with valid inputs and correct conditions, while negative tests check the system’s behavior when it encounters unexpected conditions or invalid inputs.
While positive scenarios ensure that core functionality works reliably, relying on them alone is not enough to guarantee quality. Negative scenarios help identify vulnerabilities, edge cases, and the error-handling capabilities of the system under test. Consider using testing techniques such as boundary value analysis and equivalence partitioning to identify positive and negative scenarios for implementation.
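As an illustration, the pytest sketch below derives positive and negative cases with boundary value analysis, assuming a hypothetical rule that usernames must be 3 to 20 characters long:

```python
import pytest


def is_valid_username(name: str) -> bool:
    """Hypothetical rule: usernames must be 3 to 20 characters long."""
    return 3 <= len(name) <= 20


# Positive scenarios: values exactly on the valid boundaries.
@pytest.mark.parametrize("name", ["abc", "a" * 20])
def test_valid_usernames_are_accepted(name):
    assert is_valid_username(name)


# Negative scenarios: values just outside the boundaries.
@pytest.mark.parametrize("name", ["", "ab", "a" * 21])
def test_invalid_usernames_are_rejected(name):
    assert not is_valid_username(name)
```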
Organize and prioritize
Well-organized test cases allow the testing team to systematically cover all functionalities, identify gaps, and ensure that no critical areas are overlooked. Tests can be categorized by module, feature, or test type, which makes large test suites easier to manage.
Equally important, each test case should be prioritized based on its importance and impact on software quality. The typical priority levels are High, Medium, and Low, assigned based on business impact, risk, and frequency of use.
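In automated suites, one common way to organize and prioritize is with markers or tags that can be filtered at run time. The pytest sketch below uses invented marker names; custom markers like these would need to be registered in the pytest configuration (e.g. pytest.ini) to avoid "unknown marker" warnings, and could then be selected with a command such as `pytest -m high`:

```python
import pytest


@pytest.mark.high
@pytest.mark.checkout
def test_order_can_be_placed_with_valid_payment():
    assert True  # placeholder body for a high-priority checkout test


@pytest.mark.low
@pytest.mark.profile
def test_profile_avatar_tooltip_is_shown():
    assert True  # placeholder body for a low-priority cosmetic test
```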
Review
Have other team members or stakeholders review your test cases. It’s an efficient way to catch errors or omissions early on. A thorough review ensures that the test plan aligns with the requirements and does not miss any of them. Remember, each test case should be directly connected to a specific requirement, and, conversely, every requirement and its scenarios should be covered by tests.
During review, it’s also important to verify the formatting and readability of the test case, as well as to check spelling and grammar.
Maintenance
Maintenance is an important step in the software testing life cycle, as it ensures that the test strategy remains effective and relevant as the project progresses. Software systems are dynamic: requirements change, new features are developed, and existing parts of the system are modified and improved. Test cases should be updated whenever requirements or functionality change.
Any changes to the tests should be tracked. This gives you a history of the updates, which can prove valuable in the long run and also promotes better traceability.