10 Guiding Principles for Effective E2E Test Automation in CI/CD | TestDevLab Blog



Test automation is the standard for high-quality software development within organizations focused on continuous development. Modern teams need just the right level of feedback at the right time to ensure their development is fast, effective, and of high quality. To provide this, the overall testing process must be integrated into software delivery. Building a test automation framework that scales over the long term starts with a proper test automation strategy, paired with the matching technical expertise of test automation engineers.

This article is intended to help test automation engineers, like you and me, evaluate their current work practices, set their priorities straight, and think about the long-term milestones and goals of their current or upcoming projects. Regardless of your seniority level, it is always good to have a handy checklist that reminds you how to increase the quality of your work and become even better at what you do. Remember, even test scripts need to undergo some trial-and-error testing to improve their quality.

So, here are the 10 guiding principles for effective end-to-end test automation in CI/CD that will help you evaluate and improve your automation efforts.

Keep the system under test clean

The SUT should be in the same state before and after the test

Keeping the system “clean” from any previous test outputs helps with getting clear and unbiased results.

The proper way to apply this principle is to:

  • Start each test by setting the required preconditions.
  • “Dust” all test assets off the “system shelves” after the test has been completed.

These are two necessities every test automation engineer should be aware of. The reason is simple: E2E testing should not interfere with the performance of the application (performance testing being an exception), and test scripts should be reusable, not dependent on the presence or absence of test data from previous executions. This means getting rid of already-used test resources to avoid unnecessary overload, so the same resources can be reused in other tests and results are not affected by unwanted duplication.
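The setup/cleanup pairing described above can be sketched with stdlib `unittest` fixtures. The `FakeUserStore` here is a hypothetical in-memory stand-in for a real system under test:

```python
import unittest

# Hypothetical in-memory stand-in for the system under test.
class FakeUserStore:
    def __init__(self):
        self.users = {}

STORE = FakeUserStore()

class TrialSignupTest(unittest.TestCase):
    def setUp(self):
        # Precondition: create exactly the data this test needs.
        STORE.users["e2e_user"] = {"plan": "trial"}

    def tearDown(self):
        # "Dust off the shelves": remove everything the test created, pass or fail.
        STORE.users.pop("e2e_user", None)

    def test_trial_user_has_trial_plan(self):
        self.assertEqual(STORE.users["e2e_user"]["plan"], "trial")
```

Because `tearDown` runs even when the test body fails, the store is guaranteed to be in the same state after the test as before it.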

Automation is development


Automation, just like development, should follow best coding practices

Test automation engineers should strive for code consistency, reusability and scalability.

Therefore, make sure to:

  • Plan and design your code
  • Strive for simplicity, clarity, and brevity
  • Avoid hard-coded values
  • Use meaningful variable or method naming
  • Document the code with wise use of comments
  • Practice insightful code reviews
  • Revise, refactor, and keep code up-to-date

Automation, same as development, is a team effort and as such should be well organized, structured, and easy to understand, so that anyone can easily pick up where someone else left off. It should also be stable and endure the test of time gracefully. To ensure this, devise your tests mindfully and stay abreast of the latest coding practices and improvements.
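One of the practices above, avoiding hard-coded values, can be sketched with a small config object. The URL and timeout below are placeholder assumptions, not real project values:

```python
from dataclasses import dataclass

# Hypothetical example: keep environment-specific values in one place
# instead of hard-coding them inside individual test scripts.
@dataclass(frozen=True)
class TestConfig:
    base_url: str = "https://staging.example.com"  # placeholder URL
    default_timeout_s: int = 10

CONFIG = TestConfig()

def login_url(config: TestConfig = CONFIG) -> str:
    """Build URLs from config so an environment change touches one line."""
    return f"{config.base_url}/login"
```

Switching the whole suite to another environment then means constructing one new `TestConfig`, rather than hunting down string literals across dozens of tests.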

One test, one condition


Limit your tests to check only one condition per test

Manual test cases cannot always be taken for granted and applied as automation test cases without making some needed tweaks. Usually, this involves adding some additional checks that would verify in automation what is immediately visible to the human eye in manual testing. On the other hand, in E2E automated testing there are also situations where you need to combine several manual test cases into one automated test simply because the entire user journey needs to be tested as a whole.

Different situations require different approaches. However, there are two things that you should always be careful about:

  • Do not over-engineer automated scripts by adding too many conditions and trying to verify every state or element in a single test.
  • Keep single tests granular and linear, meaning that you focus on that one condition that needs to be checked and you verify it thoroughly.

If more than one condition is being tested, it is better to split the method into two distinct tests. This gives a clearer picture of what a failing test means and where the bug is located. The sooner you can locate the failure, the faster the issue will be reported and, hopefully, resolved. Or, in the case of an outdated test script, the faster the script will be fixed and ready for a rerun.
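As a minimal sketch of splitting conditions apart, the cart helpers below are hypothetical stand-ins for real UI checks. If the total calculation breaks, only `CartTotalTest` goes red, pointing straight at the bug:

```python
import unittest

# Hypothetical cart helpers standing in for real UI interactions.
def cart_total(items):
    return sum(price for _, price in items)

def cart_count(items):
    return len(items)

class CartTotalTest(unittest.TestCase):
    """One condition: the displayed total is correct."""
    def test_total(self):
        self.assertEqual(cart_total([("book", 10), ("pen", 2)]), 12)

class CartCountTest(unittest.TestCase):
    """One condition: the item counter is correct."""
    def test_count(self):
        self.assertEqual(cart_count([("book", 10), ("pen", 2)]), 2)
```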

Proof or it didn’t happen

Tests should include proof of success/failure — screenshot, video, log file

Integrating automated tests into CI/CD pipelines means frequent, on-demand execution of all available test scripts. Analyzing execution failures can be a serious time drain when the automation team is under pressure to provide quick feedback, not to mention that deployment to production depends on that feedback.

That is when visual proof makes all the difference. It requires you to:

  • Set up proper reporting accompanied by visual proof of what went wrong in failing steps (such as screenshot or video).
  • Add a reference pointer of how to locate the failing method in the code and start debugging (such as log files).

One very important advantage of automated testing is that it provides the opportunity to generate real-time reports containing detailed information about the test execution. This is a powerful tool at the disposal of all automation engineers, and it should be taken advantage of to the fullest by including as many details of the failures as possible. Not spending the time to configure reports properly from the start will mean spending even more time later figuring out issues and trying to fix them.

Avoid the domino effect


Make test scripts independent of the order of execution

It is tempting to use the order of execution to decrease total execution time, usually by reusing a previous test script as a precondition for the next one. It is easy to be fooled into believing that precious time has been saved.

However, best practice says that test automation engineers should refrain from doing so, to avoid a streak of red alerts triggered by one failing test case. This will inevitably happen at some point, and you will need to spend a tremendous amount of time investigating the root cause of the failure (because it will not be clear why several tests are failing with different errors). By avoiding inter-test dependencies, you eliminate additional unwanted uncertainty in your test infrastructure.

What you can do instead is:

  • Configure tests to run in parallel to cut down total execution time without the risk of failing all test cases in a row.
  • Use tags to make different test suite combinations and get faster results in a desired scope (regression, smoke, sanity, any specific feature).
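A minimal sketch of the tagging idea, framework-agnostic on purpose (pytest users would reach for `@pytest.mark` labels instead; the test names and tags below are hypothetical):

```python
# Attach scope labels to tests so suites can be assembled on demand.
def tag(*labels):
    def wrap(fn):
        fn.tags = set(labels)
        return fn
    return wrap

@tag("smoke", "checkout")
def test_checkout_smoke():
    pass

@tag("regression")
def test_full_regression():
    pass

def select(tests, wanted):
    """Pick only the tests whose tags intersect the requested scope."""
    return [t for t in tests if set(wanted) & getattr(t, "tags", set())]
```

With a scheme like this, a CI job can run just the smoke scope on every commit and reserve the full regression scope for nightly builds.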

Make preconditions stable


Use API calls for all precondition steps to avoid unnecessary flakiness

Sometimes preconditions are just too lengthy and take up too much of the execution time. Yet, they are necessary and you cannot just skip them. This makes the test case more prone to random failure in the precondition steps, causing the whole test to fail on a step that is not even in the main focus of the test.

To avoid this situation, it is better to:

  • Replicate all precondition GUI steps with already tested and reliable API calls.
  • Continue to test by performing actions on the UI once required conditions are set.

Programming interfaces tend to provide more stability than graphical ones. Therefore, test engineers should use them to their advantage even in E2E tests, which mainly focus on testing the user experience on the GUI. This significantly decreases execution time, while also offering a more stable way to reach the most important step that needs to be validated on the GUI.
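The pattern can be sketched as follows; `ApiClient` and `Ui` are hypothetical stand-ins for a real HTTP client and browser driver:

```python
# Hypothetical stand-ins: ApiClient for the backend API, Ui for the browser.
class ApiClient:
    def __init__(self):
        self.orders = []

    def create_order(self, sku):
        # One fast, already-tested API call replaces many fragile UI clicks.
        self.orders.append(sku)

class Ui:
    def __init__(self, api):
        self.api = api

    def order_badge(self):
        # The only thing this E2E test actually validates on the GUI.
        return str(len(self.api.orders))

def test_order_badge_shows_one():
    api = ApiClient()
    api.create_order("sku-123")   # precondition set via API, not the GUI
    assert Ui(api).order_badge() == "1"
```

The precondition (an existing order) is established in one reliable call, and the test spends its GUI time only on the step it actually exists to verify.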

Stay CSS resistant

Use unique locators to reduce code maintenance after design changes

Design changes are quite frequent throughout software development, especially in the initial stages when the product is undergoing many different combinations until the right one is found. This is something that we, as QA engineers, cannot and should not argue against, as design is an important factor in the success of any application.

One thing we can do is be prepared for it and plan for it in advance. The sooner we do that, the better for our automation efforts. To get the most out of your preparation time, communicate the need for unique element selectors to developers well in advance so they can also start planning for it.

Keep in mind to:

  • Request special locators from the development team that will only be used for testing needs (this is the best way to create tests that are resistant to UI changes).
  • Select elements by locators that have no other purpose in the code to significantly reduce maintenance time.

This way, you will never have to worry if a class or even an ID might get changed with the next deployment and mess up your locators.
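A small sketch of the idea: centralize selectors in one registry and prefer test-only attributes over styling classes. The `data-testid` values below are hypothetical examples:

```python
# Centralized selector registry; values are hypothetical examples.
LOCATORS = {
    # Stable: exists only for tests, survives any CSS refactor.
    "login_submit": '[data-testid="login-submit"]',
    # Fragile (avoid): styling classes change with every redesign, e.g.
    # "login_submit": ".btn.btn-primary.col-md-2",
}

def locator(name: str) -> str:
    """Single lookup point: a UI change means editing one dictionary entry."""
    return LOCATORS[name]
```

When a design pass does change something, maintenance shrinks to updating one registry entry instead of grepping every test for a brittle class chain.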

Execute and fail fast

Make execution and failure time shorter and be mindful with the retries

The speed of automated execution usually lures us into believing that every failure is fast and painless. However, that is rarely the case. This is where retries, timeouts, and headless executions can form a deadly combination.

Even though they are primarily used as a means of optimization, you have to be aware of the downside they can have when improperly combined. Enabling a large number of test retries along with a high default timeout can seriously extend execution time in headless mode when there is a failing step:

(number of retries) × (timeout in seconds per element lookup) = total time added per missing element

Yet, if there is a show-stopper issue with the application, testing should be stopped as soon as possible. The test script should be able to fail on assertions for such P0 issues and abort the test run, thus ringing the alarm bells.

To avoid the risk of having a pipeline execution run for too long when there is a serious issue, you need to:

  • Place appropriate wait-until assertion functions for validating the presence of crucial elements throughout the test steps without adding any unnecessary waiting times.
  • Optimize for the least number of retries and just the right timeout that could actually make a difference if an element is not being located due to either high execution speed or a sudden plunge in network conditions.
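The retry × timeout arithmetic above can be made concrete with a tiny helper (the numbers in the comment are illustrative, not recommendations):

```python
def worst_case_wait_s(retries: int, timeout_s: float, missing_elements: int = 1) -> float:
    """Worst-case time burned when elements never appear:
    retries × timeout, per missing element."""
    return retries * timeout_s * missing_elements

# Illustration: 3 retries with a 30 s timeout burn 90 s on a single
# missing element; trimming the timeout to 10 s cuts that to 30 s.
```

Running this calculation against your own pipeline settings quickly shows whether a "safe" default timeout is quietly costing minutes per failed build.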

Isolate expected failures


Execute only the automated tests that are expected to succeed

The start of an automation project is more about quantity, with all efforts focused on automating as many of the selected test cases as possible. However, as the project progresses and CI/CD pipeline execution is up and running, the quality of the automated tests starts taking its rightful precedence over quantity.

At this point, when deployment starts to be dependent on the execution of automated builds, it is crucial that only those tests that are expected to pass are actually running. This means that you should:

  • Skip the execution for all test cases that could be failing due to outdated methods or element locators until the respective test script is made functional and aligned with the latest changes in the system.
  • Clear out false negatives to avoid turning a blind eye to the ‘noise’ made by the many failing test cases and miss a failure that is caused by an actual bug.
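Skipping a known-broken test can be sketched with stdlib `unittest` (pytest offers `skip`/`xfail` markers for the same purpose); the test names and the skip reason are hypothetical:

```python
import unittest

class CheckoutTests(unittest.TestCase):
    @unittest.skip("locator outdated after redesign; re-enable once the script is fixed (hypothetical reason)")
    def test_legacy_checkout(self):
        self.fail("would produce a known false failure and pollute the report")

    def test_new_checkout(self):
        self.assertTrue(True)
```

The skipped test still appears in the report with its reason, so it stays visible as work to be done instead of silently disappearing or drowning real failures in noise.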

Feedback from builds should be consistent and accurate to help the development team quickly decide whether a deployment is good to proceed.

Get both sides of the story

Validate whether your test will actually fail if there is a bug

A good test script passes when there are no issues found, but also fails when there is a reason for it. As a test engineer, you need to verify and validate not only the quality of the SUT, but also the quality of your tests and make sure that they are not faulty in the first place.

The best way to prevent an issue from slipping away is to:

  • Use every chance to raise custom exceptions that handle unexpected behaviour when it occurs.
  • Throw proper errors together with well-written console logs to identify potential issues without any doubt about their origin.

Following this principle will ensure any potential issue is obvious and easy to spot. It is very important that the automation code does not create false positives by marking a positive outcome even when something in the application is broken. After all, the purpose of testing is to confirm the presence of defects, not the absence, right?
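A minimal sketch of the custom-exception idea; the exception name, helper, and element state dictionary are hypothetical:

```python
# A custom exception makes the failure's origin unmistakable in logs.
class ElementNotVisibleError(AssertionError):
    """Raised when an element exists in the DOM but is not visible to the user."""

def assert_visible(state: dict, name: str):
    """Fail loudly, with context, instead of letting a broken state pass silently."""
    if not state.get("visible", False):
        raise ElementNotVisibleError(
            f"{name!r} is not visible to the user; observed state: {state}"
        )
```

Because the check raises rather than merely logging, a broken application state cannot slip through as a green result, and the error message carries enough context to debug without rerunning the suite.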

Key takeaways

To sum up, automating tests is more than a one-time job of writing code that works. This code must be maintained so it keeps working every day, across every browser and on every user device, after every back-end or front-end change. By following the best practices above, you will amplify the speed and stability of your automated test infrastructure and make scaling up much smoother. One final thought to keep in mind: the actual implementation of each principle will differ depending on the test framework and programming language you use, but the idea and the result apply to any technology stack at your disposal.

If you agree with the principles listed and would like to bring them into action, feel free to download this summarized and easy-to-use checklist and start ticking those progress checks!

Want to introduce test automation in your existing testing processes? Our experienced test automation engineers can help you boost your testing efforts and increase efficiency by implementing the best testing practices and using various automation tools. Get in touch and let’s discuss your project.


