Write automated tests with repeatable results

Writing automated tests is no longer the biggest challenge in the testing community. Writing reliable automated tests is. Too often, tests that were once written end up in the garbage bin or thrown into oblivion. They are unreliable, and people simply ignore their runs, because they have a history of failing for various random, invalid reasons.

One of the reasons for having such useless tests is that they are written either in haste or carelessly. Many times people have tasks they need to accomplish within a pre-allocated timeframe, usually to finish the testing during the sprint, or they have to write a specified number of tests in one day. In these cases, once a test gets written and passes at least once, people are happy to cross it off their list and move on to the next one, even if, out of 3 test runs, it only passed once. Managers are probably happier hearing that 100 new tests were written than hearing that 5 sturdy and reliable ones were. They like math and reporting.

But we are testers, and we should not be concerned with these numbers. Our main focus should be to write tests that are reliable, which means a lot of things. Among others, a good automated test must have repeatable results: if the software under test does not change, each test run must have the same outcome. If you create a test that checks the happy flow, and all the code behind the happy flow is correct, that test should pass every time it runs on the same code version, as long as there is no bug in the software.

How to achieve repeatable results?

Well, to start with, run any test you write more than three times. I usually run mine about 20 times around the time I finish writing them. But I also run them several times throughout the day, to see how they are affected by the different activity going on in the environment where they run, and several times throughout the week, simply to have them run against different ‘states’ of the environment. If you are doing continuous delivery that should be no problem, as you can schedule periodic CI jobs to run them. Even if you are not, you can just pick them up and run them yourself.
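
For the repeated runs themselves, you don’t have to trigger the test 20 times by hand. Here is a minimal sketch, assuming JUnit 5, of how a test can be annotated to run many times in a row; the `LoginService` below is just a hypothetical stand-in for whatever you are actually testing:

```java
import org.junit.jupiter.api.RepeatedTest;
import static org.junit.jupiter.api.Assertions.assertEquals;

class RepeatableLoginTest {

    // Hypothetical stand-in for the system under test;
    // replace it with your real page object or API client.
    static class LoginService {
        String login(String user, String password) {
            return "correct-password".equals(password) ? "WELCOME" : "DENIED";
        }
    }

    private final LoginService loginService = new LoginService();

    // JUnit 5 runs this test method 20 times in a row; a test with
    // repeatable results must pass on every single repetition.
    @RepeatedTest(20)
    void validCredentialsShouldLogIn() {
        assertEquals("WELCOME", loginService.login("user", "correct-password"));
    }
}
```

If the test is flaky, one of the repetitions will usually fail, and the test report will tell you exactly which one.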

Doing that will quickly reveal any issues in your tests that need addressing. If you are doing front-end testing, for example, several runs will highlight any timing issues and any tweaks you need to implement because pages do not load at the same speed each time you interact with them.

When you run tests enough times, it can seem as if every failure has a different cause. Look at the failures one by one and figure out why they happen: is it a bug, an environment issue, or an unavoidable delay or some similar event? If it’s a bug, raise it with the team. If it’s an environment issue, raise it with the relevant people. If it’s just one of those timing issues, update the test to handle the timing. Don’t make the test pass no matter what; make it pass by addressing its bottlenecks.
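
For the front-end timing case specifically, the usual fix is to stop relying on fixed sleeps and wait for an explicit condition instead. Here is a small sketch of what that could look like with Selenium (assuming Selenium 4; the locator parameter and the ten-second timeout are placeholders for your own values):

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class TimingAwareClick {

    // Instead of a fixed Thread.sleep(), wait until the element is actually
    // clickable, up to a maximum of 10 seconds. The wait returns as soon as
    // the condition is met, so fast page loads are not penalized.
    public static void clickWhenReady(WebDriver driver, By locator) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        WebElement element = wait.until(ExpectedConditions.elementToBeClickable(locator));
        element.click();
    }
}
```

An explicit wait polls the condition and moves on the moment it holds, so a slow page gets the time it needs while a fast one doesn’t slow the test down. That is what “addressing the bottleneck” looks like, as opposed to retrying blindly until the test goes green.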

The update might be tricky at times; you might need to do plenty of debugging and have lots of patience to get to a green test. But consider it a challenge, and once the challenge is solved, you will feel good about having created a reliable test that will be consistent across all its future runs.

The best thing about having a reliable test with repeatable results is that it will require less maintenance in the future and no re-running, hence it will not add any extra run time when you are running a huge suite of tests at once (like when you are preparing for a release).

Therefore, if you see a test that does not pass on each run when it should, don’t hesitate to take a look at it and transform it into a beautifully reliable test. Managers are also happy to hear that you spend the extra time on writing a good test only once, as opposed to spending even more time whenever the test randomly fails when it shouldn’t.

2 responses to “Write automated tests with repeatable results”

  1. Meredith Courtney

    Different environments, different expectations – and mine were “automation isn’t done until the module has been accepted by the regression test team, who will bounce it back to you for non-repeatable results (among other things)”. Not every company has or needs that, so I really shouldn’t have been surprised that it’s a real problem for people. Could you get some of the same “tests must meet acceptance criteria” effect by trading with another tester, as in “you run the tests I wrote, and I’ll run yours”? And maybe then go on to trading code reviews. But yes, for a person who has a lot of tests that “randomly” fail, definitely start by running tests a lot, and investigating failures. The first few iterations will be painful, but one ends up learning how to write tests that are more stable to begin with.

    You write interesting stuff, and since I know very little about testing GUIs, I’ve bookmarked your blog so I can read more about that. Thanks for writing!

    1. iamalittletester

      Thanks for following! Trading with other testers is also a good approach, especially if the setups differ between the computers you run tests from (say, one has a Mac and one has Windows, if applicable). Code reviews are also an essential part of the automation process; just as in development, they should become part of the Definition of Done for automation code. However, some issues behind “non-repeatable results” are not that obvious from code reviews in general. So running your tests frequently will, at some point, reveal the non-obvious tweaks that might be required.
