Automated tests are supposed to help validate that the product under test works as expected. For that, a key ingredient is required in your automation: the actual testing. Just as you visually inspect that the expected behavior occurred when performing a test case yourself, your automation needs to have verification in place. And that verification needs to be implemented by you. Here are some thoughts on this.
As usual, let me start off by giving an example that I wish were made up, but is in fact a true story. A while ago I had an encounter with an automated UI web test, written by someone else, that I had to run. I ran it the first time without looking into the actual code, just to see what happens, and it passed. I thought 'Great'. Then… I ran it again, this time actually watching it run in real time in the browser. There was clearly an error on the page, due to which the test should not have passed. That is when I realized something weird was happening within the test code, so I started reading it thoroughly.
The test was filling out the information on the page (by simply typing into a set of mandatory fields). The last step of the test was: click the page's submit button. And that's it. Just a succession of send_keys calls, then a click (yes, it was Selenium, but imagine this as a sequence of 'fill's or 'type's followed by a 'click' in other frameworks). No checks whatsoever at any step of the test. None.
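To give an idea of what that looked like, here is a minimal sketch of such a check-free 'test', assuming Python with Selenium (the URL, field IDs, and values are made up for illustration, not taken from the actual test):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/form")

# Type into the mandatory fields, then click submit. Nothing else.
driver.find_element(By.ID, "first-name").send_keys("Jane")
driver.find_element(By.ID, "last-name").send_keys("Doe")
driver.find_element(By.ID, "email").send_keys("jane.doe@example.com")
driver.find_element(By.ID, "submit").click()
# The 'test' ends here. Nothing verifies that the submission succeeded.
```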
Because of the way the page was implemented, any one of the following situations could have occurred without the test ever failing (one of them actually did happen while I was running the test that second time):
- the submit button would not have triggered any back-end call. Therefore no 'success' message or new 'success' page would have been shown, and the submission would not have actually been sent
- the submit button would have triggered a back-end call, but an error would have been returned. Therefore the submission would not have gone through. The cause could have been either a server-side error or an input error (invalid data provided on the page that did not pass the page's validation)
- the submit button would have triggered a successful back-end call, and the correct success message would have been displayed, but an extra error message (which should not have appeared) could have been shown at the same time
- the submit button would have triggered a successful back-end call, and all the expected behavior would have been present. Yes, it could have happened. But without an adequate check, the test would not have actually signaled this situation, because, as I mentioned, there was no check in place. Just because the test passed does not mean it signaled that the expected behavior occurred. Without checks, that piece of code is not a test – it is just a succession of actions and nothing else (a sketch of what adequate checks could look like follows this list)
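Continuing the Python Selenium sketch from above, adequate checks could look roughly like this (the selectors and message text are assumptions for illustration, not the actual page's):

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)

# Wait for the success message, confirming the submission actually went through.
success = wait.until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, ".success-message"))
)
assert "success" in success.text.lower()

# Also check that no error message is displayed alongside the success one.
errors = driver.find_elements(By.CSS_SELECTOR, ".error-message")
assert not errors, f"Unexpected error message(s): {[e.text for e in errors]}"
```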
Looking a little more in depth at the invalid input data situation: it could have been caused by the value of an input being reset when the focus moved away. But also consider what happens when you type twice into the same field – do you clear the first value before typing again, so that the second value (the one you expect to be there) actually ends up in the field, instead of being appended to the first one? Or: if the data you wrote into the field was randomly or automatically generated by the test script based on a pattern – was it actually valid test data? Another weird one is when you identify the input to type into by a certain identifier (e.g. a CSS selector), but in fact you should type into another element for that information to show up in the initial field. And there could be other factors causing what you think you typed into a field to differ from what is actually there when submitting the page.
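As an example, guarding against both the appending and the reset-on-blur pitfalls could look like this sketch (the field ID and value are, again, hypothetical):

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

field = driver.find_element(By.ID, "email")

# Clear any existing value first, so the new text is not appended to it.
field.clear()
field.send_keys("jane.doe@example.com")

# Move the focus away, then verify the field still holds what was typed
# (guards against the value being reset on blur).
field.send_keys(Keys.TAB)
typed = driver.find_element(By.ID, "email").get_attribute("value")
assert typed == "jane.doe@example.com", f"Field value was reset or altered: '{typed}'"
```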
There might be a few cases where the 'test', even with no checks in place, could have failed: when any of the inputs or the submit button was not present, an exception would have been thrown when trying to interact with them. But that's all. Any other state the page could have been in would have produced a successful, green 'test' run.
Now keep in mind that a test cannot guess what a 'success' situation looks like. In the above example, it would be: the inputs were correctly filled in with valid data, the submission went through, and the success message was displayed (without any error messages being present). UI tests can at most signal that certain states of the elements you are interacting with are incorrect – you want to type into a field but it is not enabled, or it is not even present. However, the UI test (without you implementing the relevant check) cannot guess whether the text you type into an input is valid, or even whether any value was typed into the input at all.
So it is very important, and it is up to you, to implement all the checks that will properly highlight an incorrect state of the product you are testing – and also the checks that confirm the success state. Based on the above, here are some personal tips on how to ensure that your automated test is an actual test, and not just a succession of steps that do not highlight the right things:
- properly understand the way the product you are testing behaves, so you can implement the relevant checks
- use wait-based methods to type into fields. Make sure these methods contain both the typing part and a validation that, once the focus moves away, the field still shows what you typed. Generally speaking, use wait-based methods for all your screen interactions: they help a lot with the reliability of the test, but also with the validation part (a sketch of such a method follows this list)
- any action, as in physics, comes with a reaction. Therefore, if you have a chunk of code that represents a page interaction and is not followed by a check, it is irrelevant and should probably be removed. Again, that is because the test cannot guess your expectation regarding what should happen after the action your code performs. And there is no point in performing an action if you have no expectation about what happens afterwards
- correctly use any programming language constructs you need in testing. For example, incorrect ifs and try/catches are among the largest contributors to the pool of incorrectly passing tests (e.g. you are not considering the 'else' branch or the 'catch' branch) – see the second sketch after this list
- test that the checks you implemented properly signal both a failure and a success
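To make the wait-based typing tip concrete, here is a minimal sketch, again assuming Python with Selenium (the locator and value in the usage line are hypothetical). It waits for the field to be interactable, types into it, and then waits until the field actually holds the typed value:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def type_and_verify(driver, locator, text, timeout=10):
    """Wait until the field is interactable, type into it, then wait
    until the field actually holds the typed text."""
    wait = WebDriverWait(driver, timeout)
    field = wait.until(EC.element_to_be_clickable(locator))
    # Clear first, so the new text does not get appended to a previous value.
    field.clear()
    field.send_keys(text)
    # Re-read the value until it matches what we typed (or the wait times out).
    wait.until(lambda d: d.find_element(*locator).get_attribute("value") == text)

# Usage, with a hypothetical locator:
# type_and_verify(driver, (By.ID, "email"), "jane.doe@example.com")
```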
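And as an illustration of the try/catch pitfall, here is a sketch (same Python Selenium assumptions, hypothetical selector): the first version silently swallows the failure and keeps the test green, while the second lets the failure surface:

```python
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# Anti-pattern: the exception is caught and ignored, so a missing success
# message never fails the test.
try:
    driver.find_element(By.CSS_SELECTOR, ".success-message")
except NoSuchElementException:
    pass  # nothing happens here - the test stays green no matter what

# Better: fail explicitly when the expected element is not there.
try:
    driver.find_element(By.CSS_SELECTOR, ".success-message")
except NoSuchElementException:
    raise AssertionError("Success message was not displayed")
```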
I will come back with further tips in the following blog posts. Thanks for reading!
