During the agile process, we get new stories to test every sprint. Many stories get subtasks to automate the checks. During the automation work, you should regularly check whether your code's quality is good enough. You ask yourself: "Can I improve the code?"

You improve the code to benefit future development. Test automation improves efficiency because you automate more and more checks, which makes the share of manual testing smaller every sprint. You also improve the code to give it more features, so you can do more with it. And of course, better readability is one of the goals of improving your code.

Duplicate code shall not pass!

A best practice is to avoid duplicate code. After all, if there is an error in duplicated code, you have to fix it in more than one place. That is why it is better to put similar things in a common function. You can even move that function into a library later, so all test suites can use it. If you then correct an error in that code, it is suddenly fixed for all projects.

You can also put similar, but not identical, code in a common function: use parameters for the parts that differ.

Here is an example of duplicated code:

array_a = [2, 3, 5, 6]
sum_a = 0
for number in range(4):
	sum_a += array_a[number]
average_a = sum_a / 4

array_b = [5, 7, 10, 4]
sum_b = 0
for number in range(4):
	sum_b += array_b[number]
average_b = sum_b / 4

This code can be rewritten with a function that calculates the average for us:

def calculate_average(numbers):
	total = 0
	for number in numbers:
		total += number
	return total / len(numbers)

Now, you can call the average function twice with the correct input:

array_a = [2, 3, 5, 6]
array_b = [5, 7, 10, 4]
average_a = calculate_average(array_a)
average_b = calculate_average(array_b)

Of course, calculating the average could be done even better. I could also use Python's built-in sum function, but that was not the purpose of this example.
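For reference, here is a sketch of what that shorter version could look like, using the built-in sum function the paragraph above mentions, plus the standard library's statistics module as an alternative:

```python
from statistics import mean

array_a = [2, 3, 5, 6]

# Using the built-in sum() together with len():
average_a = sum(array_a) / len(array_a)

# Or using the standard library's statistics.mean():
average_a = mean(array_a)
```

Both variants also avoid the hardcoded length of 4, so they work for lists of any size.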

Recover from errors

Once upon a time, I got a morning email from our Jenkins build server: all the tests had failed. What happened? After some investigation, it turned out that the very first test had put the system into a state where every subsequent call to the REST API returned an error.

The solution was to delete some records in the database. If the test itself had done that, I would have seen that only one test had failed, and the problem would not have looked so bad.

Therefore, it is a very good idea to return the system under test to a non-error state after a failure. That way, the next test case can run without problems. Sometimes a reboot is needed, sometimes database actions; a lot depends on how your system is structured.
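One simple way to guarantee this is to run the cleanup step in a finally block, so it executes whether the test passes or fails. This is only a sketch: the test and cleanup callables stand in for whatever your system needs, such as a hypothetical helper that deletes the offending database records.

```python
def run_test_with_cleanup(test, cleanup):
    """Run a single test case and always run the cleanup step afterwards,
    even when the test raises, so the next test starts from a clean state."""
    try:
        return test()
    finally:
        cleanup()
```

Most test frameworks offer the same idea as a built-in mechanism, for example teardown methods or pytest fixtures, which are usually preferable to rolling your own runner.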

To wait or not to wait

One of the areas where test automation causes the most problems is waiting: waiting for a response from the server, or waiting for a particular field in a user interface to become visible. It often annoys me. There are several ways to wait.

You can hard-code a number of milliseconds or seconds into your code. This is the cause of a lot of test automation problems. Never use it! You once observed that you have to wait 2 seconds, but many sprints later so much has been added to the code that those 2 seconds have become 5 seconds. Then you have to adjust every place where you used those 2 seconds. And on a build server the timing is often different again. That is why you should stay far away from hardcoded waits.

You can use a dynamic wait instead. You poll the server or user interface repeatedly until you get the reply, or until the user interface element you want appears. The first advantage is that you stop wasting time.

With a hardcoded wait, if you wait for 5 seconds but already have what you want after 1 second, you still wait the remaining 4 seconds. Multiply those 4 seconds by 100 test cases and you are at 400 seconds of wasted waiting time.

By polling and moving on as soon as you get your answer, you do not waste that time. You do have to implement a timeout, though: otherwise, if the reply never comes, you may find yourself waiting forever. And that cannot be the intention.
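The polling approach described above can be sketched as a small helper. The condition callable here is an assumption standing in for whatever check your system needs (an HTTP call, a UI lookup); the timeout and interval values are illustrative defaults.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.2):
    """Poll `condition` until it returns a truthy value, then return it.
    Raise TimeoutError if `timeout` seconds pass without success."""
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result  # got the answer: move on immediately
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout} seconds")
        time.sleep(interval)  # small pause between polls
```

Note the use of time.monotonic() for the deadline: unlike wall-clock time, it cannot jump backwards or forwards, which matters when measuring elapsed time.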

Deleting tests is not forbidden

It is not forbidden to take a critical look at all your scripts once in a while. Are they still necessary? Especially if they create problems and cause flaky tests. Personally, I think that flaky tests do not exist: there is always an underlying cause. Sometimes it is the system we are testing that causes the instability, but in a lot of cases the root cause is the test script itself.

If you have scripts that fail frequently, isn't it better to either delete them completely or rewrite them? Check whether the script still adds value; if not, the delete button is a good idea.

Test software is software too

Test software is software, so treat it that way. Keep your code, like production code, in a version control system such as Git. Have your code reviewed by the developers on your team. Refactor the code regularly. Establish good coding standards; you can usually reuse the same standards and techniques the developers use.


One of the things that is very often forgotten is documentation. You need to continuously improve the documentation to benefit from test automation. I am not just talking about documentation of the test software or the user manuals; I also mean the log files generated by the system or by the test itself. With good log files, you often find the root cause of a failing test faster.

Another form of documentation is dashboards, where you can quickly see how the software performs. New features should appear on the board, and old features that are no longer used may be removed from it. After all, they are no longer relevant.


By continuously improving your code, you will benefit from it every day. It feels good to have a test suite you can rely on. And it is not just about improving your test code, but also about improving the other testing tasks involved in test automation.