To automate or not to automate

There's a lot of talk about automation at the moment, and a definite push in the recruitment market towards making those skills a requirement for almost any testing job. It's clear why this would be the case - on the face of it, automation is a simple way to perform a large number of tests in a short amount of time, with no obvious cost involved in running them. This puts me in mind of something I read years ago in Edward de Bono's book "Children Solve Problems":

"The usual press-button paradise. You press a button and everything happens."

When young children are set a task (e.g. design a car to go across bumpy ground), a lot of the solutions will involve the phrase "You press a button and...". No thought is given to what goes on behind the button or how it produces the desired effect; only the end result is considered.

There is nothing wrong in principle with that mindset - often the linkage between the button and the end result is perfectly achievable, but there is bound to be a cost in making it happen. This is most definitely the case with automation; there is not a tool in the world that will take a functional spec (if you're lucky enough to have one) and turn it into a set of automated tests. There will always be a need for human intervention: interpretation of the information, application of that information to the system under test, and exploration of the boundaries of what the information actually means. And all of this takes time.

Even if you have an automated test suite, once it has been run there is still the question of interpreting the results - it's great if all the automated tests pass for a new build, but if there is a failure, what does that actually mean? It may be a genuine regression, the result of a deliberate change in the product, or a fault in the test exposed by the new changes. In any case, some level of investigation will be required, and amendments made either to the codebase or to the test set in order to get the test passing.

Plus, in the fast-moving development environments we now inhabit, this is not going to be a one-time activity - even the tiniest change to a product will require new automated tests to be written to cover the new functionality, and most likely changes to existing tests to incorporate the alteration. This carries its own risk; as pointed out in Richard Bradshaw's F.A.R.T. Model, we test to gain knowledge, and if we concentrate on the automation instead of the testing we are no longer gaining new knowledge - the danger is that the focus falls on fixing the tests rather than on actual testing. Simply repeating the same tests over and over is also ineffective in the long run (the so-called Pesticide Paradox).

The type of product under test also influences the suitability of automated testing - a back-office utility that involves a lot of data crunching is liable to be a much better candidate than a website consisting mainly of presentational content. That's not to say there isn't a place for automation in both cases, but the scope of what can be automated effectively is liable to be very different.

An extreme example: the home page of a website has an extra paragraph added at the top, a change which can be made in seconds in the CMS. If the automated tests use position-based paths (such as indexed XPath expressions) to reach other parts of the same page, this tiny change could easily break all of them, as each now needs to account for that extra paragraph (admittedly that would imply the original tests were not very robustly written, but you get the idea). And what of the added paragraph itself? A new test would be required to make sure that it displays as expected (assuming you are automating to that level).
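To make that concrete, here is a minimal sketch using Selenium WebDriver in Python. The URL, page structure and attribute names are all hypothetical, invented purely to illustrate the point: the first locator breaks as soon as a paragraph is inserted above the link, while the second survives it.

```python
# Minimal sketch of locator fragility, assuming Selenium WebDriver.
# The URL, page structure and attribute names below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/")

# Brittle: a position-based XPath that counts elements from the top of
# the page. Adding one paragraph above the link shifts every index,
# and the test fails even though the link itself is unchanged.
link = driver.find_element(By.XPATH, "/html/body/div[1]/p[2]/a")

# More robust: anchor the locator to a stable attribute, so content
# added elsewhere on the page has no effect on it.
link = driver.find_element(By.CSS_SELECTOR, "a[data-test='contact-link']")

link.click()
driver.quit()
```

Stable, test-specific attributes like data-test are a common way to decouple tests from page layout, though they do require cooperation from the developers.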

So there's a balance to be struck. On one side, the ease and convenience of having a set of automated tests that can be run on every build without the need for test resource (assuming the devs are able to diagnose problems from the results). On the other, the overhead of setting up and maintaining the automated test suite and of interpreting the results of any test run. It may be that the short-term benefit is outweighed by the long-term cost; in some cases the best automation may be no automation at all.

Comments

  1. Does automation HAVE to equal automated tests? What about using automation as a tool to help generate data, for example? Automation definitely isn't suitable in all circumstances, but there are plenty of repetitive, grindy processes where automation can be of great help without having to result in a pass/fail checkbox.

    Replies
    1. That's a good point - I am focussing here on automated test suites. As you say, automation can take many forms - I regularly use a link checker and have recently been building a tool to take some of the manual labour out of the stuff I do (blog post to follow!). But whatever the tool, there still has to be a balance between the time saved and the maintenance overhead.

  2. Could you provide more information on how we can ease out the automation process?

    Replies
    1. By "ease out" I assume you already have some test automation in place? In my experience, the best way to change anything is to back up your argument with evidence - look at how long the automated tests take to run, how many bugs it finds, and (particularly) how much time is being spent maintaining the automation suite. You can then compare that with the time it would take to perform the same tasks manually - there are very few project managers, who, when confronted with evidence of a time-saving opportunity, will turn it down. Also be open to the idea of a mixed approach - maybe some of the automation suite would be better done manually, but there are likely some jobs where automation will be cost-effective (particularly where the tests are already established). If you gather enough evidence around how the current testing is working it should be obvious which areas actually benefit from automation.

