It's not all about the acceptance tests
A few of my colleagues recently posted their opinions about acceptance tests, which tied in nicely with a discussion about acceptance testing at the Alt.NET conference in London.
For the sake of argument I will assume that when we refer to acceptance tests we are talking about tests at the GUI level, driven automatically by a tool - usually Selenium, or perhaps something like White if it is a client-side application. I'm not sure this definition is 100% accurate, but it feels to me that this is what we mean when we describe something as an acceptance test.
The discussion at the Alt.NET conference centered on the value of having these acceptance tests when they often take a long time to run, which drags out the time it takes to build our application. These are some of the same problems that Sarah points out, along with the cost of maintaining these tests.
Some of the responses pointed out that we should not rely too heavily on acceptance tests to confirm the correctness of our system, especially when we can often do this with functional or integration tests which do not have to hit the UI and therefore run much more quickly.
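To illustrate what I mean by testing below the UI, here's a minimal sketch in Java with JUnit. The AuthenticationService class is entirely hypothetical - it just stands in for whatever domain code a login screen would eventually call into - but it shows how the same behaviour can be verified without starting a browser:

```java
import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class AuthenticationServiceTest {

    // A made-up domain service, defined inline so the sketch is self-contained.
    static class AuthenticationService {
        private final Map<String, String> passwordsByUser = new HashMap<>();

        void register(String username, String password) {
            passwordsByUser.put(username, password);
        }

        boolean authenticate(String username, String password) {
            return password.equals(passwordsByUser.get(username));
        }
    }

    @Test
    void validCredentialsAreAccepted() {
        AuthenticationService auth = new AuthenticationService();
        auth.register("testuser", "secret");

        // Exercises the same behaviour the login screen relies on,
        // without rendering any UI - so it runs in milliseconds.
        assertTrue(auth.authenticate("testuser", "secret"));
        assertFalse(auth.authenticate("testuser", "wrong-password"));
    }
}
```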
We could still have acceptance tests, but maybe they would only be necessary for critical scenarios where the system would become unusable if they did not pass - the login screen, for example.
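For a critical scenario like the login screen, a GUI-level acceptance test driven by Selenium might look something like the sketch below. This is only an illustration - the URL, element ids and credentials are invented - but it shows the kind of test we're talking about and why it is so much slower, since it has to drive a real browser:

```java
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginAcceptanceTest {

    @Test
    void userCanLogInWithValidCredentials() {
        // Starts a real browser session - this is where most of the time goes.
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical application URL and element ids, for illustration only.
            driver.get("http://localhost:8080/login");
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();

            // The test passes if the post-login page shows a welcome message.
            assertTrue(driver.findElement(By.id("welcomeMessage")).isDisplayed());
        } finally {
            driver.quit();
        }
    }
}
```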
I’m not sure what the solution is - I haven’t actually worked on a project which had the problems that Phillip and Sarah have experienced but I’ve heard the horror stories from speaking to some of my colleagues.
Sarah made one particular comment which I can relate to:
In fact, when I finish developing my story, the integration/persistence/unit tests already cover all acceptance criteria.
One of the projects I worked on had the concept of acceptance criteria for every story, but we didn't have acceptance tests as such. There were certainly unit tests and functional tests covering each story, but that was the extent of our automated testing.
There were a couple of regression issues which we might not have had with acceptance tests in place, but overall I felt the approach worked really well for us and the team consistently delivered. At one stage we did try to introduce some White-driven, GUI-level tests, but they never made it into the continuous integration build while I was on the project.
I appreciate that this is just one scenario and that each project is different, but the lesson I took from this was that we should do what works best in a given situation and not be too dogmatic in our approach.
Phillip actually has a different view and (probably quoting him out of context) believes that having automated acceptance tests is vital:
My main concern is that I think people value Acceptance Testing too much. Don’t get me wrong: automated acceptance tests are essential to a healthy agile process, the problem is what we do with them after we deliver a story.
I do agree that we often place too much emphasis on acceptance testing but I’m not convinced that we need automated acceptance tests to have a 'healthy agile process'.
As I’ve been writing this the terminology seems a bit strange to me. Is it actually correct to say that an 'acceptance test' always implies an automated test that is run from the GUI?
Or can an 'acceptance test' be a collection of functional and integration tests which confirm that a piece of business functionality works as expected?