How we used fast, reliable automated tests at The Travel Corporation
| Metric | Value |
|---|---|
| Of which UI tests | 1169 |
| Execution time in CI | 20 minutes CPU time; 12 minutes real time with parallelisation |
| Execution time on a typical development laptop | 10 minutes |
| Production releases over the last 6 months | 56 |
Test automation is something that Featurists have been doing for a very long time now, so it was natural for us to begin our project with this approach. What surprised us in the end was how it enabled us to manage such a large project, and make so many releases, with so few development and test staff.
Our focus here was on two main elements of automated testing: end-to-end integration testing and test performance.
We saw the advantages of testing features end-to-end in a realistic runtime architecture: our specifications were largely written in terms of end-user behaviour, not internal functionality. This gave us the scope and confidence to perform large refactorings without requiring corresponding changes to our tests. When a test failed, it was frequently because the application’s user-visible behaviour had changed, not because the implementation had changed.
The other aspect we took seriously was test performance. We’ve been in situations where we’ve waited hours for flaky tests to tell us whether we could deploy, and none of us wanted to be back there again. We were careful to ensure that we had good, fast mocks, and that our tests weren’t doing more than they had to. More significantly, with the amount of UI we were building, we had developed tools like Browser Monkey that gave us an edge in fast, reliable UI testing, and techniques like running tests inside Electron, which let us run the UI and the backend in the same test and debugging environment.
The tests gave us reliable development velocity and a stable codebase. While we can’t claim we had no production issues, the tests did give the team the mental space to concentrate on solving the real problems and challenges TravCorp presented us.