Automated acceptance testing for customisable off-the-shelf (OTS) products is useful, but the approach should differ from that for bespoke-built products in order to get the best bang for buck. This is particularly true for OTS products which provide most of the external interface (e.g. a web browser interface).
Example: automated testing on a greenfields product
The product we were building was a web application to sell business insurance online. It was basically designed as a wizard with many input fields requiring validation. At the midpoint, the user was given a quote, and at the end, the user was invited to sign up for the insurance (and hence become a customer).
In terms of architecture, the application validated user input in two places:
- on the browser to display validation messages to the user before submitting data to the server
- on the server (via Rhino) to make sure that no invalid data was passed down to the backend service. This was done for security reasons, in case someone bypassed the browser application and used HTTP directly.
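A minimal sketch of this shared approach (the rule and names are illustrative, not from the actual codebase): a single validation function written in JavaScript so that the same logic can run in the browser before submission and again on the server under Rhino.

```javascript
// Illustrative only: a shared validation rule that can run both in the
// browser and on the server (e.g. under Rhino), so both layers apply
// exactly the same logic.
function validatePostcode(value) {
  // Hypothetical rule: postcodes are exactly four digits.
  if (!/^\d{4}$/.test(value)) {
    return { valid: false, message: "Postcode must be four digits" };
  }
  return { valid: true, message: "" };
}

// Browser: called before submit, to show the message to the user.
// Server: called again on the incoming request, in case someone
// bypassed the browser application and posted over HTTP directly.
```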
We decided that there was little value in testing all validation cases through the browser as:
- browser tests are slow to run
- unit tests are much more precise and hence any error can be found and fixed more quickly
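As a sketch of what "precise" means here (the rule, threshold, and tiny test runner below are all made up for illustration): a plain unit test of a single validation rule runs in milliseconds, with no browser or server round-trip, and a failure points straight at the offending rule.

```javascript
// Illustrative unit test of one validation rule -- no browser involved.
function validateSumInsured(amount) {
  // Hypothetical rule: sum insured must be a positive number
  // no greater than $10,000,000.
  return typeof amount === "number" && amount > 0 && amount <= 10000000;
}

// A tiny hand-rolled check; a real suite would use a test framework,
// but the principle is the same.
function expect(name, actual, expected) {
  if (actual !== expected) {
    throw new Error("FAILED: " + name);
  }
}

expect("accepts a sensible amount", validateSumInsured(50000), true);
expect("rejects zero", validateSumInsured(0), false);
expect("rejects an absurd amount", validateSumInsured(99999999), false);
```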
The other areas of functionality (e.g. referrals, denials) followed the same pattern as for validation. Hence, the pyramid above was representative of the whole test suite.
Example: automated testing on an OTS product
Borrowing again from the insurance space, the product this time was a claims processing system which handled things like:
- claims lodgement
- communications with suppliers / repairers
- general ledger updates
This time, the company decided to go for the “buy” option and selected a product called Guidewire ClaimCenter. ClaimCenter provides a default configuration which covers the above activities (claims lodgement, etc.) using a browser interface for human interactions.
ClaimCenter can be customised in 3 ways:
- via XML to tweak the browser interfaces
- via GOSU scripts (a bit like Groovy)
- via plugins written in Java for integration to backend systems (not supported out-of-the-box)
To support the target claims process (across all insurance brands), a significant amount of customisation was necessary.
About 3 months into the project, I sat in on a meeting to discuss automated testing. In essence, the conversation pitted two camps against each other:
- the Guidewire consultant, who argued that testing through the browser mostly re-tested Guidewire's own code, and that we should test our customisation code directly
- the automated testers, who argued that only browser-based tests proved the system worked end to end, the way users would experience it
At the time, the conversation ended in a stalemate, and automated testing continued to be done through the browser. I’ll be honest at this point and admit I was in the “automated tester” camp, though I regret taking that point of view now.
In retrospect, the Guidewire consultant had a good point. For most of the acceptance tests, we could have tested just the customisation code (XML, GOSU, Java) in isolation. By testing the UI, we were duplicating the testing already done internally by Guidewire. Had we tested more pragmatically, our tests plus the internal Guidewire tests would have yielded a pyramid very similar to the bespoke example above:
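To make the consultant's point concrete (this is entirely illustrative — real ClaimCenter customisation would be Gosu or Java against Guidewire's APIs, and the rule and queue names below are invented): a customised claims-routing rule written as a plain function can be tested directly, leaving the vendor's own suite to cover the screens that display its result.

```javascript
// Illustrative only: stand-in for customisation logic written on top
// of the OTS product (in reality, Gosu or Java). It decides which
// work queue a newly lodged claim should go to.
function routeClaim(claim) {
  if (claim.estimatedCost > 20000) {
    return "senior-assessor"; // hypothetical threshold and queue names
  }
  if (claim.type === "glass") {
    return "fast-track";
  }
  return "standard";
}

// Testing this function directly exercises *our* code; driving it
// through the browser would mostly re-test the vendor's screens.
```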
As it turned out, we ended up with a test suite which took many hours to run and hence couldn’t be run before a developer committed their code.
There are two cases I can think of where a project may want to test at a higher level (eg via the browser):
- as a deployment test - e.g. to make sure no one has deleted a vital piece of config
- when defects are found in the OTS product itself.
How can we avoid vendor lock-in?
One advantage of testing at a higher level (e.g. via a browser) is that the implementation beneath can change without affecting the test code. One might suggest that this is a good way of avoiding vendor lock-in. I am of the opinion that it isn’t, for two reasons:
- the trade-off in time generally isn’t worth it.
- the interface design is often heavily influenced by the defaults of the OTS product; that is, when the OTS product changes, so does the interface.
What is less likely to change (when a vendor changes) are the business rules and processes. What we can do is specify these business rules and then link the automated tests to them. A great technique for doing this is Specification by Example. For the OTS case above, the automated testers were using a tool called Concordion, which is capable of this very thing.
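As a sketch of the idea (Concordion itself binds HTML specifications to Java fixtures; this table-driven JavaScript version, with an invented excess rule, just shows the principle): the business rule is written down as a table of examples, and the automated test simply replays that table against the implementation.

```javascript
// Illustrative sketch of Specification by Example: the examples below
// ARE the specification, expressed as data rather than prose.
var excessSpec = [
  // Hypothetical business rule: policy excess depends on driver age.
  { driverAge: 19, expectedExcess: 1200 },
  { driverAge: 25, expectedExcess: 600 },
  { driverAge: 40, expectedExcess: 600 }
];

// The implementation under test -- in the OTS case this would be the
// customisation code; this version is made up for the example.
function calculateExcess(driverAge) {
  return driverAge < 21 ? 1200 : 600;
}

// Replay the specification against the implementation.
excessSpec.forEach(function (example) {
  if (calculateExcess(example.driverAge) !== example.expectedExcess) {
    throw new Error("Spec failed for driver age " + example.driverAge);
  }
});
```

Because the tests are linked to the business rules rather than to the vendor's screens, they stand a better chance of surviving a change of vendor than browser scripts do.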
Of course if business processes are changed to fit an OTS product, then it is almost impossible to avoid vendor lock-in.
I recommend using Specification by Example when writing acceptance tests in general, regardless of whether a product is built bespoke or purchased off-the-shelf and then customised.
I also recommend using the ideas in the Test Pyramid to write tests at the appropriate level. This will help keep the run-time of the automated test suite down, and allow bugs to be pinpointed quickly. There is no point in duplicating testing done by a vendor unless there are holes in that test suite — e.g. defects in the OTS product.