I first came across Behaviour Driven Development thanks to Mauro Talevi, my first hire on the team that I set up at HSBC back in 2008. He is a major contributor to the JBehave project, one of the leading BDD frameworks. I hope I'm not going to embarrass him too much by saying that he was an invaluable influence on the project.
I'm not going to talk about the technical implementation, as that is well covered elsewhere; instead, in this post I will talk about the general principles of BDD and the practical experience of working with it.
The basic idea is simple. Any story or feature being implemented in a project should have automated integration tests that prove the work has been completed successfully. It is important that the tests be comprehensible to the stakeholders requesting the story or feature, so that they can confirm it is complete. This provides a very clear goal for the developers and builds up a suite of regression tests that gives healthy confidence in the application. The tests are defined in terms of business scenarios that represent various paths through the business process.
BDD is complementary to Unit Testing and is in no way a replacement.
The key to building an effective suite of BDD test scenarios is the derivation of a clearly understood Domain Specific Language (DSL) for the tests. This language bridges the gap between the business concerns and the services and components created by the development team. In one sense this should not be a difficult task, as the solution domain being developed should reflect the problem domain defined by the customer. Thus the effort needed to bridge the gap between the DSL and the technical solution is relatively small.
We had a lot of difficulty at first working with the stakeholders to agree on and define the DSL, as they were used to more traditional development practices, and we had to educate them to see that the scenarios they were producing were testable. What they had to understand was that the scenarios were the contract between the development team and the project stakeholders. If a scenario signed off by the stakeholders and the analysts was incorrect, then the work produced by the development team would be incorrect too. The scenarios provided a very important structure to the Scrum methodology we used; the analysis producing the scenarios was typically one to two sprints ahead of the implementation work.
JBehave provides a set of tools for building the elements of the DSL and for running the scenarios defined in it. Scenarios in JBehave are written in simplified plain English (or the language of choice for the organisation). A scenario is structured as a list of sentences defining the steps of the scenario. These sentences all begin with one of three keywords:
- Given: Defines the pre-conditions for the following steps of the scenario.
- When: Defines the activity that the system under test carries out as part of the test.
- Then: Defines the assertions that the test should check.
These sentences are the reusable elements of your BDD scenarios; the Givens, Whens and Thens can be put together to build different scenarios. The sentences are of course parameterised so that different values can be used in different tests.
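To make this concrete, here is the shape of a typical scenario. The retail banking domain, account types and amounts are invented for this sketch, not taken from the HSBC project:

```
Scenario: Transfer between a customer's own accounts

Given a current account with a balance of 500.00
Given a savings account with a balance of 100.00
When the customer transfers 200.00 from the current account to the savings account
Then the current account balance is 300.00
Then the savings account balance is 300.00
```

Each sentence maps onto an annotated Java method, with the $ placeholders supplying the parameters. The steps class below is a minimal sketch against JBehave's annotation API (JBehave 3 package names); the Account class is assumed:

```java
import java.util.HashMap;
import java.util.Map;

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

import static org.junit.Assert.assertEquals;

public class AccountSteps {

    // Account is a hypothetical domain class for this sketch.
    private final Map<String, Account> accounts = new HashMap<String, Account>();

    @Given("a $type account with a balance of $balance")
    public void givenAnAccount(String type, double balance) {
        accounts.put(type, new Account(balance));
    }

    @When("the customer transfers $amount from the $from account to the $to account")
    public void whenTheCustomerTransfers(double amount, String from, String to) {
        accounts.get(from).withdraw(amount);
        accounts.get(to).deposit(amount);
    }

    @Then("the $type account balance is $balance")
    public void thenTheBalanceIs(String type, double balance) {
        assertEquals(balance, accounts.get(type).getBalance(), 0.001);
    }
}
```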
What proved very important is the granularity of the concerns addressed by the DSL. Too coarse and you have very short (but deep) scenarios made of elements which are almost impossible to reuse; too fine and the scenarios balloon in size and become unmaintainable because of the sheer volume of change required as the system evolves.
In the end, the appropriate level emerged organically on the project at HSBC: we paid close attention to the tests and worked hard to refactor and reuse the common elements that emerged.
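To make the trade-off concrete, here is an invented contrast (the domain details are hypothetical, not from the project):

```
Too coarse: one opaque step per scenario, with nothing to reuse.

Given the standard two-account customer set up with a completed transfer
Then the whole transfer is correct

Too fine: every field is its own step, so every model change touches dozens of scenarios.

Given a customer
Given the customer has a current account
Given the current account has a balance of 500.00
Given the current account has a currency of GBP
```

The scenario shown earlier sits between the two: each sentence states one business-meaningful fact or action, which is roughly the level that worked for us.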
Another critical element is the data representation. When we started, we bootstrapped the BDD test data management by using XStream to dump the objects held in memory and compare them with saved text files. This was very helpful in the early stages but became an increasing burden as the suite of tests broadened and deepened. The problem was that the test data was acutely sensitive to the domain data model, and as that evolved we found ourselves expending major effort to correct tests that were not testing the changes but broke anyway because we were dumping the whole domain model.
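The bootstrap approach amounted to something like the sketch below. XStream's toXML call is the real API, but the domain object and snapshot file are invented for illustration:

```java
import static org.junit.Assert.assertEquals;

import java.io.File;

import org.apache.commons.io.FileUtils;

import com.thoughtworks.xstream.XStream;

public class SnapshotAssert {

    // Serialises an in-memory domain object to XML and diffs it against a saved
    // snapshot file. Any change to the domain model changes this XML, even when
    // the behaviour under test is untouched -- which is exactly the maintenance
    // problem described above.
    public static void assertMatchesSnapshot(Object domainObject, File snapshot) throws Exception {
        String actual = new XStream().toXML(domainObject);
        String expected = FileUtils.readFileToString(snapshot);
        assertEquals(expected, actual);
    }
}
```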
The solution was to move to a factory- and builder-based mechanism. All tests started from the same basic data sets generated by factories, then used builders to alter them to match each test's requirements. When we came to make assertions, we found it was best to focus on the data elements required by the specific test and allow other tests to check their own data elements. This dramatically reduced the amount of data that needed incidental maintenance.
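Here is a minimal sketch of the pattern, with all the names invented: the factory supplies the shared baseline data, the builder adjusts only what an individual test cares about, and assertions touch only the fields that test owns:

```java
// Factory: every test starts from the same well-known baseline data.
public class CustomerFactory {
    public static Customer standardCustomer() {
        // Customer is a hypothetical domain class for this sketch.
        return new Customer("Jane Doe", "GBP", 500.00);
    }
}

// Builder: each test alters only the fields that matter to it.
public class CustomerBuilder {

    private final Customer customer = CustomerFactory.standardCustomer();

    public CustomerBuilder withCurrency(String currency) {
        customer.setCurrency(currency);
        return this;
    }

    public CustomerBuilder withBalance(double balance) {
        customer.setBalance(balance);
        return this;
    }

    public Customer build() {
        return customer;
    }
}
```

A test needing a dollar-denominated customer then writes new CustomerBuilder().withCurrency("USD").build() and asserts only on the currency, leaving the balance to the tests that own it.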
As we worked, we found that after every major release it was worth spending some time consolidating the BDD scenarios. The earlier scenarios for new functionality tended to be supplanted by scenarios testing more elaborate versions of that functionality. We worked to actively identify situations where this occurred and to reduce the volume of tests while keeping their coverage.
JBehave integrated well with Selenium and proved very effective at automating web UI testing, but this rapidly exposed shortcomings in Selenium's implementation. By the time I left, work was underway to use the WebDriver integration to address these.
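A web step looked roughly like the sketch below. The WebDriver calls are the standard Selenium API, but the URL, element IDs and step wording are invented:

```java
import static org.junit.Assert.assertEquals;

import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class TransferWebSteps {

    private final WebDriver driver = new FirefoxDriver();

    @When("the customer submits a transfer of $amount on the transfer page")
    public void whenTheCustomerSubmitsATransfer(String amount) {
        driver.get("http://localhost:8080/transfer"); // hypothetical application URL
        driver.findElement(By.id("amount")).sendKeys(amount);
        driver.findElement(By.id("submit")).click();
    }

    @Then("the confirmation page shows the amount $amount")
    public void thenTheConfirmationShowsTheAmount(String amount) {
        assertEquals(amount, driver.findElement(By.id("confirmed-amount")).getText());
    }
}
```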
By working to continually polish the DSL and the data representation, we started seeing great reuse and real efficiencies in developing provably tested new functionality. Where we originally spent up to 50% of the development time on building the BDD scenarios, we saw that fall to as little as 5%.
Where this really began to pay dividends was in giving the test team the tools they needed to build out a much wider suite of tests, and as they developed them we ended up with probably the most thoroughly tested system I have ever worked on. The regression tests ran on our continuous build server, producing near real-time feedback as we developed, which meant that problems were fixed as they occurred rather than after weeks of formal testing. A much smaller testing team was therefore needed to test the system thoroughly at release; fewer regressions were found, and we were much more confident that the acceptance testing cycle would run on budget.
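On the build server the scenarios ran like any other JUnit suite. The sketch below shows one common way to wire JBehave into JUnit (JBehave 3-style API; the story file pattern and steps class are assumptions), so the continuous build picks the scenarios up automatically:

```java
import static org.jbehave.core.io.CodeLocations.codeLocationFromClass;

import java.util.List;

import org.jbehave.core.configuration.Configuration;
import org.jbehave.core.configuration.MostUsefulConfiguration;
import org.jbehave.core.io.StoryFinder;
import org.jbehave.core.junit.JUnitStories;
import org.jbehave.core.steps.InjectableStepsFactory;
import org.jbehave.core.steps.InstanceStepsFactory;

public class AllScenarios extends JUnitStories {

    @Override
    public Configuration configuration() {
        return new MostUsefulConfiguration();
    }

    @Override
    public InjectableStepsFactory stepsFactory() {
        // AccountSteps is the steps class sketched earlier.
        return new InstanceStepsFactory(configuration(), new AccountSteps());
    }

    @Override
    protected List<String> storyPaths() {
        // Find every scenario file on the classpath; the naming pattern is an assumption.
        return new StoryFinder().findPaths(
                codeLocationFromClass(this.getClass()), "**/*.story", "");
    }
}
```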
BDD is a key technique that I will use on future projects. It provides real benefits for the work invested and contributes materially to successful delivery.