Long Term Value of Acceptance Tests
more detail on the case studies presented in this session at http://specificationbyexample.com
Gojko -> Does agile work? Where's the data?
- Defect detection rate
- Number of defects caught internally vs the number of defects found by customers
- Above 80% is pretty good
- One team with a 99% rate has had no bug in production in over 5 years
- Example of moving from C++ to Java
- First attempt didn't complete
- Second attempt blew up in production, reverted
- Third attempt was built using detailed acceptance testing, a 2-year project
- Example of a student loan company in the US
- 2008 bond sale fell through, no money to operate (financial crisis)
- Option to completely change the way that they get money (private investors, etc)
- Because they had a good suite of acceptance tests, they were able to review their existing business workflow and go live within 2 months, able to stay in business
These are all good examples of the benefits of acceptance tests.
Regression testing isn't really the final goal of acceptance tests. If that's all you aim for, you won't necessarily get the full benefit of what can be achieved.
uSwitch reported a very good cultural change; the company became much more collaborative.
hugs -> gojko : a lot of people get stuck on the UI-test angle of acceptance tests, and so miss the business value of the tests
gojko -> Personas can be very helpful
gojko -> acceptance tests written in terms of the UI tend to have long term maintenance problems
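The maintenance problem Gojko mentions can be illustrated with a sketch (hypothetical domain, not an example from the session): a test phrased in business terms survives a UI rewrite, while one scripted against buttons and fields does not.

```python
# UI-coupled style (brittle; breaks when the page layout changes):
#   click("#new-account"); type("#amount", "100"); click("#submit")
#   assert page.find("#balance").text == "100"

# Business-level style: the test talks to the domain, so it survives
# UI rewrites. Account and its rule are hypothetical illustrations.
class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount

def test_deposit_increases_balance():
    account = Account()
    account.deposit(100)
    assert account.balance == 100

test_deposit_increases_balance()
print("ok")
```

The business-level version states the benefit ("a deposit increases the balance") rather than the clicks, which is what keeps it maintainable long-term.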
antony -> what people label something isn't necessarily what it is. Some people might define "acceptance tests" in terms of the user experience of clicking UI elements, not really a test that defines the business benefit. It's a labelling issue.
hugs -> the key word is "acceptance". It's monetary :-) if the customer clicks a button and it turns red, will they pay the invoice?
gojko -> there is no generic term; it depends on the company what term is used
gojko -> most teams use automated acceptance tests, with exploratory testing on top. No one really uses manual acceptance testing (at least, in the cases he's studied). In some cases, a company didn't automate anything for the first 3 months of a project. Paul Gerrard argues that what you really need to do is define the process by which a team is going to validate something.
gojko -> the people who get the best long-term benefit are those who put business knowledge into the tests, which can then be used as a resource for future development. These things are not really tests any more; this is more a system of documenting business processes.
gojko -> executable specifications mean that we know what the system does, and that this knowledge is correct for what the system does right now. Getting business processes documented and automatically tested against the current system is the real benefit.
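One way to read "executable specification" concretely (a sketch with hypothetical names, not from the session): business examples kept as readable data and run against the current implementation, so the documentation fails visibly if it ever drifts out of date.

```python
# Sketch: a specification expressed as business examples, executed
# against the current implementation. The rule and the examples below
# are hypothetical, for illustration only.

def free_delivery(order_total, is_vip):
    # Hypothetical business rule: VIPs always get free delivery;
    # everyone else qualifies from an order total of 100 upwards.
    return is_vip or order_total >= 100

# The specification: examples a business user could read and own.
EXAMPLES = [
    # (order total, VIP?, free delivery expected)
    (99,  False, False),
    (100, False, True),
    (10,  True,  True),
]

for total, vip, expected in EXAMPLES:
    assert free_delivery(total, vip) == expected

print("specification holds")
```

Because the examples run against the live code, they document current behaviour rather than intended behaviour, which is the distinction Gojko draws.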
gojko -> the people who got the best benefits spent a lot of time organising and improving their specifications, refactoring them, etc.
gojko -> most of the people who ended up with systems like these got there by chance. What we need is a more deliberate way of approaching this
antony -> example of the Eclipse workflow system (please update)
question -> language?
gojko -> within a single business context, you need a ubiquitous shared language, used by both business users and the executable acceptance tests
antony -> writing the tests helped a company decide on consistent terms for the language.
antony -> using metaphor to express different terms can provide a boundary around technical content
gojko -> that's a model. The language used in your specifications drives your model, e.g. if somebody says "customer", the developer is likely to create a Customer class. If you find it very difficult to express something, then the code underneath the model is likely to be very hard and complex.
gojko -> this is a sign that the language of the specification is unclear and needs to be addressed. This makes it easier to argue with the business.
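Gojko's "language drives the model" point can be sketched like this (hypothetical names, not from the session): the noun in the specification becomes a class and the verb becomes a method, so a muddled business term produces muddled code.

```python
# Sketch: the specification sentence "a customer places an order"
# maps directly onto the model. The noun "customer" becomes a class,
# the verb "places an order" becomes a method.
class Customer:
    def __init__(self, name):
        self.name = name
        self.orders = []

    def place_order(self, item):
        self.orders.append(item)

# Exercising the sentence from the specification:
customer = Customer("Alice")
customer.place_order("book")
assert customer.orders == ["book"]
print("model matches the language")
```

If the business can't agree what "customer" means, no clean version of this class can exist, which is the signal Gojko describes.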
antony -> half the problem is trying to apply a single identifier. A better option is to use tags, e.g. "spec", "test", perhaps applying different tags to the same thing in different contexts.
antony -> example of a company delivering code with its executable specifications
hugs -> Selenium proved the value of tests (dev: "now I can charge for tests!"). The UI feature demonstrably worked. The difference is between a console showing asterisks for passing tests and now being able to show browsers doing the real work.
question -> should we train QAs to do business analysis?
gojko -> where is the bottleneck in the process? e.g. a VP of marketing knew the process but was able to get QA to do some initial analysis. Deciding what to do is all very context-driven. Context matters, but you very much need collaboration.
antony -> key word is "collaboration"
gojko -> cross-functional is good. Example of a team that got rid of the "tester" title; the testing staff were retained, but the focus shifted to getting devs to think more about testing themselves.
gojko -> one of the main results of Scrum is to develop cross-functional teams
gojko -> example of Flickr: once a developer commits something, it goes live! This meant that developers became much more conscious about what they commit.
gojko -> trust is important. Initially do all tests through Selenium; later on, with more trust, start doing tests at a lower layer. Initially there's a UAT pass before every release; however, if UAT is consistently showing no problems, trust can build up.