AcceptanceTesting
Revision as of 05:10, 19 September 2009
== Top 5(ish) reasons why teams fail with acceptance testing ==
- No collaboration
- Focusing on 'how' not on 'what'
- Tests unusable as live documentation
- Acceptance testing is not considered a 'value-adding' activity
- Expecting acceptance tests to be a full regression suite
- Focusing on tools
- Automation code is not considered as important as 'production code' - 'it's only test code' - so normal code rules are not applied and test code is not maintained with love
- objectives of team members not aligned
- no management buy-in
- underestimating the skill required to do this well
Acceptance tests are a specification of a system - in order to be a good specification, they should be exemplars, but they don't need to cover every single edge case (if they are to remain readable/usable as documentation)
You could split out more exhaustive testing into a separate section, separate suite, or (better?) a separate tool.
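The 'exemplars, not exhaustive edge cases' idea can be sketched in plain code. The example below is hypothetical (the discount rule, thresholds, and names are invented for illustration): a handful of representative cases read as a specification of the behaviour, while an exhaustive sweep of edge cases would live in a separate, more technical suite.

```python
# Hypothetical business rule, invented for illustration:
# orders of 100 or more get a 10% discount.
def discounted_total(amount):
    """Return the order total after any volume discount."""
    if amount >= 100:
        return amount * 0.9
    return amount

# Exemplar cases: one typical order, one at the boundary, one above it.
# These read as documentation of the rule; they do not try to enumerate
# every edge case.
EXAMPLES = [
    (50, 50),      # small order: no discount
    (100, 90.0),   # boundary: discount applies
    (200, 180.0),  # large order: discount applies
]

def test_discount_examples():
    for amount, expected in EXAMPLES:
        assert discounted_total(amount) == expected
```

A reader can understand the rule from the three example rows alone, which is the point of keeping the acceptance suite exemplar-sized.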
Don't reject acceptance testing because you don't like the tool - start with the tasks you need to achieve. If it is difficult to automate, it doesn't mean it can be ignored - it is still an 'acceptance test' and it still needs to be run.
Definition of 'acceptance test': whatever you've agreed with the client (not just what can be automated)
Dislike the term 'acceptance testing' - it could mean the definition above (Test Driven Requirements, Example Driven Requirements, etc.), but it is often thought of as equivalent to 'UAT' - this is *not* what is being discussed. 'Specification Workshop' has been successful as a term.
== Business people and testers collaborating ==
Currently in 2nd Sprint
- Results from the earlier approach were not good
- 50k hours in one release
- Siloed teams
- It was 9 months before the software was finished
- now switching to 6 scrum teams (2 designers, 1 tester, 4 developers)
- (but also switching to a new application)
- positive results so far
Collaboration with a sysadmin: the team would 'hand over' the application ...
- lots of arguing during deployment
- team started to ask sysadmin to verify things 'up front'
- then brought sys admin into team
- eventually contributing to prioritisation of stories
Another story
- Waterfall model, siloed
- To help a move to agile, have management showcase the project
- Writing requirements is a collaborative activity, involving the whole team
- Everyone can voice an opinion + help define the acceptance criteria
- Try to automate as much as possible
The way the F15 was designed
- Customers said 'we want a 2.5 mach airplane'
- Designers attempted it, and couldn't (for the right cost)
- Went back and asked 'why?'
- 'We need to get away from Russian planes really quickly'
- 'How would a more agile plane work?'
- 'Yes, yes - that would be fine!'
- Developers know the technical limitations - tell them what the problem is, and maybe they'll come up with a different/better solution - get everyone in the same room to discuss it
If you have a waterfall project with lots of specifications, should you throw them away?
- Yes - but be mindful of the political ramifications - perhaps suggest that they need 'clarification'?
- If you write specifications as tests, you are a 'test analyst', not a 'business analyst' (but you win out in the end! :) )
- backlog items tend to look like 'tasks' not 'stories' - aim for 'things we want the system to do'
- story is 'intentionally vague' - 'a promise/invitation for a conversation'
- important factor is the 'shared understanding'
- acceptance criteria are the examples that are an output of the conversation, and limit the scope (or are the specification)
- Ron Jeffries - the most important thing on a requirements document is the phone number of the person who wrote it
- 3 C's "card", "confirmation", "conversation"
For software vendors, with 1000s of customers - how do you manage 'the customer'?
- eg. iPlayer - the customer is 'the British public', 50M users! - as with TV, use focus groups - and the 'producers' are the product owners
- affordable sessions - just members of the public (who belong to a specified group) - for an hour at a time
How to decide how something should work, vs. whether something is 'in' or 'out'?
- need more than just a single 'truth'
- it is a conversation that needs to happen
- involve wider stakeholders - eg. financial controller, who can estimate a cost/value
"collaboration doesn't happen when people have different objectives"
failure: people only collaborated during the workshop; afterwards:
- BA - deliver specification on a certain date
- PM - deliver a project on a certain date
- Tester - test what is built by a certain date
- no-one had the objective of building a quality product
success: everyone was able to share the same tool (FitNesse)
- everyone was working on the same 'document' - with the same goal
- nothing 'lost in translation'
- but was a different team (perhaps more of a people thing)
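As a sketch of what 'everyone working on the same document' can look like, here is the general shape of a FitNesse decision table (the fixture name and values are illustrative, not from the session): business people, testers, and developers all read and edit the same wiki page, and the table is executed against a fixture class that the developers write.

```
!|DiscountCalculator|
|order amount|discounted total?|
|50          |50               |
|100         |90               |
|200         |180              |
```

Each row is an example the whole team agreed on; the `?` column is what the tool checks against the system, so nothing is 'lost in translation' between the specification and the test.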
In the beginning, there is often a lot of resistance to collaboration
- but sometimes the arguments win through
- and this earns respect
- and then people will approach you beforehand
== Other notes ==
- not easy - because of silo-culture
- the problem of fractional people
- it is an accounting issue - and you should refuse to carry the burden of the cost of a partial resource when you aren't getting the benefit
- should be looking at throughput of delivered value, not the costs of utilisation of resources
- also, risk of resources being pulled, 'unreliable resources'
- don't take a requirements document, then write acceptance tests from that
- 'translation problem': http://www.brokenpicturetelephone.com