Beyond Given-When-Then

From CitconWiki
Revision as of 22:47, 22 February 2014 by Nigel.charman (talk | contribs)

Facilitator - @nigel_charman

When discussing BDD/SBE scenarios, it is important to consider Context - Action - Outcome. Using Context Questioning and Outcome Questioning, we expand our understanding of the desired behaviour.

However, the Given-When-Then format is constraining and often results in lengthy scenario descriptions that bear little relation to how customers would explain the scenario. Nigel believes we should move closer to how our customers describe the scenarios.

For example, on one project the scenarios were described on a whiteboard using Venn diagrams. When documented using the Given-When-Then format, the BDD scenarios ran to around 10-15 lines, which took much longer to comprehend than the Venn diagrams. In some cases, the BAs created additional documents to describe the scenarios - an anti-pattern when the goal is a Living Documentation system that acts as a single source of truth.

(Another example was tenpin bowling scores, where a visualisation of a conventional scoresheet is much quicker to read than a Given-When-Then description.)
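To make the bowling example concrete, the scoring rules that a conventional scoresheet visualises can be sketched in a few lines. This is a generic illustration of tenpin scoring, not code from the session; the function name and input format are invented for this example:

```python
def bowling_score(rolls):
    """Score a complete tenpin game from a flat list of pins knocked down."""
    score = 0
    i = 0
    for frame in range(10):
        if rolls[i] == 10:                    # strike: 10 + next two rolls
            score += 10 + rolls[i + 1] + rolls[i + 2]
            i += 1
        elif rolls[i] + rolls[i + 1] == 10:   # spare: 10 + next roll
            score += 10 + rolls[i + 2]
            i += 2
        else:                                 # open frame: just the pin count
            score += rolls[i] + rolls[i + 1]
            i += 2
    return score

print(bowling_score([10] * 12))  # → 300 (a perfect game)
```

Describing each strike and spare bonus in Given-When-Then prose takes far longer to read than the familiar scoresheet layout that encodes the same rules.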

Nigel prototyped a couple of solutions using Concordion. The first embedded a copy of the Venn diagram in the specification, alongside a short table containing the values. The second used an SVG representation of the Venn diagram to set up the context. By applying Concordion instrumentation to the SVG tags, the fixture code was called with values directly from the Venn diagram. In this scenario, the outcome was asserted against textual values, but there is no reason that the outcome couldn't be checked against a visualisation as well.
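Concordion itself is a Java library that instruments HTML/SVG attributes, but the idea of driving fixture code from values embedded in a diagram can be sketched with Python's standard XML parser. The attribute names (data-set, data-expected) and the toy fixture below are invented for this illustration and are not Concordion's actual instrumentation syntax:

```python
import xml.etree.ElementTree as ET

# A toy SVG "Venn diagram" carrying both context values and the expected outcome.
SVG = """
<svg xmlns="http://www.w3.org/2000/svg">
  <circle data-set="groupA" r="40">10</circle>
  <circle data-set="groupB" r="40">15</circle>
  <text data-expected="overlap">5</text>
</svg>
"""

def run_spec(svg_text, fixture):
    """Read context values from the SVG, call the fixture, check the outcome."""
    root = ET.fromstring(svg_text)
    context = {el.get("data-set"): int(el.text)
               for el in root.iter() if el.get("data-set")}
    expected = {el.get("data-expected"): int(el.text)
                for el in root.iter() if el.get("data-expected")}
    result = fixture(**context)
    return result == expected["overlap"]

# An invented fixture standing in for the system under test.
print(run_spec(SVG, lambda groupA, groupB: groupB - groupA))  # → True
```

The diagram stays the single source of truth: the values the reader sees are the same values the test executes against.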

Related to this is @natpryce's discussion of using Approval Testing with SBE, where he describes that "The final output for approval does not have to be text. For a numerical function, a test can render a graphical visualisation so one can more easily see calculation errors, such as undesirable discontinuities, than when results are displayed in tabular form."

The comments on Nat's article discuss using property-based testing tools (e.g. QuickCheck, ScalaCheck) as generators of the input values for the approval testing. With BDD/SBE, the Acceptance Criteria would be used to define the properties. (The Specs2 library supports ScalaCheck properties.)
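A minimal sketch of the idea, in the spirit of QuickCheck/ScalaCheck but hand-rolled with the Python standard library: random inputs are generated and an acceptance criterion is checked as a property over all of them. The discount rule and all names here are invented for illustration; a real project would use a property-based testing tool rather than this loop:

```python
import random

def apply_discount(total):
    """System under test: 10% off orders of 100 or more (illustrative rule)."""
    return total * 0.9 if total >= 100 else total

def check_property(trials=1000, seed=42):
    """Acceptance criterion as a property: a discount never raises the price."""
    rng = random.Random(seed)
    for _ in range(trials):
        total = rng.uniform(0, 500)
        assert apply_discount(total) <= total, f"discount raised price for {total}"
    return True

print(check_property())  # → True
```

Combined with approval testing, each generated input's rendered output would be compared against a previously approved result rather than asserted inline.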

The attendees also discussed using mind maps to provide a high-level overview of the test scenarios. One attendee described an in-house tool that visualised the scenario results on a mind map. Tests could be triggered on any sub-branch of the mind map.

Related to this is @katrina_tester's work on using mind maps to visualise what testing has occurred, not just the results of automation.

Another attendee described documenting both the automated and manual scenarios using FitNesse. The manual scenarios are differentiated so that they can be picked out for manual testing.

Another topic discussed was how to get BAs to write scenarios in a format that can be automated. This is often difficult when starting to use BDD/SBE, but once a library of steps is developed there should be opportunities for re-use, or for creating new steps in a similar format to existing ones. Cucumber Pro is attempting to make it easy for the team to collaborate and re-use steps.

Some teams have their BAs commit directly to version control. One attendee's team have their BAs describe scenarios in JIRA using Behave for JIRA. Another team are using Confluence. The danger with these tools is that the scenarios are hard to version, branch and merge. The preference is to keep the scenarios in source control alongside the application code.