https://citconf.com/wiki/api.php?action=feedcontributions&user=Nigel.charman&feedformat=atomCitconWiki - User contributions [en]2024-03-28T16:08:31ZUser contributionsMediaWiki 1.35.11https://citconf.com/wiki/index.php?title=Beyond_Given-When-Then&diff=15632Beyond Given-When-Then2014-02-23T05:47:57Z<p>Nigel.charman: </p>
<hr />
<div>Facilitator - [http://twitter.com/nigel_charman @nigel_charman]<br />
<br />
When discussing [http://en.wikipedia.org/wiki/Behavior-driven_development BDD]/[http://en.wikipedia.org/wiki/Specification_by_example SBE] scenarios, it is important to consider Context - Action - Outcome. Using [http://lizkeogh.com/2011/09/22/conversational-patterns-in-bdd/ Context Questioning and Outcome Questioning], we expand our understanding of desired behaviour.<br />
<br />
However, the Given-When-Then format is constraining and often results in lengthy scenario descriptions that bear little relation to how customers would explain the scenario. Nigel believes we should move closer to how our customers describe the scenarios. <br />
<br />
For example, on one project the scenarios were described on a whiteboard using Venn diagrams. When documented in the Given-When-Then format, the BDD scenarios ran to around 10-15 lines and took much longer to comprehend than the Venn diagrams. In some cases, the BAs created additional documents to describe the scenarios - an anti-pattern when creating a Living Documentation system that acts as a single source of truth.<br />
<br />
(Another example was tenpin bowling scores, where a visualisation of a conventional scoresheet is much quicker to read than a Given-When-Then description.)<br />
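To illustrate the point (this is not code from the session), even a rough textual scoresheet is denser than the equivalent Given-When-Then steps. A minimal Python sketch, with a hypothetical `render_frames` helper and tenth-frame bonus rolls ignored for brevity:

```python
def render_frames(rolls):
    """Render a list of rolls (pins knocked down) as scoresheet symbols:
    'X' for a strike, '/' for a spare, '-' for a miss.
    Simplification: tenth-frame bonus rolls are not handled."""
    frames, i = [], 0
    while i < len(rolls) and len(frames) < 10:
        if rolls[i] == 10:              # strike: a single roll ends the frame
            frames.append("X")
            i += 1
        else:
            if i + 1 >= len(rolls):     # incomplete final frame
                break
            first, second = rolls[i], rolls[i + 1]
            first_sym = str(first) if first else "-"
            second_sym = "/" if first + second == 10 else (str(second) if second else "-")
            frames.append(first_sym + second_sym)
            i += 2
    return " | ".join(frames)

print(render_frames([10, 7, 3, 9, 0, 10, 0, 8]))  # → X | 7/ | 9- | X | -8
```

A reader can scan `X | 7/ | 9-` at a glance, where the Given-When-Then version would spell out each roll as a separate step.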
<br />
Nigel prototyped a couple of solutions using [http://concordion.org/ Concordion]. The first embedded a copy of the Venn diagram in the specification, alongside a short table containing the values. The second used an [http://en.wikipedia.org/wiki/Scalable_Vector_Graphics SVG] representation of the Venn diagram to set up the context. By applying Concordion instrumentation to the SVG tags, the fixture code was called with values taken directly from the Venn diagram. In this prototype, the outcome was asserted against textual values, but there is no reason the outcome couldn't be checked against a visualisation as well.<br />
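Concordion instrumentation itself lives in HTML/SVG attributes backed by Java fixtures; as a rough sketch of the underlying idea (the diagram being the single source of the test values), here is a hypothetical Python analogue that pulls inputs from invented `data-value` attributes on an SVG:

```python
import xml.etree.ElementTree as ET

# A Venn diagram whose regions carry their values as (hypothetical)
# data-value attributes; ids and values here are illustrative only.
SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <circle id="setA" data-value="12"/>
  <circle id="setB" data-value="8"/>
  <ellipse id="intersection" data-value="3"/>
</svg>"""

def values_from_svg(svg_text):
    """Extract test inputs straight out of the diagram, so the picture
    itself is the specification."""
    root = ET.fromstring(svg_text)
    return {el.get("id"): int(el.get("data-value"))
            for el in root.iter() if el.get("data-value")}

vals = values_from_svg(SVG)
# Outcome check driven by the diagram: |A ∪ B| = |A| + |B| - |A ∩ B|
union = vals["setA"] + vals["setB"] - vals["intersection"]
print(union)  # → 17
```

The point is that the values live in one artefact, the diagram, rather than being duplicated into prose steps that can drift out of date.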
<br />
Related to this is @natpryce's [http://www.natpryce.com/articles/000801.html discussion] of using [http://www.approvaltests.com/ Approval Testing] with SBE, where he notes that "''The final output for approval does not have to be text. For a numerical function, a test can render a graphical visualisation so one can more easily see calculation errors, such as undesirable discontinuities, than when results are displayed in tabular form.''"<br />
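A minimal sketch of that approval loop, assuming a plain-text artefact (the real ApprovalTests libraries add reporters, diff tooling and image support on top of this core idea):

```python
import tempfile
from pathlib import Path

def verify(received, approved_file):
    """Approve-or-diff: pass only if the output matches the approved
    artefact; otherwise write a .received file for a human to inspect
    and, if correct, promote to approved."""
    if approved_file.exists() and approved_file.read_text() == received:
        return True
    approved_file.with_suffix(".received").write_text(received)
    return False

with tempfile.TemporaryDirectory() as d:
    approved = Path(d) / "scores.approved.txt"
    assert not verify("X | 7/ | 9-", approved)   # first run: nothing approved yet
    approved.write_text("X | 7/ | 9-")           # a human approves the rendering
    assert verify("X | 7/ | 9-", approved)       # subsequent runs pass
    print("approved")
```

The "received" artefact can just as easily be a rendered image or SVG, which is exactly Nat's point about visual outputs.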
<br />
The comments on Nat's article discuss using [http://blog.jessitron.com/2013/04/property-based-testing-what-is-it.html property-based testing] tools (eg. [http://www.cse.chalmers.se/~rjmh/QuickCheck/ QuickCheck], [http://www.scalacheck.org/ ScalaCheck]) as generators of the input values for the approval testing. With BDD/SBE, the [http://www.assurity.co.nz/community/our-thoughts/acceptance-criteria-part-1-seeing-the-wood-and-some-trees/ Acceptance Criteria] would be used to define the properties. (The Specs2 library [http://etorreborre.github.io/specs2/guide/org.specs2.guide.Matchers.html#ScalaCheck+properties supports ScalaCheck properties].)<br />
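As a dependency-free sketch of the idea, an acceptance criterion such as the inclusion-exclusion rule from the Venn-diagram example can be stated as a property and checked over generated inputs, QuickCheck-style (a real suite would more likely use ScalaCheck, or Hypothesis in Python, rather than this hand-rolled generator):

```python
import random

def union_size(a, b):
    """System under test (a stand-in for real domain logic)."""
    return len(a | b)

# Acceptance criterion expressed as a property over generated inputs:
# |A ∪ B| = |A| + |B| - |A ∩ B| for all sets A, B.
random.seed(0)  # deterministic for reproducibility
for _ in range(200):
    a = {random.randrange(20) for _ in range(random.randrange(10))}
    b = {random.randrange(20) for _ in range(random.randrange(10))}
    assert union_size(a, b) == len(a) + len(b) - len(a & b)
print("property held for 200 generated cases")
```

The concrete whiteboard examples then become regression cases, while the generator probes the space the acceptance criterion is supposed to cover.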
<br />
<br />
The attendees also discussed using mind-maps to provide a high-level overview of the test scenarios. One attendee described an in-house tool that visualised the scenario results on a mind-map. Tests could be triggered on any sub-branch of the mind-map. <br />
<br />
Related to this is [http://twitter.com/katrina_tester @katrina_tester]'s work on [http://katrinatester.blogspot.co.nz/2013/11/mind-maps-and-automation.html using mind maps to visualise what testing has occurred], not just the results of automation.<br />
<br />
Another attendee described documenting both the automated and manual scenarios using [http://fitnesse.org/ FitNesse]. The manual scenarios are differentiated so that they can be picked out for manual testing.<br />
<br />
<br />
Another topic discussed was how to get BAs to write scenarios in a format that can be automated. This is often difficult when starting to use BDD/SBE, but once a library of steps is developed there should be opportunity for re-use, or creating new steps using a similar format to existing ones. [http://cucumber.pro/ Cucumber Pro] is attempting to make it easy for the team to collaborate and re-use steps. <br />
<br />
Some teams have their BAs commit directly to version control. One attendee's team have their BAs describe scenarios in JIRA using [http://marketplace.atlassian.com/plugins/com.hindsighttesting.behave.jira Behave for JIRA]. Another team are using Confluence. The danger of using a wiki such as this is that the scenarios are hard to version, branch and merge. The preference is to keep the scenarios in source control alongside the application code.</div>Nigel.charmanhttps://citconf.com/wiki/index.php?title=CITCONANZ2014Sessions&diff=15623CITCONANZ2014Sessions2014-02-22T19:40:41Z<p>Nigel.charman: </p>
<hr />
<div>CITCON ANZ 2014 Auckland Sessions<br />
<br />
Back to the [[Main Page]]<br />
<br />
== 10:00 Topics ==<br />
<br />
# [[Virtualisation Services]]<br />
# [[CI & TDD for legacy Systems]]<br />
# [[CI in thirty minutes]]<br />
# [[?]]<br />
# [[?]]<br />
<br />
== 11:15 Topics ==<br />
<br />
# [[Vagrant/Packer for Continuous Delivery of Application Infrastructure]]<br />
# [[?]]<br />
# [[?]]<br />
# [[?]]<br />
# [[Tips and tricks for CI and CD]]<br />
<br />
== Lunch Topics ==<br />
<br />
# [[?]]<br />
<br />
== 2:00 Topics ==<br />
<br />
# [[How do you change the team culture from waterfall to shepherding change to production + Communication model]]<br />
# [[?]]<br />
# [[?]]<br />
# [[Test Execution Time and Running Web-UI Tests in Parallel]]<br />
# [[?]]<br />
<br />
== 3:15 Topics ==<br />
<br />
# [[?]]<br />
# [[?]]<br />
# [[?]]<br />
# [[?]]<br />
# [[?]]<br />
<br />
== 4:30 Topics ==<br />
<br />
# [[?]]<br />
# [[?]]<br />
# [[?]]<br />
# [[Beyond Given-When-Then]]<br />
# [[?]]<br />
<br />
<br />
== Table View ==<br />
<br />
{| class="wikitable"<br />
|-<br />
! Room name<br />
! 10:00<br />
! 11:15<br />
! 2:00<br />
! 3:15<br />
! 4:30<br />
|-<br />
| Cube<br />
| [[?]]<br />
| [[?]]<br />
| [[?]]<br />
| [[?]]<br />
| [[?]]<br />
|-<br />
| Cube Bar<br />
| [[?]]<br />
| [[?]]<br />
| [[?]]<br />
| [[?]]<br />
| [[?]]<br />
|-<br />
| Cube Hall<br />
| [[CI in thirty minutes]]<br />
| [[?]]<br />
| [[?]]<br />
| [[?]]<br />
| [[?]]<br />
|-<br />
| Oku Wairangi<br />
| [[?]]<br />
| [[?]]<br />
| [[Test Execution Time and Running Web-UI Tests in Parallel]]<br />
| [[?]]<br />
| [[Beyond Given-When-Then]]<br />
|-<br />
| Yellow Circle<br />
| [[?]]<br />
| [[Tips and tricks for CI and CD]]<br />
| [[?]]<br />
| [[?]]<br />
| [[?]]<br />
|}</div>Nigel.charmanhttps://citconf.com/wiki/index.php?title=NoMeansNo&diff=15068NoMeansNo2013-02-09T01:15:24Z<p>Nigel.charman: </p>
<hr />
<div>No means No - how to keep testing failures meaningful in CI. Slow tests. Flaky tests. Tests that change.<br />
<br />
Katrina Edgar<br />
<br />
Devs writing unit tests<br />
Testers writing integration tests<br />
Devs multiple check-ins per day.<br />
Difficult to get integration tests meaningful to testers<br />
2-hour test suite:<br />
Bad code, copy and paste<br />
Moved setup steps out of Selenium<br />
<br />
Got it down to half an hour<br />
Fragile – needed to change sleeps to waits<br />
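The sleeps-to-waits change boils down to polling a condition with a deadline instead of pausing for a fixed time. Selenium ships this as WebDriverWait; a generic stdlib sketch of the same pattern (the commented `driver` usage is hypothetical):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.2):
    """Poll `condition` until it returns truthy, instead of sleeping a
    fixed amount: the test proceeds as soon as the condition holds, and
    fails with a clear error instead of passing or failing by luck."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Instead of e.g. time.sleep(5) before asserting on a page element:
# wait_until(lambda: driver.find_elements(By.ID, "result"), timeout=5)
```

A fixed sleep is both too long (on fast runs) and too short (on slow ones); polling removes both failure modes.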
<br />
Because the tester had ownership, tests would break and the testers faced a constant stream of red builds. Devs stopped paying attention to red builds.<br />
<br />
Integration tests were only helpful to the tester, and became subject to the same interpretation as in a manual environment.<br />
<br />
Loop of death - different reasons for tests to go red each time<br />
Green light once a week, celebrated with coffee, acknowledged as a problem (“squirrel dance” at TIM)<br />
<br />
<br />
Andrew – same problem but worse, not part of deployment pipeline <br />
5 devs, 2 testers – needs more testers than usual <br />
<br />
Martin – integration tests owned by devs, testing team test from external interfaces, large amount of manual testing, embedded radio systems, automated environments with hand-held units, tests call quality etc. <br />
Same problem with Jenkins turning red and slow feedback time<br />
Looking to run smoke test on each build, then less frequent or nightly build<br />
Painful to install stuff onto device<br />
Dev checkin – 30 minutes for CI tests to complete, some tests only nightly<br />
<br />
Julian – canary tests for risky areas,<br />
<br />
How to tackle continual red builds?<br />
Potential for doing a “hearts and minds” <br />
Katrina tried Gold star charts, which was a good motivation for devs<br />
Also got devs writing the tests<br />
<br />
Ward – failures are personal, need to make it fun, if you're not writing bugs, you're not writing code<br />
<br />
HTML tests are always going to be flaky – “because browsers suck”; write tests at the service layer, subcutaneous testing<br />
Devs already had responsibility for the quick tests<br />
Would you throw away GUI tests? Tend to be repetitive, need to refactor down<br />
<br />
False negatives –<br />
Claim plugin – assigned “cake points”; if you didn't claim, you had to bring in cake. Plus cafe bonuses<br />
<br />
Jenkins game plugin – useful to start people getting interested in it, but could lead to bad behaviour (eg. Checking in meaningless tests to get points)<br />
Stop the line on broken builds<br />
Make a developer responsible for checking the build and doing triage of failures – can feel bad if it always falls to the same person<br />
Which was better – picking on one person, or stopping the whole team?<br />
<br />
Reverting check-ins<br />
Validated merge plugin<br />
Git plugin – merge to branch on successful build<br />
Gerrit <br />
<br />
Source code management<br />
10 teams checking in on branches then merging to trunk. Teams have to wait when trunk is broken.<br />
<br />
Visibility of breakage<br />
build radiator<br />
USB tower of LEDs that showed breakages<br />
<br />
Build radiator also showed message of the day, jokes etc to act as central source of information<br />
<br />
Have to slow down before you speed up. <br />
Not doing CI if you're not stopping when it breaks<br />
<br />
Look at definition of done criteria – can't claim points until its green<br />
<br />
CD is powerful – can't deploy until working<br />
<br />
ATDD – check-in of incomplete features; use a Pending/Expected-to-fail flag on acceptance tests while developing the feature and checking in successful unit tests<br />
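The pending/expected-to-fail idea has direct stdlib support in some languages; for example, Python's `unittest.expectedFailure` keeps the build green while a checked-in acceptance test for an unfinished feature still fails (the `discount` function here is a hypothetical stand-in):

```python
import unittest

def discount(price):
    return price          # feature not implemented yet

class NewFeatureAcceptance(unittest.TestCase):
    @unittest.expectedFailure
    def test_discount_not_finished_yet(self):
        # Checked in alongside the in-progress feature: the build stays
        # green while this fails, and an unexpected pass signals that
        # the marker can be removed.
        self.assertEqual(discount(100), 90)   # hypothetical acceptance criterion

suite = unittest.defaultTestLoader.loadTestsFromTestCase(NewFeatureAcceptance)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

When the feature is finished, the test passes "unexpectedly", which is the reminder to remove the flag and promote it to a normal acceptance test.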
<br />
Pushback from devs on creating and running integration tests. Breaks “flow”.<br />
<br />
Acceptance tests shouldn't fail if sufficient testing at a lower level.<br />
<br />
Test on own machine – can pass, then still fail on build server due to environmental issues<br />
<br />
By the time of failure, multiple commits have been picked up, so it is hard to ascertain blame. Potential changes – slowing down commits, concurrent builds, spinning up multiple environments in the cloud to run tests that require an environment<br />
<br />
Devs unwilling to run tests – they need extra environmental setup<br />
<br />
Jan - 30 minutes integration test time. Devs run tests over lunch or at end of day. Commit every 6-8 working hours.<br />
Daphne – use Git for tiny commits.<br />
<br />
Delete integration tests that never fail.<br />
<br />
Devs saw the tests as someone else's code, owned by the tester. Having the tester pair with a developer may have resulted in shared ownership. Co-location helps.<br />
<br />
Make it fun – devs will stick around longer and do extra stuff. <br />
<br />
Team ownership of broken builds.<br />
<br />
Silos can cause friction – eg. Different reporting lines for devs and testers, turf war. <br />
Needs buy-in from management, and focus on better working relationships.<br />
<br />
Silo books -<br />
# Silos, Politics and Turf Wars: A leadership fable about destroying the barriers that turn colleagues into competitors, Lencioni<br />
# Bust the silos, Hunter Hastings and Jeff Saperstein<br />
# Silo Busting, conference workshop by Tom Perry and Lourdes Vidueira<br />
# The Robbers Cave Experiment, Sherif<br />
<br />
Coding dojos – get the team working together on a shared goal that's not production code; can use a CI approach and ensure CI principles are followed<br />
<br />
In a large org, having a meeting 2-3 times a week across the scrum of scrums helps build understanding of what is being committed, and results in fewer broken builds</div>Nigel.charmanhttps://citconf.com/wiki/index.php?title=NoMeansNo&diff=15067NoMeansNo2013-02-09T01:11:23Z<p>Nigel.charman: Created page with "No means No - how to keep testing failures meaningful in CI. Slow tests. Flaky tests. Tests that change. Katrain Edgar Devs writing unit tests Testers writing integration te..."</p>
<hr />
<div>No means No - how to keep testing failures meaningful in CI. Slow tests. Flaky tests. Tests that change.<br />
<br />
Katrina Edgar<br />
<br />
Devs writing unit tests<br />
Testers writing integration tests<br />
Devs multiple check-ins per day.<br />
Difficult to get integration tests meaningful to testers<br />
2 hour test suite:<br />
Bad code, copy and paste<br />
Moved setup steps out of Selenium<br />
<br />
Got it down to half an hour<br />
Fragile – needed to change sleeps to waits<br />
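<br />
Selenium provides explicit waits (WebDriverWait) for exactly this. As a language-agnostic sketch of the idea – illustrative code, not Selenium's API – a polling helper returns as soon as the condition holds, and fails loudly on timeout instead of passing or failing by luck:<br />

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.1):
    """Poll `condition` until it returns a truthy value or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %s seconds" % timeout)
```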
<br />
Because the tester had ownership, tests would break and testers had to deal with a constant stream of red builds. Devs stopped paying attention to red builds.<br />
<br />
Integration tests were only helpful to the tester, and took on the same interpretation as in a manual environment.<br />
<br />
Loop of death - different reasons for tests to go red each time<br />
Green light once a week, celebrated with coffee, acknowledged as a problem (“squirrel dance” at TIM)<br />
<br />
<br />
Andrew – same problem but worse, not part of deployment pipeline <br />
5 devs, 2 testers – needs more testers than usual <br />
<br />
Martin – integration tests owned by devs; testing team tests from the external interfaces; large amount of manual testing; embedded radio systems; automated environments with hand-held units; tests cover call quality etc.<br />
Same problem with Jenkins turning red and slow feedback time<br />
Looking to run smoke test on each build, then less frequent or nightly build<br />
Painful to install stuff onto device<br />
Dev checkin – 30 minutes for CI tests to complete, some tests only nightly<br />
<br />
Julian – canary tests for risky areas<br />
<br />
How to tackle continual red builds?<br />
Potential for doing a “hearts and minds” campaign<br />
Katrina tried gold-star charts, which were good motivation for devs<br />
Also got devs writing the tests<br />
<br />
Ward – failures are personal, need to make it fun, if you're not writing bugs, you're not writing code<br />
<br />
HTML tests are always going to be flaky – “because browsers suck”; write tests at the service layer (subcutaneous testing)<br />
Devs already had responsibility for the quick tests<br />
Would you throw away GUI tests? Tend to be repetitive, need to refactor down<br />
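<br />
As a sketch of the subcutaneous style – the test drives a hypothetical service-layer function directly, exercising the same business rule a browser test would, with no browser involved:<br />

```python
# `create_order` is an illustrative stand-in for a real service-layer
# entry point that the GUI would normally reach through the browser.
def create_order(inventory, item, quantity):
    if inventory.get(item, 0) < quantity:
        raise ValueError("insufficient stock for " + item)
    inventory[item] -= quantity
    return {"item": item, "quantity": quantity}

def test_order_reduces_stock():
    # Same business rule a Selenium test would cover, minus the flakiness.
    inventory = {"widget": 5}
    order = create_order(inventory, "widget", 2)
    assert order == {"item": "widget", "quantity": 2}
    assert inventory["widget"] == 3
```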
<br />
False negatives - <br />
Claim plugin – assigned “cake points”; if you didn't claim a failure you had to bring in cake. Plus cafe bonuses<br />
<br />
Jenkins game plugin – useful to start getting people interested, but could lead to bad behaviour (e.g. checking in meaningless tests to get points)<br />
Stop the line on broken builds<br />
Make a developer responsible for checking the build and triaging failures – but it can feel crap to always have to go back to the same person<br />
Which was better – picking on one person, or stopping the whole team?<br />
<br />
Reverting check-ins<br />
Validated merge plugin<br />
Git plugin – merge to branch on successful build<br />
Gerrit <br />
<br />
Source code management<br />
10 teams checking in on branches then merging to trunk. Teams have to wait when trunk is broken.<br />
<br />
Visibility of breakage<br />
build radiator<br />
USB tower of LEDs that showed breakages<br />
<br />
Build radiator also showed message of the day, jokes etc to act as central source of information<br />
<br />
Have to slow down before you speed up. <br />
You're not doing CI if you're not stopping when the build breaks<br />
<br />
Look at definition-of-done criteria – can't claim points until it's green<br />
<br />
CD is powerful – can't deploy until working<br />
<br />
ATDD – check-in of incomplete features: use a Pending/Expected-to-fail flag on acceptance tests while developing the feature, and check in successful unit tests<br />
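<br />
One way such a flag can look, sketched with Python's unittest (JUnit and NUnit offer similar annotations; `apply_bulk_discount` is a hypothetical unfinished feature, not from the session):<br />

```python
import unittest

def apply_bulk_discount(total, items):
    # Feature not implemented yet – old behaviour returned unchanged.
    return total

class PendingFeatureTest(unittest.TestCase):
    # Committed ahead of the implementation; the flag keeps the build
    # green until the feature catches up, then the flag is removed.
    @unittest.expectedFailure
    def test_bulk_orders_get_a_discount(self):
        self.assertEqual(apply_bulk_discount(total=100, items=20), 90)
```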
<br />
Pushback from devs on creating and running integration tests. Breaks “flow”.<br />
<br />
Acceptance tests shouldn't fail if there is sufficient testing at a lower level.<br />
<br />
Tests on your own machine can pass, then still fail on the build server due to environmental issues<br />
<br />
By the time of failure, multiple commits have been picked up, so it is hard to ascertain blame. Potential changes – slowing down commits, concurrent builds, spinning up multiple environments in the cloud to run tests that require an environment<br />
<br />
Devs unwilling to run tests that need extra environmental setup<br />
<br />
Jan - 30 minutes integration test time. Devs run tests over lunch or at end of day. Commit every 6-8 working hours.<br />
Daphne – use Git for tiny commits.<br />
<br />
Delete integration tests that never fail.<br />
<br />
Devs saw the tests as someone else's code, owned by the tester. Having the tester pair with a developer might have resulted in shared ownership. Co-location helps.<br />
<br />
Make it fun – devs will stick around longer and do extra stuff. <br />
<br />
Team ownership of broken builds.<br />
<br />
Silos can cause friction – e.g. different reporting lines for devs and testers, turf wars.<br />
Needs buy-in from management, and a focus on better working relationships.<br />
<br />
Coding dojos – get the team working together on a shared goal that's not production code; can use a CI approach and ensure CI principles are followed<br />
<br />
In a large org, having a meeting 2-3 times a week across the scrum of scrums helps build understanding of what is being committed, and results in fewer broken builds</div>Nigel.charmanhttps://citconf.com/wiki/index.php?title=CITCONANZ2013Sessions&diff=15066CITCONANZ2013Sessions2013-02-09T01:07:27Z<p>Nigel.charman: /* 11:15 Topics */</p>
<hr />
<div>CITCON ANZ 2013 Sydney Sessions<br />
<br />
Back to the [[Main Page]]<br />
<br />
== 10:00 Topics ==<br />
<br />
[[MultipleCdDiscussion]]<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
<br />
== 11:15 Topics ==<br />
<br />
# placeholder<br />
# placeholder<br />
# [[NoMeansNo]]<br />
# placeholder<br />
# placeholder<br />
<br />
== 2:00 Topics ==<br />
<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
<br />
== 3:15 Topics ==<br />
<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
<br />
== 4:30 Topics ==<br />
<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
# placeholder<br />
<br />
== Table View ==<br />
<br />
{| class="wikitable"<br />
|-<br />
! #<br />
! 10:00<br />
! 11:15<br />
! 2:00<br />
! 3:15<br />
! 4:30<br />
|-<br />
| Room A (large)<br />
| placeholder for 10 am <br />
| placeholder for 11:15 am <br />
| placeholder for 2 pm <br />
| placeholder for 3:15 pm <br />
| placeholder for 4:30 pm <br />
|-<br />
| Room B (medium)<br />
| placeholder for 10 am <br />
| placeholder for 11:15 am <br />
| placeholder for 2 pm <br />
| placeholder for 3:15 pm <br />
| placeholder for 4:30 pm <br />
|-<br />
| Room C (medium)<br />
| placeholder for 10 am <br />
| placeholder for 11:15 am <br />
| placeholder for 2 pm <br />
| placeholder for 3:15 pm <br />
| placeholder for 4:30 pm <br />
|-<br />
| Room D (medium)<br />
| placeholder for 10 am <br />
| placeholder for 11:15 am <br />
| placeholder for 2 pm <br />
| placeholder for 3:15 pm <br />
| placeholder for 4:30 pm <br />
|-<br />
| Room E (small)<br />
| placeholder for 10 am <br />
| placeholder for 11:15 am <br />
| placeholder for 2 pm <br />
| placeholder for 3:15 pm <br />
| placeholder for 4:30 pm <br />
|}</div>Nigel.charmanhttps://citconf.com/wiki/index.php?title=Nigel_Charman&diff=15026Nigel Charman2013-01-28T09:08:07Z<p>Nigel.charman: </p>
<hr />
<div>Delivery Practices coach with [http://www.assurity.co.nz Assurity Consulting] in Wellington, NZ. I'm a contributor to the [http://www.concordion.org Concordion] project and co-leader of the [http://jug.wellington.net.nz Wellington Java User Group], along with [[John_Hurst|John Hurst]]. <br />
<br />
My interests for CITCON include pretty much everything relating to Continuous Delivery and Testing.<br />
<br />
LinkedIn: http://www.linkedin.com/in/nigelcharman<br />
<br />
Twitter: @nigel_charman</div>Nigel.charmanhttps://citconf.com/wiki/index.php?title=Nigel_Charman&diff=15025Nigel Charman2013-01-28T09:05:49Z<p>Nigel.charman: </p>
<hr />
<div>Delivery Practices coach with [http://www.assurity.co.nz Assurity Consulting] in Wellington, NZ. I'm a contributor to the [http://www.concordion.org Concordion] project and co-leader of the [http://jug.wellington.net.nz Wellington Java User Group], along with [[John_Hurst|John Hurst]]. <br />
<br />
My interests for CITCON include pretty much everything relating to Continuous Delivery and Testing.<br />
<br />
LinkedIn: http://www.linkedin.com/in/nigelcharman<br />
Twitter: @nigel_charman</div>Nigel.charmanhttps://citconf.com/wiki/index.php?title=Continuous_deployment&diff=7812Continuous deployment2010-06-26T20:04:39Z<p>Nigel.charman: </p>
<hr />
<div>what environment? Dev, QA, Perf, Staging, Production<br />
What frequency? Per release, iteration, weekly, nightly, every commit.<br />
<br />
Nigel talked about continuous deployment for his start-up. Inspired by Eric Ries' IMVU.com<br />
<br />
If tests pass, then it's deployed<br />
<br />
Dev on main trunk, no merge conflicts.<br />
<br />
Can have features turned off until ready to go.<br />
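<br />
A minimal feature-toggle sketch of that idea – flag names and the 5% promo are illustrative, not from the session. Unfinished code ships dark and is switched on by configuration rather than by a deploy:<br />

```python
# All flags off by default; flipping one is a config change, not a deploy.
FLAGS = {"new_checkout": False}

def is_enabled(flag):
    return FLAGS.get(flag, False)

def checkout_total(cart_total):
    if is_enabled("new_checkout"):
        # New path, dark until the flag is flipped.
        return round(cart_total * 0.95, 2)
    return cart_total  # old, proven path
```

Flipping the flag back off is then an instant rollback, independent of any deploy.<br />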
<br />
The idea is that bugs only occur once.<br />
<br />
This is a startup, so getting code out fast is critical for market validation. If things break, write new test, fix code, and deploy again.<br />
<br />
IMVU deploying 60 times a day! 60 devs.<br />
<br />
Forward and backward patches for the DB; test that code works with both states.<br />
<br />
Written in ''Catalyst'' (a Perl MVC framework, akin to Ruby on Rails).<br />
<br />
Model tests (unit tests) – so models are testable<br />
Integration tests (Selenium)<br />
<br />
Code modularity<br />
<br />
IMVU ensures released (but turned off) features pass all tests with the existing code. So when switched on, no surprises (yay for feature flags!).<br />
<br />
<br />
E.g. A/B testing, where features are turned on for only some users! Twitter and Google are doing this.<br />
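<br />
A common way to implement this is deterministic hash bucketing – sketched below as an illustration, not any particular company's implementation. Hashing the user id gives each user a stable bucket, so they see the same variant on every visit without storing any per-user state:<br />

```python
import hashlib

def in_experiment(user_id, experiment, percent):
    # Hash (experiment, user) to a bucket in [0, 100); stable per user.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent
```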
<br />
Flickr deploys every 15 mins.<br />
<br />
Tests test things that have ''actually'' broken.<br />
<br />
Sounds risky, but when working well it is a risk-reduction model.<br />
When scared, we slow down and release less; the frequency of risk goes down, but the magnitude of something going wrong increases.<br />
<br />
DB rollback scripts should be tested. It could be that the forward migration does not drop old tables/data, and even that new code writes transactions to both old and new tables.<br />
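<br />
That pattern (often called expand/contract, with dual writes) might be sketched like this using SQLite; the table and column names are illustrative:<br />

```python
import sqlite3

def migrate_forward(conn):
    # Expand: add the new column but keep the old one, so the previous
    # release keeps working and rollback needs no schema change.
    conn.execute("ALTER TABLE users ADD COLUMN full_name TEXT")

def save_user(conn, user_id, first, last):
    # Dual-write: old column stays correct for the old code path,
    # new column is populated for the new one.
    conn.execute(
        "UPDATE users SET name = ?, full_name = ? WHERE id = ?",
        (first, first + " " + last, user_id),
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'Ada')")
migrate_forward(conn)
save_user(conn, 1, "Ada", "Lovelace")
```

The contract step – dropping the old column – happens only in a later release, once nothing reads it.<br />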
<br />
<br />
Risk: many companies already have some devs/DBAs with root production access to fix bugs on production, because they have no good process for releasing fixes quickly.<br />
<br />
e.g. in a bank, directors could sign off on the process of continuous deployment, not each release as would likely be the case now.<br />
<br />
Continuous deployment can be OK to a UAT environment... get comfortable with it, then go to production. If you can't go that far, then go for UAT at least.<br />
<br />
Banks may hold off as they can't currently automate all the testing.<br />
<br />
Delaying a release does not increase the chance of finding a bug or a poorly written test... it actually delays the inevitable.<br />
<br />
Build up a set of tests/mocks to test the downstream and upstream systems you integrate with.<br />
<br />
IMVU clusters tests that take too long... break them out and architect tests so you can cluster your long tests, e.g. for batch tasks and integration.<br />
<br />
You can still deploy twice a day and still use a QA team... zero defects is not a requirement; it is balanced against an agreed level of risk.<br />
<br />
Continuous/frequent deployment is less about automated processes, and more about increasing the rate of feedback.<br />
<br />
Nigel's start-up is http://www.getyourgameon.co.nz<br />
<br />
-- I should point out that CD is not my idea, [http://startuplessonslearned.com Eric Ries] is widely credited with formalising it as part of a system called Customer Development which has a lot of mind share in the startup world right now. It's definitely worth checking out his blog for the posts on CD, from there you'll find presentations and a deeper analysis of how it works at IMVU and why it's a great idea - Nigel</div>Nigel.charman