<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=WimHeemskerk</id>
	<title>CitconWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=WimHeemskerk"/>
	<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Special:Contributions/WimHeemskerk"/>
	<updated>2026-04-24T21:32:13Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.11</generator>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Repeatable_Failures&amp;diff=16031</id>
		<title>Repeatable Failures</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Repeatable_Failures&amp;diff=16031"/>
		<updated>2015-09-18T14:40:19Z</updated>

		<summary type="html">&lt;p&gt;WimHeemskerk: /* Connections / Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Repeatable failures&amp;#039;&amp;#039;&amp;#039; (over repeatable success). Write-up of a lunch-time session at CITCON Europe 2015, Helsinki (https://pbs.twimg.com/media/COsgVvJW8AA1VbK.jpg).&lt;br /&gt;
[[#TL;DR]] at bottom.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The premise==&lt;br /&gt;
When a test fails, we want to be able to repeat it, exactly. And so we run our automated tests on each commit. But can&amp;#039;t we do better than running ever more checks in CI? And what about all those hours of the day that the CI machines are idle? Couldn&amp;#039;t they be used to explore something?&lt;br /&gt;
&lt;br /&gt;
An emerging theme of this CITCON seems to be tossing away large numbers of automated tests: ones that have apparently become disconnected from delivering value to the team. We are ever more aware of the costs of maintenance and of lengthening the feedback loop. So when do automated checks deliver value to us? When they specify and validate functional or technical aspects of the system: key examples of how the system is meant to work, implemented as checks that it indeed does.&lt;br /&gt;
&lt;br /&gt;
==Code==&lt;br /&gt;
Therefore the first thing we focused on was challenging the automated tests by mutating the code, to weed the crap tests out from the valuable ones and highlight coverage gaps. With mutation testing tools like PIT (for Java, http://pitest.org/ ), it should be possible to get a much better impression of what is truly covered by tests, providing valuable feedback for tests and code alike. (Ideally each mutation of the code will be &amp;#039;killed&amp;#039; by exactly one test specifying that specific behaviour.) Some pointed out this is also a teaching and design aid, and thus running a good number of mutation tests should probably become a regular thing.&lt;br /&gt;
&lt;br /&gt;
==Input / Data==&lt;br /&gt;
Next we wondered about the implications of testing with random (valid) values for input (or data). When, for example, the spec says we can &amp;#039;&amp;#039;add any two integers between 1 and 10&amp;#039;&amp;#039;, how could we test with just any two values in that range? Well, isn&amp;#039;t that simple? You add the two in your test and check that the answer you get is correct! Fortunately for us, Jeffrey ([https://twitter.com/Jtf @Jtf]) has a lot of experience in this area and quickly caught the flaw in this line of thinking.&lt;br /&gt;
&lt;br /&gt;
Are we rebuilding the system in our tests? Seems unwieldy and prone to the same errors as the application code. Can we rely on an oracle, like we probably did for our key examples? If an accurate oracle is that easily available, why are we building the system? No, we&amp;#039;ll have to let go of asserting the specific values and work with &amp;#039;&amp;#039;&amp;#039;weak assertions&amp;#039;&amp;#039;&amp;#039; instead. What are the invariants that no answer should violate? In our example: the answer should always be an integer between 2 and 20.&lt;br /&gt;
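Not part of the original session notes: a minimal sketch of the weak-assertion idea from the addition example above, using only the standard library rather than a property-based testing framework. The test never recomputes the expected sum (that would rebuild the system in the test); it only asserts the invariant the spec gives us.

```python
import random

def add(a, b):
    # the system under test: adds two integers
    return a + b

# property-based check with weak assertions: generate random valid
# inputs and assert only the invariants, never the specific answer
for _ in range(1000):
    a = random.randint(1, 10)
    b = random.randint(1, 10)
    result = add(a, b)
    # invariant from the spec: the answer is always an integer
    # between 2 and 20
    assert isinstance(result, int)
    assert result in range(2, 21)
print("all checks passed")
```

A dedicated framework adds shrinking of failing inputs and reproducible random seeds, which is exactly what makes a random failure repeatable.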
&lt;br /&gt;
Jessica Kerr has done a lot of work in this field, which has also ended up in JUnit as support for &amp;#039;&amp;#039;&amp;#039;property-based testing&amp;#039;&amp;#039;&amp;#039; ( http://www.infoq.com/presentations/property-based-testing ).&lt;br /&gt;
&lt;br /&gt;
As we&amp;#039;re talking about &amp;#039;&amp;#039;&amp;#039;invariants&amp;#039;&amp;#039;&amp;#039; now, you needn&amp;#039;t know the specific situation anymore, but you can - and probably should - monitor them constantly, in production as well. Unfortunately, what your production monitoring picks up may well be hard to repeat. This is why injecting the failures yourself and seeing how they play out is so practical.&lt;br /&gt;
&lt;br /&gt;
== Connections / Environment ==&lt;br /&gt;
Thinking of it as failure injection brought us to two other examples. The first is one Nat Pryce ([https://twitter.com/natpryce @natpryce]) gave [https://skillsmatter.com/skillscasts/6222-lessons-learned-breaking-the-tdd-rules at CukeUp! 2015]: he brutally vandalized JSON messages to make sure the software would never crash due to a poor connection, and also ran the CI environment against live data streams. The second is Netflix&amp;#039;s Chaos Monkey. Had you ever wondered how important it must be for the teams that it reports exactly what it disrupted and when, so they can quickly locate and fix issues with handling its disruptions? [We can only hope it indeed does. Does anyone know?]&lt;br /&gt;
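Not part of the original session notes: a minimal sketch of the message-vandalizing idea just described, assuming a consumer that parses JSON. The function names (vandalize, parse_message) are illustrative, not from Nat Pryce&amp;#039;s talk. The point is the contract: the consumer may reject a corrupted message, but it must never crash.

```python
import json
import random

def vandalize(payload):
    # corrupt one random character of the serialized message,
    # simulating damage from a poor connection
    i = random.randrange(len(payload))
    return payload[:i] + "#" + payload[i + 1:]

def parse_message(raw):
    # the consumer under test: it may reject a message by
    # returning None, but it must not raise
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

msg = json.dumps({"user": "alice", "amount": 42})
for _ in range(500):
    parse_message(vandalize(msg))  # must never raise
print("survived all vandalized messages")
```

To make a failure repeatable, log the random seed (or the exact corrupted payload) so the same vandalism can be replayed.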
&lt;br /&gt;
== TL;DR ==&lt;br /&gt;
This filled out our list of &amp;#039;&amp;#039;&amp;#039;failure injections&amp;#039;&amp;#039;&amp;#039;, things to (semi-randomly) manipulate in creative ways: program code, input, data, connections, environment. And our key trick for repeatable failures: inject the failures yourself. If you&amp;#039;ve manipulated code, use your automated regression test set. Otherwise use weak assertions to detect the effect of the injected failure on the system.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
I, session host Wim Heemskerk, picked up this premise from a presentation by Nat Pryce at CukeUp, where he gave the example mentioned above of applying it in one way. My purpose for this session was to explore the various options for it. Coming at it from a testing perspective, I focused on failure injection: throwing artificial challenges at the system (generally done before the production environment). Working from pure monitoring / telemetry was placed out of scope for this particular discussion.&lt;/div&gt;</summary>
		<author><name>WimHeemskerk</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Repeatable_Failures&amp;diff=16030</id>
		<title>Repeatable Failures</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Repeatable_Failures&amp;diff=16030"/>
		<updated>2015-09-18T13:15:17Z</updated>

		<summary type="html">&lt;p&gt;WimHeemskerk: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;Repeatable failures&amp;#039;&amp;#039;&amp;#039; (over repeatable success). Write-up of a lunch-time session at CITCON Europe 2015, Helsinki (https://pbs.twimg.com/media/COsgVvJW8AA1VbK.jpg). #TL;...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Repeatable failures&amp;#039;&amp;#039;&amp;#039; (over repeatable success). Write-up of a lunch-time session at CITCON Europe 2015, Helsinki (https://pbs.twimg.com/media/COsgVvJW8AA1VbK.jpg).&lt;br /&gt;
[[#TL;DR]] at bottom.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==The premise==&lt;br /&gt;
When a test fails, we want to be able to repeat it, exactly. And so we run our automated tests on each commit. But can&amp;#039;t we do better than running ever more checks in CI? And what about all those hours of the day that the CI machines are idle? Couldn&amp;#039;t they be used to explore something?&lt;br /&gt;
&lt;br /&gt;
An emerging theme of this CITCON seems to be tossing away large numbers of automated tests: ones that have apparently become disconnected from delivering value to the team. We are ever more aware of the costs of maintenance and of lengthening the feedback loop. So when do automated checks deliver value to us? When they specify and validate functional or technical aspects of the system: key examples of how the system is meant to work, implemented as checks that it indeed does.&lt;br /&gt;
&lt;br /&gt;
==Code==&lt;br /&gt;
Therefore the first thing we focused on was challenging the automated tests by mutating the code, to weed the crap tests out from the valuable ones and highlight coverage gaps. With mutation testing tools like PIT (for Java, http://pitest.org/ ), it should be possible to get a much better impression of what is truly covered by tests, providing valuable feedback for tests and code alike. (Ideally each mutation of the code will be &amp;#039;killed&amp;#039; by exactly one test specifying that specific behaviour.) Some pointed out this is also a teaching and design aid, and thus running a good number of mutation tests should probably become a regular thing.&lt;br /&gt;
&lt;br /&gt;
==Input / Data==&lt;br /&gt;
Next we wondered about the implications of testing with random (valid) values for input (or data). When, for example, the spec says we can &amp;#039;&amp;#039;add any two integers between 1 and 10&amp;#039;&amp;#039;, how could we test with just any two values in that range? Well, isn&amp;#039;t that simple? You add the two in your test and check that the answer you get is correct! Fortunately for us, Jeffrey ([https://twitter.com/Jtf @Jtf]) has a lot of experience in this area and quickly caught the flaw in this line of thinking.&lt;br /&gt;
&lt;br /&gt;
Are we rebuilding the system in our tests? Seems unwieldy and prone to the same errors as the application code. Can we rely on an oracle, like we probably did for our key examples? If an accurate oracle is that easily available, why are we building the system? No, we&amp;#039;ll have to let go of asserting the specific values and work with &amp;#039;&amp;#039;&amp;#039;weak assertions&amp;#039;&amp;#039;&amp;#039; instead. What are the invariants that no answer should violate? In our example: the answer should always be an integer between 2 and 20.&lt;br /&gt;
&lt;br /&gt;
Jessica Kerr has done a lot of work in this field, which has also ended up in JUnit as support for &amp;#039;&amp;#039;&amp;#039;property-based testing&amp;#039;&amp;#039;&amp;#039; ( http://www.infoq.com/presentations/property-based-testing ).&lt;br /&gt;
&lt;br /&gt;
As we&amp;#039;re talking about &amp;#039;&amp;#039;&amp;#039;invariants&amp;#039;&amp;#039;&amp;#039; now, you needn&amp;#039;t know the specific situation anymore, but you can - and probably should - monitor them constantly, in production as well. Unfortunately, what your production monitoring picks up may well be hard to repeat. This is why injecting the failures yourself and seeing how they play out is so practical.&lt;br /&gt;
&lt;br /&gt;
== Connections / Environment ==&lt;br /&gt;
Thinking of it as failure injection brought us to two other examples. The first is one Nat Pryce ([https://twitter.com/natpryce @natpryce]) gave [https://skillsmatter.com/skillscasts/6222-lessons-learned-breaking-the-tdd-rules at CukeUp! 2015]: he brutally vandalized XML messages to make sure the software would never crash due to a poor connection. The second is Netflix&amp;#039;s Chaos Monkey. Had you ever wondered how important it must be for the teams that it reports exactly what it disrupted and when, so they can quickly locate and fix issues with handling its disruptions? [We can only hope it indeed does. Does anyone know?]&lt;br /&gt;
&lt;br /&gt;
== TL;DR ==&lt;br /&gt;
This filled out our list of &amp;#039;&amp;#039;&amp;#039;failure injections&amp;#039;&amp;#039;&amp;#039;, things to (semi-randomly) manipulate in creative ways: program code, input, data, connections, environment. And our key trick for repeatable failures: inject the failures yourself. If you&amp;#039;ve manipulated code, use your automated regression test set. Otherwise use weak assertions to detect the effect of the injected failure on the system.&lt;br /&gt;
&lt;br /&gt;
==Notes==&lt;br /&gt;
I, session host Wim Heemskerk, picked up this premise from a presentation by Nat Pryce at CukeUp, where he gave the example mentioned above of applying it in one way. My purpose for this session was to explore the various options for it. Coming at it from a testing perspective, I focused on failure injection: throwing artificial challenges at the system (generally done before the production environment). Working from pure monitoring / telemetry was placed out of scope for this particular discussion.&lt;/div&gt;</summary>
		<author><name>WimHeemskerk</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=User_talk:WimHeemskerk&amp;diff=16029</id>
		<title>User talk:WimHeemskerk</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=User_talk:WimHeemskerk&amp;diff=16029"/>
		<updated>2015-09-18T12:33:17Z</updated>

		<summary type="html">&lt;p&gt;WimHeemskerk: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>WimHeemskerk</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONEurope2015Sessions&amp;diff=16028</id>
		<title>CITCONEurope2015Sessions</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONEurope2015Sessions&amp;diff=16028"/>
		<updated>2015-09-18T12:28:22Z</updated>

		<summary type="html">&lt;p&gt;WimHeemskerk: /* Lunch Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CITCON Europe 2015 Helsinki Sessions&lt;br /&gt;
&lt;br /&gt;
Back to the [[Main Page]]&lt;br /&gt;
&lt;br /&gt;
== 10:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
# [[Test Tools The Next Generation]]&lt;br /&gt;
# [[Advanced Unit Testing]]&lt;br /&gt;
# [[One Day To Live Elephant Carpaccio]]&lt;br /&gt;
# [[Alerts Everywhere]]&lt;br /&gt;
# [[GIT Branching]]&lt;br /&gt;
&lt;br /&gt;
== 11:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
# [[Dynamic CI Automatic Test Scope]]&lt;br /&gt;
# [[Performance Testing]]&lt;br /&gt;
# [[Why Should I Dockerize My App?]]&lt;br /&gt;
# [[Manual QA Without Tears]]&lt;br /&gt;
# [[Automation vs Security]]&lt;br /&gt;
&lt;br /&gt;
== Lunch Topics ==&lt;br /&gt;
&lt;br /&gt;
# [[Repeatable Failures]]&lt;br /&gt;
# [[...]]&lt;br /&gt;
&lt;br /&gt;
== 2:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
# [[Test Automation Pyramid Adoption]]&lt;br /&gt;
# [[Visualize Results]]&lt;br /&gt;
# [[Selenium]]&lt;br /&gt;
# [[Continuous Delivery Without Automation]]&lt;br /&gt;
# [[Path From Legacy Code To Unit Tests]]&lt;br /&gt;
&lt;br /&gt;
== 3:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
# [[TDD]]&lt;br /&gt;
# [[Robot Framework]]&lt;br /&gt;
# [[Mentoring - Apprenticeship]]&lt;br /&gt;
# [[Automated Testing Of CSS]]&lt;br /&gt;
# [[CI PowerPoint Karaoke]]&lt;br /&gt;
&lt;br /&gt;
== 4:30 Topics ==&lt;br /&gt;
&lt;br /&gt;
# [[Automating E2E Testing]]&lt;br /&gt;
# [[Mob Exploratory Testing]]&lt;br /&gt;
# [[Mobile Testing]]&lt;br /&gt;
# [[War Stories]]&lt;br /&gt;
# [[Meta-Pipeline Systems]]&lt;br /&gt;
&lt;br /&gt;
== Table View ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Room name&lt;br /&gt;
! 10:00&lt;br /&gt;
! 11:15&lt;br /&gt;
! 2:00&lt;br /&gt;
! 3:15&lt;br /&gt;
! 4:30&lt;br /&gt;
|-&lt;br /&gt;
| Auditorium &lt;br /&gt;
| [[Test Tools The Next Generation]]&lt;br /&gt;
| [[Dynamic CI Automatic Test Scope]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
|-&lt;br /&gt;
| 20&lt;br /&gt;
| [[Advanced Unit Testing]]&lt;br /&gt;
| [[Performance Testing]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
|-&lt;br /&gt;
| 16&lt;br /&gt;
| [[One Day To Live Elephant Carpaccio]]&lt;br /&gt;
| [[Why Should I Dockerize My App?]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
|-&lt;br /&gt;
| 18&lt;br /&gt;
| [[Alerts Everywhere]]&lt;br /&gt;
| [[Manual QA Without Tears]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
|-&lt;br /&gt;
| 14&lt;br /&gt;
| [[GIT Branching]]&lt;br /&gt;
| [[Automation vs Security]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
| [[...]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>WimHeemskerk</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=User:WimHeemskerk&amp;diff=15948</id>
		<title>User:WimHeemskerk</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=User:WimHeemskerk&amp;diff=15948"/>
		<updated>2015-09-10T20:28:11Z</updated>

		<summary type="html">&lt;p&gt;WimHeemskerk: Blanked the page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>WimHeemskerk</name></author>
	</entry>
</feed>