<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kentbye</id>
	<title>CitconWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kentbye"/>
	<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Special:Contributions/Kentbye"/>
	<updated>2026-04-24T23:13:49Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.11</generator>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Takeaways_from_Citcon_2012&amp;diff=14591</id>
		<title>Takeaways from Citcon 2012</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Takeaways_from_Citcon_2012&amp;diff=14591"/>
		<updated>2012-09-23T00:55:36Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;Takeaways from Citicon Portland 2012&amp;#039;&amp;#039;&amp;#039; * Break addiction to avoidance. Focus on speed. * Opportunity that&amp;#039;s here to improve things * Testing. We all have the same problems...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Takeaways from Citicon Portland 2012&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Break addiction to avoidance. Focus on speed.&lt;br /&gt;
* Opportunity that&amp;#039;s here to improve things&lt;br /&gt;
* Testing. We all have the same problems. Work towards.&lt;br /&gt;
* Experienced similar problems in testing gives him heart, and have more vision.&lt;br /&gt;
* Ideas from other processes and tools that didn&amp;#039;t know existed.&lt;br /&gt;
* Enjoyed open space format, and choose own topics&lt;br /&gt;
* My team doesn&amp;#039;t check in code often enough.&lt;br /&gt;
* Anti-pattern notes and the taxonomy&lt;br /&gt;
* At different stages and could empathasize&lt;br /&gt;
* Andy&amp;#039;s session on the Quagmire on change management.&lt;br /&gt;
* There&amp;#039;s still a lot of work to be done. Lots of common struggles.&lt;br /&gt;
* Get other to improve themselves. Need to improve your own self first.&lt;br /&gt;
* Continuous integration is a process and mindset of people than the technology that you use to do it. Lots of ways to do it. The continuous part is the hard part.&lt;br /&gt;
* CI who check in stuff, but don&amp;#039;t turn it on. Most favorite phrase was Voodoo charm was to say something to shut down thinking&lt;br /&gt;
* Expect more from team of developers, and trust that they can do their jobs&lt;br /&gt;
* Intrigued by idea of CI testing and sysadmin setups and testing the client that sets up the system and whether the system is working right. Lots of complexities that didn&amp;#039;t think about before&lt;br /&gt;
* Felt shockingly like same problems as 2 1/2 years ago. Problems are pretty universal and not just the software. Would help for whatever industry he&amp;#039;s in.&lt;br /&gt;
* Problems are hard, and he still doesn&amp;#039;t have an answer. Aha moment with Jez Humble. Stop thinking about optimizing between mean time to failure. But optimize to mean time form failure and that might be just as good.&lt;br /&gt;
* It&amp;#039;s still a people problem&lt;br /&gt;
* Enjoyed open space because lots of things to learn. It&amp;#039;s all about people, and getting richer conversations.&lt;br /&gt;
* Enjoyed brainstorming environment.&lt;br /&gt;
* Meeting Adam in person was a highlight. Anti-patterns and smells was also interesting. There&amp;#039;s a diversity of ingenuity to solve these problems. Would like to produce catalog of ingenious solutions.&lt;br /&gt;
* Watching people pair with James Short, and learned a lot and it was very cool.&lt;br /&gt;
* Software development and devops need to collaborate&lt;br /&gt;
* 1-on-1 session on what the real problem is, and what they need to solve. Perhaps start a CI user group.&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONNA2012Sessions&amp;diff=14590</id>
		<title>CITCONNA2012Sessions</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONNA2012Sessions&amp;diff=14590"/>
		<updated>2012-09-23T00:54:55Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CITCON United States Portland 2012 Sessions&lt;br /&gt;
&lt;br /&gt;
Back to the [[Main Page]]&lt;br /&gt;
&lt;br /&gt;
== 10:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[CI Anti-Patterns]]&lt;br /&gt;
#&lt;br /&gt;
#[[Test Scope unit vs functional vs dev vs QA]]&lt;br /&gt;
#[[Consolidated dashboard reporting of unit tests]]&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 11:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[Detox the testing pyramid]]&lt;br /&gt;
#[[Bringing Automation to Manual Testers]]&lt;br /&gt;
#[[Out of the quagmire]]&lt;br /&gt;
#[[Sani Opinions pros cons]]&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 2:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# [[Anti-Automated Test Patterns]]&lt;br /&gt;
# &lt;br /&gt;
# [[Lets Play TDD]]&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 3:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 4:30 Topics ==&lt;br /&gt;
&lt;br /&gt;
# [[Why won&amp;#039;t this work? Antidotes to resistance]]&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Takeaways from Citcon 2012]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Table View ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! #&lt;br /&gt;
! 10:00&lt;br /&gt;
! 11:15&lt;br /&gt;
! 2:00&lt;br /&gt;
! 3:15&lt;br /&gt;
! 4:30&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| [[CI Anti-Patterns]]&lt;br /&gt;
| [[Detox the testing pyramid]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 2&lt;br /&gt;
|&lt;br /&gt;
| [[Bringing Automation to Manual Testers]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 3&lt;br /&gt;
| [[Test Scope unit vs functional vs dev vs QA]]&lt;br /&gt;
| [[Out of the quagmire]]&lt;br /&gt;
| &lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 4&lt;br /&gt;
| [[Consolidated dashboard reporting of unit tests]]&lt;br /&gt;
| [[Sani Opinions pros cons]]&lt;br /&gt;
| [[Lets Play TDD]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Why_won%27t_this_work%3F_Antidotes_to_resistance&amp;diff=14589</id>
		<title>Why won&#039;t this work? Antidotes to resistance</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Why_won%27t_this_work%3F_Antidotes_to_resistance&amp;diff=14589"/>
		<updated>2012-09-23T00:35:00Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Common reasons of &amp;#039;&amp;#039;&amp;#039;Why can&amp;#039;t this work?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* We&amp;#039;re afraid of change. &lt;br /&gt;
* Sunk cost. Because we&amp;#039;re already paying for X product, and we want to use it. But could save money by dropping it.&lt;br /&gt;
* &amp;quot;Structure of magic 1 &amp;amp; 2&amp;quot; NLP book that pulls out structure of language&lt;br /&gt;
* Because our teams are in separate locations.&lt;br /&gt;
* No permission support. &amp;quot;Our managers won&amp;#039;t let us.&amp;quot;&lt;br /&gt;
* Our developers will rebel against a new process&lt;br /&gt;
* Everyone will quit.&lt;br /&gt;
* We don&amp;#039;t have time. We&amp;#039;re under pressure to deliver.&lt;br /&gt;
* It&amp;#039;s legacy. Lots of technical debt. Lots of stuff runs on batch processes.&lt;br /&gt;
* Because that&amp;#039;s not scrum. That&amp;#039;s not agile.&lt;br /&gt;
* What&amp;#039;s in it for me? Why?&lt;br /&gt;
* It might make me look bad.&lt;br /&gt;
* I don&amp;#039;t own it. Not my problem.&lt;br /&gt;
* Where&amp;#039;s data that will prove that this work.&lt;br /&gt;
* We don&amp;#039;t know how&lt;br /&gt;
* Only works on toy projects&lt;br /&gt;
* We don&amp;#039;t have a problem&lt;br /&gt;
* That&amp;#039;s too much new stuff. Quit teaching me these new things.&lt;br /&gt;
* Hard sell to have people try synchronous integration&lt;br /&gt;
* People should just do the right thing without it.&lt;br /&gt;
* That&amp;#039;s not enterprise technology.&lt;br /&gt;
* We should detect that in code review&lt;br /&gt;
* We have smart people we shouldn&amp;#039;t be telling them what to do&lt;br /&gt;
* Can&amp;#039;t prove the ROI on my stuff. Seen it on the data, but can&amp;#039;t prove that&lt;br /&gt;
* It&amp;#039;s too much work.&lt;br /&gt;
* Because we&amp;#039;ve followed the path of least resistance&lt;br /&gt;
* Resisting planning ahead and gantt does&lt;br /&gt;
* My ____ doesn&amp;#039;t care&lt;br /&gt;
* It&amp;#039;ll slow us down.&lt;br /&gt;
* That&amp;#039;s hard&lt;br /&gt;
* We only have a few users&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Antidotes&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
What is &amp;quot;this&amp;quot;? Missing referential indexes - &amp;quot;This&amp;quot; is generalized, and not specific enough.&lt;br /&gt;
&lt;br /&gt;
Most of these sound like false objections. Sometimes we should honor where these are coming from.&lt;br /&gt;
Asking for help is an awesome thing to do. These are not constructive objections.&lt;br /&gt;
&lt;br /&gt;
First objection you hear is always a false objection, and it&amp;#039;s not the real objection. The first answer is trying to get rid of you.&lt;br /&gt;
&lt;br /&gt;
Objections arise from a failure to establish value. Not establish problem that is serious enough to solve. Objections come from emotional reasons, and they are not from a rational perspective, and need to be considered from an emotional perspective. &lt;br /&gt;
&lt;br /&gt;
Need to live in other person&amp;#039;s shoes, and see what their concerns are form an emotional POV.&lt;br /&gt;
&lt;br /&gt;
List of eight common sales objections. If you can&amp;#039;t talk to these eight points, then you won&amp;#039;t be able to close sales.&lt;br /&gt;
&lt;br /&gt;
Politely ignore in order to dig deeper. If you push someone, then they&amp;#039;ll just strengthen their unreasonable position.  e.g. &amp;quot;Managers told us we can&amp;#039;t do unit testing.&amp;quot; &amp;quot;Must be frustrating to not be able to test your code.&amp;quot;  Reflect their perspective, believe it and be empathetic. Assume positive intent, and that they&amp;#039;re doing the best thing that they know how to do in the situation. In their mind, they&amp;#039;re doing it for noble reasons. &lt;br /&gt;
&lt;br /&gt;
Showing empathy and really understanding what they&amp;#039;re going through and understanding their pain and what they&amp;#039;re going through is really important.  Sympathy can reinforce their pain / I&amp;#039;m better than you, and I&amp;#039;ll give you pity and hand out. Empathy and compassion is a lot better. Compassion may be a better word.&lt;br /&gt;
&lt;br /&gt;
Near enemy. Compassion is the genuine compassion. Pity is that it&amp;#039;s too bad that you&amp;#039;re suffering / It sucks to be you. Compassion insights action to help. Compassion includes wanting to help.  &lt;br /&gt;
&lt;br /&gt;
One big motivation is reducing suffering.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Slight of mouth&amp;quot; patterns of taking language and reframing it.&lt;br /&gt;
&lt;br /&gt;
Assume what other people think is mind reading. Looking for invalid attributions.&lt;br /&gt;
&lt;br /&gt;
Get backup support to come support what you&amp;#039;re advocating for. Senior business person, and want to do the right thing even if it takes longer. 2nd best answer is no.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rejecting help, they believe that they&amp;#039;re doing the right thing for the business. That it&amp;#039;ll be a waste of time, and we should just get to work and get stuff done to complete our deadline. We&amp;#039;re already behind. Upper level management felt like a big problem was there, and they wanted to invest money to fix it, but developers were resistant.&lt;br /&gt;
&lt;br /&gt;
CTO should say we should do TDD, and there&amp;#039;s resistant.&lt;br /&gt;
&lt;br /&gt;
Interviewing each department, and understanding pain points of each department. In the end there were commonalities of not knowing how to get things to go through.&lt;br /&gt;
&lt;br /&gt;
Meet with individual contributors, and lack of development environments. Met with VP who had budgetary problem, and didn&amp;#039;t believe that it&amp;#039;s a problem. But didn&amp;#039;t believe that this wasn&amp;#039;t their biggest problem. &lt;br /&gt;
&lt;br /&gt;
Could be that it was asking VP to do something when other people need to change. Seemed to be mental block, and wasn&amp;#039;t able to do it after 3 tries. Steamroller of reality wasn&amp;#039;t happening, it changed a year later.&lt;br /&gt;
&lt;br /&gt;
Plant seeds. Sometimes you just have to let go.&lt;br /&gt;
&lt;br /&gt;
Unrealistic expectation of how fast you can learn agile stuff. It takes time.&lt;br /&gt;
&lt;br /&gt;
Celebrate the successes.&lt;br /&gt;
&lt;br /&gt;
Sometimes the most successful technique is to go away for a while. Don&amp;#039;t become too dependent. Let them know that he&amp;#039;ll be going away. Job is not to do it for them. Your job is to take this on and decide if you continue to do this. Tell them that this iteration is yours. Take well-timed bathroom breaks. A planning meeting, and they&amp;#039;re not showing up (Take a walk for 15 minutes)  This is setting up &amp;quot;responsibility boundaries&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Allow small failures. And all sorts of failures. Age-appropriate failures.&lt;br /&gt;
&lt;br /&gt;
Do &amp;quot;One small thing&amp;quot; pattern&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Try and see&amp;quot; pattern. See if this comes to fruition. Try it, and see what works and see what works out.&lt;br /&gt;
&lt;br /&gt;
Ask permission to be rigorous. Try to see it and see how it works.&lt;br /&gt;
&lt;br /&gt;
Leverage inertia&lt;br /&gt;
&lt;br /&gt;
Ask permission to help them&lt;br /&gt;
&lt;br /&gt;
Don&amp;#039;t inflict help.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Notes by Kent Bye&amp;#039;&amp;#039;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Why_won%27t_this_work%3F_Antidotes_to_resistance&amp;diff=14588</id>
		<title>Why won&#039;t this work? Antidotes to resistance</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Why_won%27t_this_work%3F_Antidotes_to_resistance&amp;diff=14588"/>
		<updated>2012-09-23T00:34:27Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: Created page with &amp;quot;Common reasons of &amp;#039;&amp;#039;&amp;#039;Why can&amp;#039;t this work?&amp;#039;&amp;#039;&amp;#039; * We&amp;#039;re afraid of change.  * Sunk cost. Because we&amp;#039;re already paying for X product, and we want to use it. But could save money by...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Common reasons of &amp;#039;&amp;#039;&amp;#039;Why can&amp;#039;t this work?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* We&amp;#039;re afraid of change. &lt;br /&gt;
* Sunk cost. Because we&amp;#039;re already paying for X product, and we want to use it. But could save money by dropping it.&lt;br /&gt;
* &amp;quot;Structure of magic 1 &amp;amp; 2&amp;quot; NLP book that pulls out structure of language&lt;br /&gt;
Because our teams are in separate locations.&lt;br /&gt;
* No permission support. &amp;quot;Our managers won&amp;#039;t let us.&amp;quot;&lt;br /&gt;
* Our developers will rebel against a new process&lt;br /&gt;
* Everyone will quit.&lt;br /&gt;
* We don&amp;#039;t have time. We&amp;#039;re under pressure to deliver.&lt;br /&gt;
* It&amp;#039;s legacy. Lots of technical debt. Lots of stuff runs on batch processes.&lt;br /&gt;
* Because that&amp;#039;s not scrum. That&amp;#039;s not agile.&lt;br /&gt;
* What&amp;#039;s in it for me? Why?&lt;br /&gt;
* It might make me look bad.&lt;br /&gt;
* I don&amp;#039;t own it. Not my problem.&lt;br /&gt;
* Where&amp;#039;s data that will prove that this work.&lt;br /&gt;
* We don&amp;#039;t know how&lt;br /&gt;
* Only works on toy projects&lt;br /&gt;
* We don&amp;#039;t have a problem&lt;br /&gt;
* That&amp;#039;s too much new stuff. Quit teaching me these new things.&lt;br /&gt;
* Hard sell to have people try synchronous integration&lt;br /&gt;
* People should just do the right thing without it.&lt;br /&gt;
* That&amp;#039;s not enterprise technology.&lt;br /&gt;
* We should detect that in code review&lt;br /&gt;
* We have smart people we shouldn&amp;#039;t be telling them what to do&lt;br /&gt;
* Can&amp;#039;t prove the ROI on my stuff. Seen it on the data, but can&amp;#039;t prove that&lt;br /&gt;
* It&amp;#039;s too much work.&lt;br /&gt;
* Because we&amp;#039;ve followed the path of least resistance&lt;br /&gt;
* Resisting planning ahead and gantt does&lt;br /&gt;
* My ____ doesn&amp;#039;t care&lt;br /&gt;
* It&amp;#039;ll slow us down.&lt;br /&gt;
* That&amp;#039;s hard&lt;br /&gt;
* We only have a few users&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Antidotes&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
What is &amp;quot;this&amp;quot;? Missing referential indexes - &amp;quot;This&amp;quot; is generalized, and not specific enough.&lt;br /&gt;
&lt;br /&gt;
Most of these sound like false objections. Sometimes we should honor where these are coming from.&lt;br /&gt;
Asking for help is an awesome thing to do. These are not constructive objections.&lt;br /&gt;
&lt;br /&gt;
First objection you hear is always a false objection, and it&amp;#039;s not the real objection. The first answer is trying to get rid of you.&lt;br /&gt;
&lt;br /&gt;
Objections arise from a failure to establish value. Not establish problem that is serious enough to solve. Objections come from emotional reasons, and they are not from a rational perspective, and need to be considered from an emotional perspective. &lt;br /&gt;
&lt;br /&gt;
Need to live in other person&amp;#039;s shoes, and see what their concerns are form an emotional POV.&lt;br /&gt;
&lt;br /&gt;
List of eight common sales objections. If you can&amp;#039;t talk to these eight points, then you won&amp;#039;t be able to close sales.&lt;br /&gt;
&lt;br /&gt;
Politely ignore in order to dig deeper. If you push someone, then they&amp;#039;ll just strengthen their unreasonable position.  e.g. &amp;quot;Managers told us we can&amp;#039;t do unit testing.&amp;quot; &amp;quot;Must be frustrating to not be able to test your code.&amp;quot;  Reflect their perspective, believe it and be empathetic. Assume positive intent, and that they&amp;#039;re doing the best thing that they know how to do in the situation. In their mind, they&amp;#039;re doing it for noble reasons. &lt;br /&gt;
&lt;br /&gt;
Showing empathy and really understanding what they&amp;#039;re going through and understanding their pain and what they&amp;#039;re going through is really important.  Sympathy can reinforce their pain / I&amp;#039;m better than you, and I&amp;#039;ll give you pity and hand out. Empathy and compassion is a lot better. Compassion may be a better word.&lt;br /&gt;
&lt;br /&gt;
Near enemy. Compassion is the genuine compassion. Pity is that it&amp;#039;s too bad that you&amp;#039;re suffering / It sucks to be you. Compassion insights action to help. Compassion includes wanting to help.  &lt;br /&gt;
&lt;br /&gt;
One big motivation is reducing suffering.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Slight of mouth&amp;quot; patterns of taking language and reframing it.&lt;br /&gt;
&lt;br /&gt;
Assume what other people think is mind reading. Looking for invalid attributions.&lt;br /&gt;
&lt;br /&gt;
Get backup support to come support what you&amp;#039;re advocating for. Senior business person, and want to do the right thing even if it takes longer. 2nd best answer is no.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rejecting help, they believe that they&amp;#039;re doing the right thing for the business. That it&amp;#039;ll be a waste of time, and we should just get to work and get stuff done to complete our deadline. We&amp;#039;re already behind. Upper level management felt like a big problem was there, and they wanted to invest money to fix it, but developers were resistant.&lt;br /&gt;
&lt;br /&gt;
CTO should say we should do TDD, and there&amp;#039;s resistant.&lt;br /&gt;
&lt;br /&gt;
Interviewing each department, and understanding pain points of each department. In the end there were commonalities of not knowing how to get things to go through.&lt;br /&gt;
&lt;br /&gt;
Meet with individual contributors, and lack of development environments. Met with VP who had budgetary problem, and didn&amp;#039;t believe that it&amp;#039;s a problem. But didn&amp;#039;t believe that this wasn&amp;#039;t their biggest problem. &lt;br /&gt;
&lt;br /&gt;
Could be that it was asking VP to do something when other people need to change. Seemed to be mental block, and wasn&amp;#039;t able to do it after 3 tries. Steamroller of reality wasn&amp;#039;t happening, it changed a year later.&lt;br /&gt;
&lt;br /&gt;
Plant seeds. Sometimes you just have to let go.&lt;br /&gt;
&lt;br /&gt;
Unrealistic expectation of how fast you can learn agile stuff. It takes time.&lt;br /&gt;
&lt;br /&gt;
Celebrate the successes.&lt;br /&gt;
&lt;br /&gt;
Sometimes the most successful technique is to go away for a while. Don&amp;#039;t become too dependent. Let them know that he&amp;#039;ll be going away. Job is not to do it for them. Your job is to take this on and decide if you continue to do this. Tell them that this iteration is yours. Take well-timed bathroom breaks. A planning meeting, and they&amp;#039;re not showing up (Take a walk for 15 minutes)  This is setting up &amp;quot;responsibility boundaries&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Allow small failures. And all sorts of failures. Age-appropriate failures.&lt;br /&gt;
&lt;br /&gt;
Do &amp;quot;One small thing&amp;quot; pattern&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Try and see&amp;quot; pattern. See if this comes to fruition. Try it, and see what works and see what works out.&lt;br /&gt;
&lt;br /&gt;
Ask permission to be rigorous. Try to see it and see how it works.&lt;br /&gt;
&lt;br /&gt;
Leverage inertia&lt;br /&gt;
&lt;br /&gt;
Ask permission to help them&lt;br /&gt;
&lt;br /&gt;
Don&amp;#039;t inflict help.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Notes by Kent Bye&amp;#039;&amp;#039;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONNA2012Sessions&amp;diff=14587</id>
		<title>CITCONNA2012Sessions</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONNA2012Sessions&amp;diff=14587"/>
		<updated>2012-09-23T00:34:06Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CITCON United States Portland 2012 Sessions&lt;br /&gt;
&lt;br /&gt;
Back to the [[Main Page]]&lt;br /&gt;
&lt;br /&gt;
== 10:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[CI Anti-Patterns]]&lt;br /&gt;
#&lt;br /&gt;
#[[Test Scope unit vs functional vs dev vs QA]]&lt;br /&gt;
#[[Consolidated dashboard reporting of unit tests]]&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 11:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[Detox the testing pyramid]]&lt;br /&gt;
#[[Bringing Automation to Manual Testers]]&lt;br /&gt;
#[[Out of the quagmire]]&lt;br /&gt;
#[[Sani Opinions pros cons]]&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 2:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# [[Anti-Automated Test Patterns]]&lt;br /&gt;
# &lt;br /&gt;
# [[Lets Play TDD]]&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 3:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 4:30 Topics ==&lt;br /&gt;
&lt;br /&gt;
# [[Why won&amp;#039;t this work? Antidotes to resistance]]&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Table View ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! #&lt;br /&gt;
! 10:00&lt;br /&gt;
! 11:15&lt;br /&gt;
! 2:00&lt;br /&gt;
! 3:15&lt;br /&gt;
! 4:30&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| [[CI Anti-Patterns]]&lt;br /&gt;
| [[Detox the testing pyramid]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 2&lt;br /&gt;
|&lt;br /&gt;
| [[Bringing Automation to Manual Testers]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 3&lt;br /&gt;
| [[Test Scope unit vs functional vs dev vs QA]]&lt;br /&gt;
| [[Out of the quagmire]]&lt;br /&gt;
| &lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 4&lt;br /&gt;
| [[Consolidated dashboard reporting of unit tests]]&lt;br /&gt;
| [[Sani Opinions pros cons]]&lt;br /&gt;
| [[Lets Play TDD]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Bringing_Automation_to_Manual_Testers&amp;diff=14584</id>
		<title>Bringing Automation to Manual Testers</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Bringing_Automation_to_Manual_Testers&amp;diff=14584"/>
		<updated>2012-09-22T22:11:49Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Bringing automation to manual testers (with no budget)&lt;br /&gt;
&lt;br /&gt;
There are 9 development cross-functional agile teams. 2 testers and 5 developers. Tester vs. developer? Staff of testers who only do manual testers. Defect leakage is lower.  No problem form an efficiency POV.&lt;br /&gt;
&lt;br /&gt;
If not have automation training, then job satisfaction is down b/c Google says manual testing is dead. Have no training budget for tools or training?&lt;br /&gt;
&lt;br /&gt;
What could he do? Do a number of activities to create hand-on experience to do small tests on their own. Brian isn&amp;#039;t from a testing background. It should really come from the testers. Would love to have someone from testing department to step up and lead it. Brian is being a facilitator.&lt;br /&gt;
&lt;br /&gt;
Potential Tactics &lt;br /&gt;
* Technology association Oregon to have a panel discussion to host at their office. Invite local experts who have implemented automation, and get testers to come to hear success stories. Hear about tools to use, Learning Curve.&lt;br /&gt;
* Brown bags: Do a series of internal presentations about automation with the tools and code base.&lt;br /&gt;
* Academy classes: Talk for a topic and ask questions.&lt;br /&gt;
* Use power shell as a testing tool. Invoke domain business objects and do some testing on. &lt;br /&gt;
&lt;br /&gt;
Need to foster hands-on experience&lt;br /&gt;
* Record and play automation tool. 3 classes to work through an application of record and playback -- all doing the same one. Everyone who wanted to do testing could do that. See scripts that was generated, and talk about the script after the fact&lt;br /&gt;
* Then train the testers in programming over 6-8 weeks, and have homework. Train control structures and basic OO programming.  Could buy books.&lt;br /&gt;
* Organize a user group to show others. &lt;br /&gt;
&lt;br /&gt;
* Identify places where automation would be helpful.&lt;br /&gt;
* Create opportunities&lt;br /&gt;
&lt;br /&gt;
How much free time do testers have? Are they storming or forming?&lt;br /&gt;
If they have homework, then they&amp;#039;d have to do extra time.&lt;br /&gt;
&lt;br /&gt;
Introduced test automation, and cut regression test time by a certain amount of %.&lt;br /&gt;
Personnel turn-over.&lt;br /&gt;
If need 20 hours to automate test, then is there going to get pushback.&lt;br /&gt;
&lt;br /&gt;
Commit to a story that&amp;#039;s focused on quality during each sprint.&lt;br /&gt;
Tag stories that you can track and report on them.&lt;br /&gt;
&lt;br /&gt;
Did some training classes where the students train each other in Java. People who are novices are better at training each other. Need one person who knows what they&amp;#039;re doing.&lt;br /&gt;
&lt;br /&gt;
Did training of OO analysis and design at a bank. Cultural differences, and they didn&amp;#039;t take well to outside training. Start small and engage testers at developing their own curriculum. Take into account the culture, and it&amp;#039;s more likely to succeed. Start small and grow it rather than starting with a big bang. Time is ripe, and there&amp;#039;s enough momentum out there that needs to be pushed a bit.&lt;br /&gt;
&lt;br /&gt;
Testing new feature, is it difficult and take significantly more time to test it automatically.&lt;br /&gt;
&lt;br /&gt;
Anti-pattern is short-term thinking rather than long-term planning. If product manager isn&amp;#039;t supportive from a timeline perspective, then the long-term benefits will be the first to go.&lt;br /&gt;
&lt;br /&gt;
Has to be a commitment to the long-term to get out of the short-term, fire-fighting mentality. And slowly quality slowly degrades.&lt;br /&gt;
&lt;br /&gt;
IT is on board for Agile, but didn&amp;#039;t educate business about agile. Need to slow down in the short-term in order to go faster in the long-term.&lt;br /&gt;
&lt;br /&gt;
Implemented agile, and the deliverable times is down to 2 weeks, and the QA became the bottleneck. Then had to go back up to 3 weeks.  Need to automate the testing.  Review the tools. Have the engineer go to QA staff and train them.  QA staff is heads down on their 3-week cycles.&lt;br /&gt;
&lt;br /&gt;
Fostering the sense of leadership, and have the developers start to evaluate&lt;br /&gt;
&lt;br /&gt;
Gauge their interest. How easy for them to pick up.&lt;br /&gt;
&lt;br /&gt;
In choosing tools: Easier to train on? Or better to get a tool best for their technology stack?&lt;br /&gt;
Potentially have a BDD.&lt;br /&gt;
&lt;br /&gt;
Developers will train the QA staff to take the test for Java certification course done by Oracle.&lt;br /&gt;
&lt;br /&gt;
Strategy for training people: Local tech user group to do their own training. Struggling how to teach novices who don&amp;#039;t have full time to learning this. Start with 1-day workshop to get development environment set up. Added a 2nd user group meeting for new users with 1/2 trainers and 1/2 people who are novices.  Ask experts how to do a specific task.  As an expert, you do them so much that don&amp;#039;t think about them as concepts.  As a practice, then you write down the question so that you can teach it to them the next time.&lt;br /&gt;
&lt;br /&gt;
If someone asks it, then others are thinking it.&lt;br /&gt;
&lt;br /&gt;
* Python and Ruby groups are having a 2nd meeting every month with an occasional workshop. Beginning Ruby meet-up.  Lots of people who want to learn. Experienced people know that you can learn bad things, and so testing helps to learn the good ones. Went over xUnit and BDD and some Cucumber stuff, and there will be more sessions like that in the future.&lt;br /&gt;
&lt;br /&gt;
A ladder of tasks and competencies for people contribute and start to being able to contribute. In Drupal, there&amp;#039;s the Drupal ladder.&lt;br /&gt;
&lt;br /&gt;
* Tester pull down developer code, and build it and run it.&lt;br /&gt;
* Look at unit test, and be able to read it. Start to turn Blackbox testers into Whitebox testers&lt;br /&gt;
* Open hatch: One Barrier to entry is using version control. How to make a commit to git, and it&amp;#039;s like a video game. Make a pull, and make a change. And then they get points or stars.&lt;br /&gt;
&lt;br /&gt;
* Automate low-hanging fruit. Start with some easy tests to get experience so that they can get through rough spots. Start small. Case studies: Need to have small projects where automation written didn&amp;#039;t need to be preserved beyond task that they could do within the course of their regular testing.&lt;br /&gt;
* Optimizing whatever is really repetitive and whatever would do the most impact in the least amount of work. Automate what you&amp;#039;re doing a lot, and look for something has a decent bang for your buck.&lt;br /&gt;
&lt;br /&gt;
* SQA User Group is just getting started up -- http://www.sqaug.org/&lt;br /&gt;
* Work on acceptance criteria as a cross-functional team, and help business see the benefit so that we can invest more time and energy into QA.&lt;br /&gt;
&lt;br /&gt;
* Started using Fitness tool to automated business-facing acceptance testing, and then the product managers would look at it within English. But the product manager didn&amp;#039;t care or look at the test. Testers and developers had lots of conversations which were really helpful.&lt;br /&gt;
&lt;br /&gt;
Business object layer exposed through an API, and the testers could give a scripting language so that they could reach the test objects. Testers could write business-facing acceptance tests with Cucumber, and then it&amp;#039;d be a good step towards automation.&lt;br /&gt;
&lt;br /&gt;
Java has a lot of odd rules for non-programmers, and it&amp;#039;d be intimidating because it&amp;#039;s not intuitive.&lt;br /&gt;
&lt;br /&gt;
Programmers know more than one language. Learn the easiest things to learn, and once you have the concepts, then going to more complicated languages like Java become easier. Don&amp;#039;t try to learn everything at once. Make it cumulative. &lt;br /&gt;
&lt;br /&gt;
If you need budget, then ask &amp;quot;How much is QA saving you?&amp;quot; instead of &amp;quot;How much is QA costing?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Notes by Kent Bye&amp;#039;&amp;#039;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Anti-Automated_Test_Patterns&amp;diff=14583</id>
		<title>Anti-Automated Test Patterns</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Anti-Automated_Test_Patterns&amp;diff=14583"/>
		<updated>2012-09-22T22:10:21Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Anti-Automated Test Patterns&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Ice cream cone&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Inverse of triangle of having lots of manual testing at the top, and not a lot of unit tests at the bottom. Break it down by risk.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Happy Path&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Basic function of the system, but it&amp;#039;s not really testing anything serious. False sense of being complete. Don&amp;#039;t have complete code coverage that you need. Need to do more comprehensive testing.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Local Hero&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Wrote something that will always pass in environment because of how you&amp;#039;ve written it. Seems risky because you expect one thing, but explodes in the real world. Learn from the regressions that you get out of it. Get input from customer service. May not be looking at the business requirements properly. &lt;br /&gt;
&lt;br /&gt;
Perhaps deploy to a staging environment? May have lost the customer focus. Have customer service to testing.&lt;br /&gt;
&lt;br /&gt;
Beautiful thing in real world, but you&amp;#039;re not testing it in the same way that the users are using it. They may be doing more complicated tasks than you&amp;#039;re testing for.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;2nd Class Citizen&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
A lot of duplicate code, and so the code bloats up, and it becomes a maintenance headache. Same thing over and over again.  You might need to refactor your automated test.  May need to reuse some of the code that you&amp;#039;ve written.&lt;br /&gt;
&lt;br /&gt;
If it&amp;#039;s a high-risk thing, then perhaps refactor. If it&amp;#039;s used often, then use re-use.  Do evaluation before you start to refactor. Don&amp;#039;t pay off all of your tech debt all at once.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Chain Gang&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Set up. Task and then Tear down. Merge two tests into one set up and tear down. Create dependencies within a chain of tests. Set up and tear down can be a PITA.&lt;br /&gt;
Can be acceptable in some time if the tests are passing and you feel comfortable that they&amp;#039;re valid. Advice is to evaluate the risk in the gang, and if there are problems, then split it up.&lt;br /&gt;
Avoid it, but when you need it, then use it.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Mockery&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Use the real stuff. Over mock everything. Need something. Mock it. Not using the real world or the real servers you&amp;#039;ll be working with. If mocking REST and SOAP calls, and then it fails on the live data.  If there&amp;#039;s a lot of mockery. &lt;br /&gt;
&lt;br /&gt;
The opposite of mockery is the local hero. &lt;br /&gt;
&lt;br /&gt;
For example, with a maven repo, grab top layer, and it&amp;#039;ll work. Customer pulls from inside, and then it broke the code, and weren&amp;#039;t testing it. Purpose of mock objects is to avoid inspector.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Inspector&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
It knows everything about the system. Object knows everything and tightly coupled. If changes, then it breaks. Joined models together, and if you remove it, then it breaks.&lt;br /&gt;
&lt;br /&gt;
White box testing is something that if you test it, then it breaks. Perhaps only use it on edge cases.&lt;br /&gt;
&lt;br /&gt;
Break it up, and make it less dependent and decoupled.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Golddigger&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Greedy and they want everything in terms of resources. They need to have 50 things set up. Lots of time required. Can you really break it up? It takes 2 hours to set up. Pre-set up while doing a deployment. Deploy what the things that the gold digger needs.&lt;br /&gt;
&lt;br /&gt;
At what point in the process do you right that test? Do it at the end? Or do it at the beginning?&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Anal probe / Contract Violator&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Write test and get to the internals, and then override OO fundamentals with private and public stuff. Heavily white, and get into internals of anything. Are you testing it in a realistic way. Playing with the innards of the code, and if things change, then you&amp;#039;re screwed. Violating an object. &lt;br /&gt;
&lt;br /&gt;
Would exploratory or ad hoc testing be enough for this test? Might be blinded by real part of the test.&lt;br /&gt;
&lt;br /&gt;
If something is inside that you need to test, then it&amp;#039;s a design problem where it needs to exposed of whatever needs to be available. Need to reword code.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Test with no name&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
There&amp;#039;s a bug and name it non-sensical name like &amp;quot;Test CR2386.&amp;quot; Solution is to use better names and do it right. Name doesn&amp;#039;t tell you anything. This is more of a bad practice than an anti-pattern.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Slow Poke&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Takes a lot of time to run. Could potentially run in parallel. Or potentially break it up. Set up your own environment and make it aware. Make not put in CI or CD. Only run on release candidate instead of a daily build. It&amp;#039;s likely to be an integration test, but it could be at any level.  Database dependencies and network latencies. Can&amp;#039;t always run integration test.  Could you mock something?  &lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Giant / God Complex Test / Boss Hog&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
If it&amp;#039;s a big test, that is consuming. Way too much code, and may be a part of a chain gang. It&amp;#039;s very complex&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Wait &amp;amp; See&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Using Sleep. Love and hate relationship with sleep. Press button, and sleep. You&amp;#039;re not checking the validity of the system. You&amp;#039;re going to race condition. It&amp;#039;ll cause flickering. Solution is to not use sleep. If you have sleeps everywhere, and make sure you don&amp;#039;t have interrupts.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;China Vase&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Code is fragile. Selenium is too fragile. Biggest issue in industry. Every one keeps complaining about that&amp;#039;s it&amp;#039;s too long or it&amp;#039;s too fragile.  How do we deal with it? Break down into more stable pieces. Might have some other anti-patterns happens.  More concerned with the China passes than fails.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Flickering Lights&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Flickers between passing and failing. Didn&amp;#039;t write the test correctly. Too much mockery or too much golddigging. Had two different load balances, and direct to a working and not working breaking. It can also usually it&amp;#039;s an environmental issue. It can be demoralizing, and testers get used to living with red lights. Psychologically keep pushing button until it passes, which is a bad habit. If I hit restart it 3 times, and then I&amp;#039;ll investigate it. [laugh] If it doesn&amp;#039;t pass 1st time, then investigate it.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Pig&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Tests that don&amp;#039;t clean up after themselves, which can lead to flickering lights. Dependencies that relate to each other.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Edge Play&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Playing on the edges too much. Waste the test cycles of testing things that the user doesn&amp;#039;t do. High-risk, and you might only run it once. Reduce it form main test suites or take it out.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Customer Don&amp;#039;t Do that&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Testing things that customer doesn&amp;#039;t actually do.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Fear the automator&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
The fear that manual testers that automation will eliminate their jobs. Deliberate sabotage, and party when automated test fails. It&amp;#039;s a management issue, and will loose morale and testing cycles.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Metrics Lie&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Management will want to know how much test cycles are saving. Wanting ROI metrics, and sacrifice. Opening tickets on small tasks just to get the metrics to be higher. &amp;quot;Get lots of bugs now!&amp;quot; to justify bugs. &amp;quot;If you find a bug, then cover it up&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Test doesn&amp;#039;t test anything&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Happens in unit a lot. Who&amp;#039;s responsible for that? Developer or tester? Whomever wrote it or is maintaining it. Who&amp;#039;s&amp;#039; responsible for what? SOMEONE is responsible. If it doesn&amp;#039;t do what it&amp;#039;s supposed to do, then someone will need to take responsible.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Who owns this?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
No one owns it, and it gets ignored. Transparency and communication is the solution. Project team leads report to each other, and they have to fight it out. Denying responsibility, and have to prove it&amp;#039;s not yours. Manager should know, but sometimes there&amp;#039;s no management structure.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;How are these related?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Boss hog and slow poke are connected&lt;br /&gt;
Inspector and Gold Digger are connected.&lt;br /&gt;
China Vase and Flickering Lights are connected&lt;br /&gt;
If you&amp;#039;re seeing flickering lights, then root cause could be Pig&lt;br /&gt;
2nd class citizen and the Flickering lights would be related.&lt;br /&gt;
Gold Digger and Inspector are the same thing.&lt;br /&gt;
Chain gang would lead to Boss Hog&lt;br /&gt;
Mockery is connected to flickering lights&lt;br /&gt;
Local hero goes with flickering lights. Works fine in staging, but not production&lt;br /&gt;
Ice cream cone is independent. Break it down by risk is the answer to anything.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Bad practices&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Test with no name, wait and see, 2nd class citizen&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Notes by Kent Bye&amp;#039;&amp;#039;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Anti-Automated_Test_Patterns&amp;diff=14582</id>
		<title>Anti-Automated Test Patterns</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Anti-Automated_Test_Patterns&amp;diff=14582"/>
		<updated>2012-09-22T22:09:46Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Anti-Automated Test Patterns&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Ice cream cone&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Inverse of triangle of having lots of manual testing at the top, and not a lot of unit tests at the bottom. Break it down by risk.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Happy Path&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Basic function of the system, but it&amp;#039;s not really testing anything serious. False sense of being complete. Don&amp;#039;t have complete code coverage that you need. Need to do more comprehensive testing.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Local Hero&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Wrote something that will always pass in environment because of how you&amp;#039;ve written it. Seems risky because you expect one thing, but explodes in the real world. Learn from the regressions that you get out of it. Get input from customer service. May not be looking at the business requirements properly. &lt;br /&gt;
&lt;br /&gt;
Perhaps deploy to a staging environment? May have lost the customer focus. Have customer service to testing.&lt;br /&gt;
&lt;br /&gt;
Beautiful thing in real world, but you&amp;#039;re not testing it in the same way that the users are using it. They may be doing more complicated tasks than you&amp;#039;re testing for.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;2nd Class Citizen&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
A lot of duplicate code, and so the code bloats up, and it becomes a maintenance headache. Same thing over and over again.  You might need to refactor your automated test.  May need to reuse some of the code that you&amp;#039;ve written.&lt;br /&gt;
&lt;br /&gt;
If it&amp;#039;s a high-risk thing, then perhaps refactor. If it&amp;#039;s used often, then use re-use.  Do evaluation before you start to refactor. Don&amp;#039;t pay off all of your tech debt all at once.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Chain Gang&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Set up. Task and then Tear down. Merge two tests into one set up and tear down. Create dependencies within a chain of tests. Set up and tear down can be a PITA.&lt;br /&gt;
Can be acceptable in some time if the tests are passing and you feel comfortable that they&amp;#039;re valid. Advice is to evaluate the risk in the gang, and if there are problems, then split it up.&lt;br /&gt;
Avoid it, but when you need it, then use it.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Mockery&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Use the real stuff. Over mock everything. Need something. Mock it. Not using the real world or the real servers you&amp;#039;ll be working with. If mocking REST and SOAP calls, and then it fails on the live data.  If there&amp;#039;s a lot of mockery. &lt;br /&gt;
&lt;br /&gt;
The opposite of mockery is the local hero. &lt;br /&gt;
&lt;br /&gt;
For example, with a maven repo, grab top layer, and it&amp;#039;ll work. Customer pulls from inside, and then it broke the code, and weren&amp;#039;t testing it. Purpose of mock objects is to avoid inspector.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Inspector&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
It knows everything about the system. Object knows everything and tightly coupled. If changes, then it breaks. Joined models together, and if you remove it, then it breaks.&lt;br /&gt;
&lt;br /&gt;
White box testing is something that if you test it, then it breaks. Perhaps only use it on edge cases.&lt;br /&gt;
&lt;br /&gt;
Break it up, and make it less dependent and decoupled.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Golddigger&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Greedy and they want everything in terms of resources. They need to have 50 things set up. Lots of time required. Can you really break it up? It takes 2 hours to set up. Pre-set up while doing a deployment. Deploy what the things that the gold digger needs.&lt;br /&gt;
&lt;br /&gt;
At what point in the process do you right that test? Do it at the end? Or do it at the beginning?&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Anal probe / Contract Violator&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Write test and get to the internals, and then override OO fundamentals with private and public stuff. Heavily white, and get into internals of anything. Are you testing it in a realistic way. Playing with the innards of the code, and if things change, then you&amp;#039;re screwed. Violating an object. &lt;br /&gt;
&lt;br /&gt;
Would exploratory or ad hoc testing be enough for this test? Might be blinded by real part of the test.&lt;br /&gt;
&lt;br /&gt;
If something is inside that you need to test, then it&amp;#039;s a design problem where it needs to exposed of whatever needs to be available. Need to reword code.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Test with no name&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
There&amp;#039;s a bug and name it non-sensical name like &amp;quot;Test CR2386.&amp;quot; Solution is to use better names and do it right. Name doesn&amp;#039;t tell you anything. This is more of a bad practice than an anti-pattern.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Slow Poke&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Takes a lot of time to run. Could potentially run in parallel. Or potentially break it up. Set up your own environment and make it aware. Make not put in CI or CD. Only run on release candidate instead of a daily build. It&amp;#039;s likely to be an integration test, but it could be at any level.  Database dependencies and network latencies. Can&amp;#039;t always run integration test.  Could you mock something?  &lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Giant / God Complex Test / Boss Hog&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
If it&amp;#039;s a big test, that is consuming. Way too much code, and may be a part of a chain gang. It&amp;#039;s very complex&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Wait &amp;amp; See&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Using Sleep. Love and hate relationship with sleep. Press button, and sleep. You&amp;#039;re not checking the validity of the system. You&amp;#039;re going to race condition. It&amp;#039;ll cause flickering. Solution is to not use sleep. If you have sleeps everywhere, and make sure you don&amp;#039;t have interrupts.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;China Vase&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Code is fragile. Selenium is too fragile. Biggest issue in industry. Every one keeps complaining about that&amp;#039;s it&amp;#039;s too long or it&amp;#039;s too fragile.  How do we deal with it? Break down into more stable pieces. Might have some other anti-patterns happens.  More concerned with the China passes than fails.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Flickering Lights&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Flickers between passing and failing. Didn&amp;#039;t write the test correctly. Too much mockery or too much golddigging. Had two different load balances, and direct to a working and not working breaking. It can also usually it&amp;#039;s an environmental issue. It can be demoralizing, and testers get used to living with red lights. Psychologically keep pushing button until it passes, which is a bad habit. If I hit restart it 3 times, and then I&amp;#039;ll investigate it. [laugh] If it doesn&amp;#039;t pass 1st time, then investigate it.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Pig&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Tests that don&amp;#039;t clean up after themselves, which can lead to flickering lights. Dependencies that relate to each other.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Edge Play&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Playing on the edges too much. Waste the test cycles of testing things that the user doesn&amp;#039;t do. High-risk, and you might only run it once. Reduce it form main test suites or take it out.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Customer Don&amp;#039;t Do that&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Testing things that customer doesn&amp;#039;t actually do.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Fear the automator&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
The fear that manual testers that automation will eliminate their jobs. Deliberate sabotage, and party when automated test fails. It&amp;#039;s a management issue, and will loose morale and testing cycles.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;The Metrics Lie&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Management will want to know how much test cycles are saving. Wanting ROI metrics, and sacrifice. Opening tickets on small tasks just to get the metrics to be higher. &amp;quot;Get lots of bugs now!&amp;quot; to justify bugs. &amp;quot;If you find a bug, then cover it up&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Test doesn&amp;#039;t test anything&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Happens in unit a lot. Who&amp;#039;s responsible for that? Developer or tester? Whomever wrote it or is maintaining it. Who&amp;#039;s&amp;#039; responsible for what? SOMEONE is responsible. If it doesn&amp;#039;t do what it&amp;#039;s supposed to do, then someone will need to take responsible.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Who owns this?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
No one owns it, and it gets ignored. Transparency and communication is the solution. Project team leads report to each other, and they have to fight it out. Denying responsibility, and have to prove it&amp;#039;s not yours. Manager should know, but sometimes there&amp;#039;s no management structure.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;How are these related?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Boss hog and slow poke are connected&lt;br /&gt;
Inspector and Gold Digger are connected.&lt;br /&gt;
China Vase and Flickering Lights are connected&lt;br /&gt;
If you&amp;#039;re seeing flickering lights, then root cause could be Pig&lt;br /&gt;
2nd class citizen and the Flickering lights would be related.&lt;br /&gt;
Gold Digger and Inspector are the same thing.&lt;br /&gt;
Chain gang would lead to Boss Hog&lt;br /&gt;
Mockery is connected to flickering lights&lt;br /&gt;
Local hero goes with flickering lights. Works fine in staging, but not production&lt;br /&gt;
Ice cream cone is independent. Break it down by risk is the answer to anything.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Bad practices&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Test with no name, wait and see, 2nd class citizen&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Notes by Kent Bye&amp;quot;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Anti-Automated_Test_Patterns&amp;diff=14581</id>
		<title>Anti-Automated Test Patterns</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Anti-Automated_Test_Patterns&amp;diff=14581"/>
		<updated>2012-09-22T22:08:01Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anti-Automated Test Patterns&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;Ice cream cone&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Inverse of triangle of having lots of manual testing at the top, and not a lot of unit tests at the bottom. Break it down by risk.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Happy Path&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Basic function of the system, but it&amp;#039;s not really testing anything serious. False sense of being complete. Don&amp;#039;t have complete code coverage that you need. Need to do more comprehensive testing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Local Hero&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Wrote something that will always pass in environment because of how you&amp;#039;ve written it. Seems risky because you expect one thing, but explodes in the real world. Learn from the regressions that you get out of it. Get input from customer service. May not be looking at the business requirements properly. &lt;br /&gt;
&lt;br /&gt;
Perhaps deploy to a staging environment? May have lost the customer focus. Have customer service to testing.&lt;br /&gt;
&lt;br /&gt;
Beautiful thing in real world, but you&amp;#039;re not testing it in the same way that the users are using it. They may be doing more complicated tasks than you&amp;#039;re testing for.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2nd Class Citizen&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
A lot of duplicate code, and so the code bloats up, and it becomes a maintenance headache. Same thing over and over again.  You might need to refactor your automated test.  May need to reuse some of the code that you&amp;#039;ve written.&lt;br /&gt;
&lt;br /&gt;
If it&amp;#039;s a high-risk thing, then perhaps refactor. If it&amp;#039;s used often, then use re-use.  Do evaluation before you start to refactor. Don&amp;#039;t pay off all of your tech debt all at once.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Chain Gang&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Set up. Task and then Tear down. Merge two tests into one set up and tear down. Create dependencies within a chain of tests. Set up and tear down can be a PITA.&lt;br /&gt;
Can be acceptable in some time if the tests are passing and you feel comfortable that they&amp;#039;re valid. Advice is to evaluate the risk in the gang, and if there are problems, then split it up.&lt;br /&gt;
Avoid it, but when you need it, then use it.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Mockery&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Use the real stuff. Over mock everything. Need something. Mock it. Not using the real world or the real servers you&amp;#039;ll be working with. If mocking REST and SOAP calls, and then it fails on the live data.  If there&amp;#039;s a lot of mockery. &lt;br /&gt;
&lt;br /&gt;
The opposite of mockery is the local hero. &lt;br /&gt;
&lt;br /&gt;
For example, with a maven repo, grab top layer, and it&amp;#039;ll work. Customer pulls from inside, and then it broke the code, and weren&amp;#039;t testing it. Purpose of mock objects is to avoid inspector.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Inspector&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
It knows everything about the system. Object knows everything and tightly coupled. If changes, then it breaks. Joined models together, and if you remove it, then it breaks.&lt;br /&gt;
&lt;br /&gt;
White box testing is something that if you test it, then it breaks. Perhaps only use it on edge cases.&lt;br /&gt;
&lt;br /&gt;
Break it up, and make it less dependent and decoupled.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Golddigger&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Greedy and they want everything in terms of resources. They need to have 50 things set up. Lots of time required. Can you really break it up? It takes 2 hours to set up. Pre-set up while doing a deployment. Deploy what the things that the gold digger needs.&lt;br /&gt;
&lt;br /&gt;
At what point in the process do you right that test? Do it at the end? Or do it at the beginning?&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Anal probe / Contract Violator&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Write test and get to the internals, and then override OO fundamentals with private and public stuff. Heavily white, and get into internals of anything. Are you testing it in a realistic way. Playing with the innards of the code, and if things change, then you&amp;#039;re screwed. Violating an object. &lt;br /&gt;
&lt;br /&gt;
Would exploratory or ad hoc testing be enough for this test? Might be blinded by real part of the test.&lt;br /&gt;
&lt;br /&gt;
If something is inside that you need to test, then it&amp;#039;s a design problem where it needs to exposed of whatever needs to be available. Need to reword code.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Test with no name&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
There&amp;#039;s a bug and name it non-sensical name like &amp;quot;Test CR2386.&amp;quot; Solution is to use better names and do it right. Name doesn&amp;#039;t tell you anything. This is more of a bad practice than an anti-pattern.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Slow Poke&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Takes a lot of time to run. Could potentially run in parallel. Or potentially break it up. Set up your own environment and make it aware. Make not put in CI or CD. Only run on release candidate instead of a daily build. It&amp;#039;s likely to be an integration test, but it could be at any level.  Database dependencies and network latencies. Can&amp;#039;t always run integration test.  Could you mock something?  &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Giant / God Complex Test / Boss Hog&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
If it&amp;#039;s a big test, that is consuming. Way too much code, and may be a part of a chain gang. It&amp;#039;s very complex&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wait &amp;amp; See&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Using Sleep. Love and hate relationship with sleep. Press button, and sleep. You&amp;#039;re not checking the validity of the system. You&amp;#039;re going to race condition. It&amp;#039;ll cause flickering. Solution is to not use sleep. If you have sleeps everywhere, and make sure you don&amp;#039;t have interrupts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;China Vase&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Code is fragile. Selenium is too fragile. Biggest issue in industry. Every one keeps complaining about that&amp;#039;s it&amp;#039;s too long or it&amp;#039;s too fragile.  How do we deal with it? Break down into more stable pieces. Might have some other anti-patterns happens.  More concerned with the China passes than fails.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Flickering Lights&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Flickers between passing and failing. Didn&amp;#039;t write the test correctly. Too much mockery or too much golddigging. Had two different load balances, and direct to a working and not working breaking. It can also usually it&amp;#039;s an environmental issue. It can be demoralizing, and testers get used to living with red lights. Psychologically keep pushing button until it passes, which is a bad habit. If I hit restart it 3 times, and then I&amp;#039;ll investigate it. [laugh] If it doesn&amp;#039;t pass 1st time, then investigate it.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Pig&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Tests that don&amp;#039;t clean up after themselves, which can lead to flickering lights. Dependencies that relate to each other.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Edge Play&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Playing on the edges too much. Waste the test cycles of testing things that the user doesn&amp;#039;t do. High-risk, and you might only run it once. Reduce it form main test suites or take it out.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Customer Don&amp;#039;t Do that&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Testing things that customer doesn&amp;#039;t actually do.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Fear the automator&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
The fear that manual testers that automation will eliminate their jobs. Deliberate sabotage, and party when automated test fails. It&amp;#039;s a management issue, and will loose morale and testing cycles.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Metrics Lie&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Management will want to know how much test cycles are saving. Wanting ROI metrics, and sacrifice. Opening tickets on small tasks just to get the metrics to be higher. &amp;quot;Get lots of bugs now!&amp;quot; to justify bugs. &amp;quot;If you find a bug, then cover it up&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Test doesn&amp;#039;t test anything&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Happens in unit a lot. Who&amp;#039;s responsible for that? Developer or tester? Whomever wrote it or is maintaining it. Who&amp;#039;s&amp;#039; responsible for what? SOMEONE is responsible. If it doesn&amp;#039;t do what it&amp;#039;s supposed to do, then someone will need to take responsible.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Who owns this?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
No one owns it, and it gets ignored. Transparency and communication is the solution. Project team leads report to each other, and they have to fight it out. Denying responsibility, and have to prove it&amp;#039;s not yours. Manager should know, but sometimes there&amp;#039;s no management structure.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;How are these related?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Boss hog and slow poke are connected&lt;br /&gt;
Inspector and Gold Digger are connected.&lt;br /&gt;
China Vase and Flickering Lights are connected&lt;br /&gt;
If you&amp;#039;re seeing flickering lights, then root cause could be Pig&lt;br /&gt;
2nd class citizen and the Flickering lights would be related.&lt;br /&gt;
Gold Digger and Inspector are the same thing.&lt;br /&gt;
Chain gang would lead to Boss Hog&lt;br /&gt;
Mockery is connected to flickering lights&lt;br /&gt;
Local hero goes with flickering lights. Works fine in staging, but not production&lt;br /&gt;
Ice cream cone is independent. Break it down by risk is the answer to anything.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Bad practices&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Test with no name, wait and see, 2nd class citizen&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Notes by Kent Bye&amp;quot;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Anti-Automated_Test_Patterns&amp;diff=14580</id>
		<title>Anti-Automated Test Patterns</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Anti-Automated_Test_Patterns&amp;diff=14580"/>
		<updated>2012-09-22T22:07:21Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: Created page with &amp;quot;Anti-Automated Test Patterns  &amp;#039;&amp;#039;&amp;#039;Ice cream cone&amp;#039;&amp;#039;&amp;#039; Inverse of triangle of having lots of manual testing at the top, and not a lot of unit tests at the bottom. Break it down by...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Anti-Automated Test Patterns&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Ice cream cone&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Inverse of triangle of having lots of manual testing at the top, and not a lot of unit tests at the bottom. Break it down by risk.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Happy Path&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Basic function of the system, but it&amp;#039;s not really testing anything serious. False sense of being complete. Don&amp;#039;t have complete code coverage that you need. Need to do more comprehensive testing.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Local Hero&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Wrote something that will always pass in environment because of how you&amp;#039;ve written it. Seems risky because you expect one thing, but explodes in the real world. Learn from the regressions that you get out of it. Get input from customer service. May not be looking at the business requirements properly. &lt;br /&gt;
&lt;br /&gt;
Perhaps deploy to a staging environment? May have lost the customer focus. Have customer service to testing.&lt;br /&gt;
&lt;br /&gt;
Beautiful thing in real world, but you&amp;#039;re not testing it in the same way that the users are using it. They may be doing more complicated tasks than you&amp;#039;re testing for.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2nd Class Citizen&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
A lot of duplicate code, and so the code bloats up, and it becomes a maintenance headache. Same thing over and over again.  You might need to refactor your automated test.  May need to reuse some of the code that you&amp;#039;ve written.&lt;br /&gt;
&lt;br /&gt;
If it&amp;#039;s a high-risk thing, then perhaps refactor. If it&amp;#039;s used often, then use re-use.  Do evaluation before you start to refactor. Don&amp;#039;t pay off all of your tech debt all at once.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Chain Gang&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Set up. Task and then Tear down. Merge two tests into one set up and tear down. Create dependencies within a chain of tests. Set up and tear down can be a PITA.&lt;br /&gt;
Can be acceptable in some time if the tests are passing and you feel comfortable that they&amp;#039;re valid. Advice is to evaluate the risk in the gang, and if there are problems, then split it up.&lt;br /&gt;
Avoid it, but when you need it, then use it.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Mockery&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Use the real stuff. Over mock everything. Need something. Mock it. Not using the real world or the real servers you&amp;#039;ll be working with. If mocking REST and SOAP calls, and then it fails on the live data.  If there&amp;#039;s a lot of mockery. &lt;br /&gt;
&lt;br /&gt;
The opposite of mockery is the local hero. &lt;br /&gt;
&lt;br /&gt;
For example, with a maven repo, grab top layer, and it&amp;#039;ll work. Customer pulls from inside, and then it broke the code, and weren&amp;#039;t testing it. Purpose of mock objects is to avoid inspector.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Inspector&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
It knows everything about the system. Object knows everything and tightly coupled. If changes, then it breaks. Joined models together, and if you remove it, then it breaks.&lt;br /&gt;
&lt;br /&gt;
White box testing is something that if you test it, then it breaks. Perhaps only use it on edge cases.&lt;br /&gt;
&lt;br /&gt;
Break it up, and make it less dependent and decoupled.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Golddigger&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Greedy and they want everything in terms of resources. They need to have 50 things set up. Lots of time required. Can you really break it up? It takes 2 hours to set up. Pre-set up while doing a deployment. Deploy what the things that the gold digger needs.&lt;br /&gt;
&lt;br /&gt;
At what point in the process do you right that test? Do it at the end? Or do it at the beginning?&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Anal probe / Contract Violator&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Write test and get to the internals, and then override OO fundamentals with private and public stuff. Heavily white, and get into internals of anything. Are you testing it in a realistic way. Playing with the innards of the code, and if things change, then you&amp;#039;re screwed. Violating an object. &lt;br /&gt;
&lt;br /&gt;
Would exploratory or ad hoc testing be enough for this test? Might be blinded by real part of the test.&lt;br /&gt;
&lt;br /&gt;
If something is inside that you need to test, then it&amp;#039;s a design problem where it needs to exposed of whatever needs to be available. Need to reword code.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Test with no name&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
There&amp;#039;s a bug and name it non-sensical name like &amp;quot;Test CR2386.&amp;quot; Solution is to use better names and do it right. Name doesn&amp;#039;t tell you anything. This is more of a bad practice than an anti-pattern.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Slow Poke&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Takes a lot of time to run. Could potentially run in parallel. Or potentially break it up. Set up your own environment and make it aware. Make not put in CI or CD. Only run on release candidate instead of a daily build. It&amp;#039;s likely to be an integration test, but it could be at any level.  Database dependencies and network latencies. Can&amp;#039;t always run integration test.  Could you mock something?  &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Giant / God Complex Test / Boss Hog&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
If it&amp;#039;s a big test, that is consuming. Way too much code, and may be a part of a chain gang. It&amp;#039;s very complex&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wait &amp;amp; See&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Using Sleep. Love and hate relationship with sleep. Press button, and sleep. You&amp;#039;re not checking the validity of the system. You&amp;#039;re going to race condition. It&amp;#039;ll cause flickering. Solution is to not use sleep. If you have sleeps everywhere, and make sure you don&amp;#039;t have interrupts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;China Vase&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Code is fragile. Selenium is too fragile. Biggest issue in industry. Every one keeps complaining about that&amp;#039;s it&amp;#039;s too long or it&amp;#039;s too fragile.  How do we deal with it? Break down into more stable pieces. Might have some other anti-patterns happens.  More concerned with the China passes than fails.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Flickering Lights&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Flickers between passing and failing. Didn&amp;#039;t write the test correctly. Too much mockery or too much golddigging. Had two different load balances, and direct to a working and not working breaking. It can also usually it&amp;#039;s an environmental issue. It can be demoralizing, and testers get used to living with red lights. Psychologically keep pushing button until it passes, which is a bad habit. If I hit restart it 3 times, and then I&amp;#039;ll investigate it. [laugh] If it doesn&amp;#039;t pass 1st time, then investigate it.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Pig&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Tests that don&amp;#039;t clean up after themselves, which can lead to flickering lights. Dependencies that relate to each other.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Edge Play&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Playing on the edges too much. Waste the test cycles of testing things that the user doesn&amp;#039;t do. High-risk, and you might only run it once. Reduce it form main test suites or take it out.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Customer Don&amp;#039;t Do that&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Testing things that customer doesn&amp;#039;t actually do.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Fear the automator&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
The fear that manual testers that automation will eliminate their jobs. Deliberate sabotage, and party when automated test fails. It&amp;#039;s a management issue, and will loose morale and testing cycles.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Metrics Lie&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Management will want to know how much test cycles are saving. Wanting ROI metrics, and sacrifice. Opening tickets on small tasks just to get the metrics to be higher. &amp;quot;Get lots of bugs now!&amp;quot; to justify bugs. &amp;quot;If you find a bug, then cover it up&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Test doesn&amp;#039;t test anything&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Happens in unit a lot. Who&amp;#039;s responsible for that? Developer or tester? Whomever wrote it or is maintaining it. Who&amp;#039;s&amp;#039; responsible for what? SOMEONE is responsible. If it doesn&amp;#039;t do what it&amp;#039;s supposed to do, then someone will need to take responsible.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Who owns this?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
No one owns it, and it gets ignored. Transparency and communication is the solution. Project team leads report to each other, and they have to fight it out. Denying responsibility, and have to prove it&amp;#039;s not yours. Manager should know, but sometimes there&amp;#039;s no management structure.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;How are these related?&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Boss hog and slow poke are connected&lt;br /&gt;
Inspector and Gold Digger are connected.&lt;br /&gt;
China Vase and Flickering Lights are connected&lt;br /&gt;
If you&amp;#039;re seeing flickering lights, then root cause could be Pig&lt;br /&gt;
2nd class citizen and the Flickering lights would be related.&lt;br /&gt;
Gold Digger and Inspector are the same thing.&lt;br /&gt;
Chain gang would lead to Boss Hog&lt;br /&gt;
Mockery is connected to flickering lights&lt;br /&gt;
Local hero goes with flickering lights. Works fine in staging, but not production&lt;br /&gt;
Ice cream cone is independent. Break it down by risk is the answer to anything.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Bad practices&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Test with no name, wait and see, 2nd class citizen&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Notes by Kent Bye&amp;quot;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONNA2012Sessions&amp;diff=14577</id>
		<title>CITCONNA2012Sessions</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONNA2012Sessions&amp;diff=14577"/>
		<updated>2012-09-22T22:02:47Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CITCON United States Portland 2012 Sessions&lt;br /&gt;
&lt;br /&gt;
Back to the [[Main Page]]&lt;br /&gt;
&lt;br /&gt;
== 10:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[CI Anti-Patterns]]&lt;br /&gt;
#&lt;br /&gt;
#[[Test Scope unit vs functional vs dev vs QA]]&lt;br /&gt;
#[[Consolidated dashboard reporting of unit tests]]&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 11:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[Detox the testing pyramid]]&lt;br /&gt;
#[[Bringing Automation to Manual Testers]]&lt;br /&gt;
#[[Out of the quagmire]]&lt;br /&gt;
#[[Sani Opinions pros cons]]&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 2:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
# [[Anti-Automated Test Patterns]]&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 3:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
== 4:30 Topics ==&lt;br /&gt;
&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
#&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Table View ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! #&lt;br /&gt;
! 10:00&lt;br /&gt;
! 11:15&lt;br /&gt;
! 2:00&lt;br /&gt;
! 3:15&lt;br /&gt;
! 4:30&lt;br /&gt;
|-&lt;br /&gt;
| 1&lt;br /&gt;
| [[CI Anti-Patterns]]&lt;br /&gt;
| [[Detox the testing pyramid]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 2&lt;br /&gt;
|&lt;br /&gt;
| [[Bringing Automation to Manual Testers]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 3&lt;br /&gt;
| [[Test Scope unit vs functional vs dev vs QA]]&lt;br /&gt;
| [[Out of the quagmire]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 4&lt;br /&gt;
| [[Consolidated dashboard reporting of unit tests]]&lt;br /&gt;
| [[Sani Opinions pros cons]]&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
| 5&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CI_Anti-Patterns&amp;diff=14576</id>
		<title>CI Anti-Patterns</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CI_Anti-Patterns&amp;diff=14576"/>
		<updated>2012-09-22T22:01:37Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;What are you going to monitor? &lt;br /&gt;
How will you know what you&amp;#039;re monitoring happens? &lt;br /&gt;
What will you do if it happens?&lt;br /&gt;
&lt;br /&gt;
Continuous deployment - Will the site work while you deploy the version? Or take the site down while you deploy? What happens to the users on the site while you&amp;#039;re deploying?&lt;br /&gt;
&lt;br /&gt;
If you&amp;#039;re deploying manually, then you could think about these questions, but it&amp;#039;s not mandatory.&lt;br /&gt;
&lt;br /&gt;
Ops job was to be a buffer between devs and remote system administrators.&lt;br /&gt;
&lt;br /&gt;
This particular Ops team were not sysadmins and there no monitoring in place. They were doing market fixes, and provide a buffer between devs and remote sysadmins.&lt;br /&gt;
&lt;br /&gt;
Internal staging site, but it has a different topology. Production has 3 machines behind a load balancer. B/c of remote relationship, could change the app, except through the database.&lt;br /&gt;
&lt;br /&gt;
Company is successful, but there&amp;#039;s a lack of growing up. They plug gaps with people instead of systems.&lt;br /&gt;
&lt;br /&gt;
Co-workers checked in code that didn&amp;#039;t work. Code should compile before you check it in. Read Scott Adams, &amp;quot;Goals are for losers, winners build systems.&amp;quot; Now have common deployment contract across deployments, and it can run transformations and run rules. They now have blue/green deployments. Ask it, are you ready to shut down? Now have init scripts to start and stop the service. Now have metrics in place.&lt;br /&gt;
&lt;br /&gt;
Internal API to load data warehouse, and the traffic is much higher than live site. Run extract of data, and it slows down the production site. Run extract during running hours, and be able to monitor that performance isn&amp;#039;t impact.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
PJ session on Anti-patterns&lt;br /&gt;
Anti-patterns: Read book CI, and they want a plan. Provide a 2-year plan. Work with them for a year and half, and after 1.5 years, then they say that need 5 years. What they have in place, and what do they need to have in place. If they don&amp;#039;t have it in place, then it&amp;#039;s an anti-pattern.&lt;br /&gt;
What have you seen where people think they have it&amp;#039;s right, but it&amp;#039;s actually wrong.&lt;br /&gt;
&lt;br /&gt;
Want to do continuous delivery. Deliver to production every 2 weeks. Have a build script? CI running? Have developers check in frequently? 300-400 devs, and they want a roadmap. One year into it, they kind of have a CI in place. Check in, build happens and it&amp;#039;s red or green. No industry standard CI practices, like not always include unit tests. Not doing CI if not running unit tests. If you say CI, then assume including unit tests in the build. Lack of commitment to a green build. Can check in, and have red for weeks of a time, and it&amp;#039;s acceptable within the org. You&amp;#039;ll never achieve continuous delivery if you have don&amp;#039;t commit to green build. &lt;br /&gt;
&lt;br /&gt;
Need to fix very quickly within a couple of hours. Whoever broke it is responsible to fix it.&lt;br /&gt;
&lt;br /&gt;
If you can&amp;#039;t break your CI process. Run your build before you check in. Use rubber chicken.&lt;br /&gt;
&lt;br /&gt;
USB nerd control that will target developer.&lt;br /&gt;
&lt;br /&gt;
Break build, but if not fixed in 15 minutes, then we&amp;#039;ll roll back.&lt;br /&gt;
If unit tests fail, then it&amp;#039;s reverted.&lt;br /&gt;
&lt;br /&gt;
How to prevent people from not checking in. Can&amp;#039;t solve stupidity or malice. Only build systems that support good intentions.&lt;br /&gt;
&lt;br /&gt;
Unstable test problems. Build will fail. Elaborate build radiator and mark test as flickering. Then call it a &amp;quot;bad test.&amp;quot; Not obviously my problem.  Next time runs, then it&amp;#039;ll go green. Run 3-4 times to get different results. Tests that have non-deterministic behavior is an anti-pattern.&lt;br /&gt;
&lt;br /&gt;
How to determine a non-deterministic test? Run again and it works.&lt;br /&gt;
&lt;br /&gt;
Data-dependencies like data or class dependencies. Create object and it&amp;#039;s not created first time. Race conditions is another cause. Tests that don&amp;#039;t clean up after themselves. Run the tests in a random order can help make sure there&amp;#039;s no dependencies.&lt;br /&gt;
&lt;br /&gt;
Suggestion to write less end-to-end and write more unit tests.&lt;br /&gt;
&lt;br /&gt;
Run forwards. Run backwards. Run in random order. Detect if not clean-up. Sometimes leak database connections. Detect when database leakages were happening. Red, yellow and green systems. &lt;br /&gt;
Build scripts? CI? Frequent check-ins?&lt;br /&gt;
Couldn&amp;#039;t do deployment consistently. Needed to solidify system. Started with monthly deployments. Do a server every two days, and then move forward. Then moved to bi-weekly. Bring ops and developers and build their own deployment system.&lt;br /&gt;
&lt;br /&gt;
Needed ops and dev collaboration in order to get to CI.&lt;br /&gt;
&lt;br /&gt;
What makes CD unique from CI?&lt;br /&gt;
&lt;br /&gt;
Why create user stories because tech spec was huge. Create spec and meet it, but it didn&amp;#039;t bring any value. Then needed user story. Similarly, developer would meet requirement, but it still wouldn&amp;#039;t get to CD.&lt;br /&gt;
&lt;br /&gt;
Dev build something useful to the ops team.&lt;br /&gt;
&lt;br /&gt;
Cucumber and nagios that provide ops-friendly output. Is it useful to bridge the gap between ops and devs? Yes. &lt;br /&gt;
Ops Not familiar with Chef or Puppet. Only familiar with web sphere and native web sphere tools.&lt;br /&gt;
&lt;br /&gt;
Being able to reproduce infrastructure from the command-line. Need to collaborate with script automation and site operations. Way to communicated with them was to write cucumber test ATDD. Collaborated with them to create tests. Cucumber were for the infrastructure&lt;br /&gt;
&amp;quot;Give have a VM with an operating system with a Chef, when I run install_websphere.rv, then I go to this URL and should see an admin screen.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Use cucumber to monitor and do a virtual install. More often you&amp;#039;re install an application, and with web sphere installers they were monitoring how well it was going. Given deploy_foo.sh, then I should NOT see X message. Or I should see Y message. Checking the log for details if something failed should not be a person. Use cucumber. Put cucumber output to nagios for ops people.&lt;br /&gt;
&lt;br /&gt;
Will human every check log? Just for exploratory purposes. From systems POV, then look at log and know what&amp;#039;s up.  Suggestion to look at log to see if we&amp;#039;re blind to anything not testing is covering. Showing all logs all the time is an anti-pattern. Don&amp;#039;t plug gaps with people, do it with systems.&lt;br /&gt;
&lt;br /&gt;
Deployment monitoring was an issue. Monitoring failed from the beginning. Eventually did hooks within system. Ping system for health check. Put output into a nagios alert.&lt;br /&gt;
&lt;br /&gt;
Direction: Should only do manual testing as exploratory testing to discover unknown things that might be wrong. Regression testing is a confirmation that it works. See places that only 5% of unit testing and 95% of testing is manual regression testing. &amp;quot;Testing&amp;quot; could either be &amp;quot;checking&amp;quot; or &amp;quot;exploring.&amp;quot; Jeff would insist that &amp;quot;testing&amp;quot; means exploring, but can&amp;#039;t change industry systems. Only thing should not already be automated is looking at system or new ways to understand the system.&lt;br /&gt;
&lt;br /&gt;
Continual thing, then have automated ways to find it. If a human is testing, then find out what needs to be tested. If new feature, then have humans who didn&amp;#039;t design it, then there&amp;#039;s usability testing. Can&amp;#039;t do UX testing in CI. Do regular checks in system. If roll out new feature, then have humans use it in order to figure out what needs checking.&lt;br /&gt;
&lt;br /&gt;
There is a test framework by Lou Wellon Falco to test visual appearance. &amp;quot;Approval testing.&amp;quot; Do test, and then it records state of system. If it does change, then you detect it. Hybrid between manual and automated testing.&lt;br /&gt;
&lt;br /&gt;
Area that&amp;#039;s &amp;quot;hard&amp;quot; to test. You should test everything. Hard to test, then there could be tests. Layout in the browser isn&amp;#039;t done well.&lt;br /&gt;
&lt;br /&gt;
Take current build that&amp;#039;s a golden build. Create a number of test cases, and take a snapshot of current state. New version of code on the other side, and then compare the test results according to the DOM. Spot the differences. Then a human can detect a CSS problem. Can do this cross-browser as well.  Much less to spot UI issues.&lt;br /&gt;
&lt;br /&gt;
Identify stuff that&amp;#039;s in way of CI, and then identify ingenuity of solutions because of commitment to CD. Non-deterministic tests usually have bugs. If code is right, then why try to write tests. There&amp;#039;s a barrier to commitment to CD/CI.  Need to share ingenuity so that it&amp;#039;s easier to do CD.&lt;br /&gt;
&lt;br /&gt;
Treat test code as serious as production code. If the test is MORE difficult to write than production code, then it becomes hard to justify.&lt;br /&gt;
&lt;br /&gt;
Shore: &amp;quot;Agile doesn&amp;#039;t work if you don&amp;#039;t have self-discipline.&amp;quot; If you have a non-deterministic failure, then within a couple of weeks. Then you&amp;#039;re accumulating debt. &lt;br /&gt;
&lt;br /&gt;
But if you find a non-determinsitic failure, then put in another hour into it. Then they will eventually give up. May put 6-8 hours into a non-deterministic issue, and then give up.&lt;br /&gt;
&lt;br /&gt;
1/2 of flicking tests are poorly written test and 1/2 are really difficult problem in code.&lt;br /&gt;
&lt;br /&gt;
Turn on Code coverage before and after to detect issues.&lt;br /&gt;
&lt;br /&gt;
Writing a book on How to detect flickering tests would be a best seller.&lt;br /&gt;
a&lt;br /&gt;
Database needs follow evolutionary design pattern. Duplicate data, maintain it, and then migrate it. Book on &amp;quot;Database continuous integration: Evolutionary database design.&amp;quot; If make change to database, then write a delta script. ~Liqui-base.&lt;br /&gt;
&lt;br /&gt;
Don&amp;#039;t want downtime. Need to decouple structure of database. Address field. Split into two address field. Write migration that creates new stuff. Write new data, but only read old data. Need to have multiple versions of your code talking to the database. Mention of a &amp;quot;Refactoring databases&amp;quot; book.&lt;br /&gt;
&lt;br /&gt;
Anti-pattern: Ivory tower DBA. Submit ticket to make change to database. DBA is a bottleneck to the organization. Very hard to reproduce the database. In order to reproduce the database, then you have to take db and reproduce it entirely. Takes a lot of time.  If use a evolutionary database, then it&amp;#039;s easier to grab 3% of database or a specific portion of db.&lt;br /&gt;
&lt;br /&gt;
Instead of integrate with database, then integrate with services. Decouple database per service. Avoid having JOIN is reporting software. Pretend have persistent memory would it be okay objects to break database. Have a well-defined API. Have code that could read the alpha database.&lt;br /&gt;
Versioned by class. Was there code complexity to deal with it? No, there were abstractions that dealt with it.&lt;br /&gt;
&lt;br /&gt;
NoSQL databases will defend them on a version. NoSQL migrations can be really difficult.&lt;br /&gt;
&lt;br /&gt;
Collaboration between Dev and Ops&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Failure Mode and Effects Analysis&amp;quot; -- Failure analysis: What could go wrong with system? If it went wrong, then how would we know? How quickly could we fix it? Do risk and impact analysis, and add issues to the backlog.&lt;br /&gt;
&lt;br /&gt;
Blue/Green deployments as a prerequisite? CD is defined differently per organization.&lt;br /&gt;
&lt;br /&gt;
Do releases at 11 pm, after US market close and before AU market open.&lt;br /&gt;
&lt;br /&gt;
Do releases under load on the site while users are on the system.&lt;br /&gt;
&lt;br /&gt;
Have to be testing what you&amp;#039;re releasing. Need the packages of what you&amp;#039;re going to deploy. If not testing it, then you&amp;#039;d have put a commit to bump version number. Made and snapshots. Code signing process brings ambiguity to if it&amp;#039;s the same.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Notes by Kent Bye.&amp;#039;&amp;#039;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Bringing_Automation_to_Manual_Testers&amp;diff=14573</id>
		<title>Bringing Automation to Manual Testers</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Bringing_Automation_to_Manual_Testers&amp;diff=14573"/>
		<updated>2012-09-22T21:05:09Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: Created page with &amp;quot;Bringing automation to manual testers (with no budget)  There are 9 development cross-functional agile teams. 2 testers and 5 developers. Tester vs. developer? Staff of tester...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Bringing automation to manual testers (with no budget)&lt;br /&gt;
&lt;br /&gt;
There are 9 development cross-functional agile teams. 2 testers and 5 developers. Tester vs. developer? Staff of testers who only do manual testers. Defect leakage is lower.  No problem form an efficiency POV.&lt;br /&gt;
&lt;br /&gt;
If not have automation training, then job satisfaction is down b/c Google says manual testing is dead. Have no training budget for tools or training?&lt;br /&gt;
&lt;br /&gt;
What could he do? Do a number of activities to create hand-on experience to do small tests on their own. Brian isn&amp;#039;t from a testing background. It should really come from the testers. Would love to have someone from testing department to step up and lead it. Brian is being a facilitator.&lt;br /&gt;
&lt;br /&gt;
Potential Tactics &lt;br /&gt;
* Technology association Oregon to have a panel discussion to host at their office. Invite local experts who have implemented automation, and get testers to come to hear success stories. Hear about tools to use, Learning Curve.&lt;br /&gt;
* Brown bags: Do a series of internal presentations about automation with the tools and code base.&lt;br /&gt;
* Academy classes: Talk for a topic and ask questions.&lt;br /&gt;
* Use power shell as a testing tool. Invoke domain business objects and do some testing on. &lt;br /&gt;
&lt;br /&gt;
Need to foster hands-on experience&lt;br /&gt;
* Record and play automation tool. 3 classes to work through an application of record and playback -- all doing the same one. Everyone who wanted to do testing could do that. See scripts that was generated, and talk about the script after the fact&lt;br /&gt;
* Then train the testers in programming over 6-8 weeks, and have homework. Train control structures and basic OO programming.  Could buy books.&lt;br /&gt;
* Organize a user group to show others. &lt;br /&gt;
&lt;br /&gt;
* Identify places where automation would be helpful.&lt;br /&gt;
* Create opportunities&lt;br /&gt;
&lt;br /&gt;
How much free time do testers have? Are they storming or forming?&lt;br /&gt;
If they have homework, then they&amp;#039;d have to do extra time.&lt;br /&gt;
&lt;br /&gt;
Introduced test automation, and cut regression test time by a certain amount of %.&lt;br /&gt;
Personnel turn-over.&lt;br /&gt;
If need 20 hours to automate test, then is there going to get pushback.&lt;br /&gt;
&lt;br /&gt;
Commit to a story that&amp;#039;s focused on quality during each sprint.&lt;br /&gt;
Tag stories that you can track and report on them.&lt;br /&gt;
&lt;br /&gt;
Did some training classes where the students train each other in Java. People who are novices are better at training each other. Need one person who knows what they&amp;#039;re doing.&lt;br /&gt;
&lt;br /&gt;
Did training of OO analysis and design at a bank. Cultural differences, and they didn&amp;#039;t take well to outside training. Start small and engage testers at developing their own curriculum. Take into account the culture, and it&amp;#039;s more likely to succeed. Start small and grow it rather than starting with a big bang. Time is ripe, and there&amp;#039;s enough momentum out there that needs to be pushed a bit.&lt;br /&gt;
&lt;br /&gt;
Testing new feature, is it difficult and take significantly more time to test it automatically.&lt;br /&gt;
&lt;br /&gt;
Anti-pattern is short-term thinking rather than long-term planning. If product manager isn&amp;#039;t supportive from a timeline perspective, then the long-term benefits will be the first to go.&lt;br /&gt;
&lt;br /&gt;
Has to be a commitment to the long-term to get out of the short-term, fire-fighting mentality. And slowly quality slowly degrades.&lt;br /&gt;
&lt;br /&gt;
IT is on board for Agile, but didn&amp;#039;t educate business about agile. Need to slow down in the short-term in order to go faster in the long-term.&lt;br /&gt;
&lt;br /&gt;
Implemented agile, and the deliverable times is down to 2 weeks, and the QA became the bottleneck. Then had to go back up to 3 weeks.  Need to automate the testing.  Review the tools. Have the engineer go to QA staff and train them.  QA staff is heads down on their 3-week cycles.&lt;br /&gt;
&lt;br /&gt;
Fostering the sense of leadership, and have the developers start to evaluate&lt;br /&gt;
&lt;br /&gt;
Gauge their interest. How easy for them to pick up.&lt;br /&gt;
&lt;br /&gt;
In choosing tools: Easier to train on? Or better to get a tool best for their technology stack?&lt;br /&gt;
Potentially have a BDD.&lt;br /&gt;
&lt;br /&gt;
Developers will train the QA staff to take the test for Java certification course done by Oracle.&lt;br /&gt;
&lt;br /&gt;
Strategy for training people: Local tech user group to do their own training. Struggling how to teach novices who don&amp;#039;t have full time to learning this. Start with 1-day workshop to get development environment set up. Added a 2nd user group meeting for new users with 1/2 trainers and 1/2 people who are novices.  Ask experts how to do a specific task.  As an expert, you do them so much that don&amp;#039;t think about them as concepts.  As a practice, then you write down the question so that you can teach it to them the next time.&lt;br /&gt;
&lt;br /&gt;
If someone asks it, then others are thinking it.&lt;br /&gt;
&lt;br /&gt;
* Python and Ruby groups are having a 2nd meeting every month with an occasional workshop. Beginning Ruby meet-up.  Lots of people who want to learn. Experienced people know that you can learn bad things, and so testing helps to learn the good ones. Went over xUnit and BDD and some Cucumber stuff, and there will be more sessions like that in the future.&lt;br /&gt;
&lt;br /&gt;
A ladder of tasks and competencies for people contribute and start to being able to contribute. In Drupal, there&amp;#039;s the Drupal ladder.&lt;br /&gt;
&lt;br /&gt;
* Tester pull down developer code, and build it and run it.&lt;br /&gt;
* Look at unit test, and be able to read it. Start to turn Blackbox testers into Whitebox testers&lt;br /&gt;
* Open hatch: One Barrier to entry is using version control. How to make a commit to git, and it&amp;#039;s like a video game. Make a pull, and make a change. And then they get points or stars.&lt;br /&gt;
&lt;br /&gt;
* Automate low-hanging fruit. Start with some easy tests to get experience so that they can get through rough spots. Start small. Case studies: Need to have small projects where automation written didn&amp;#039;t need to be preserved beyond task that they could do within the course of their regular testing.&lt;br /&gt;
* Optimizing whatever is really repetitive and whatever would do the most impact in the least amount of work. Automate what you&amp;#039;re doing a lot, and look for something has a decent bang for your buck.&lt;br /&gt;
&lt;br /&gt;
* SQA User Group is just getting started up -- http://www.sqaug.org/&lt;br /&gt;
* Work on acceptance criteria as a cross-functional team, and help business see the benefit so that we can invest more time and energy into QA.&lt;br /&gt;
&lt;br /&gt;
* Started using Fitness tool to automated business-facing acceptance testing, and then the product managers would look at it within English. But the product manager didn&amp;#039;t care or look at the test. Testers and developers had lots of conversations which were really helpful.&lt;br /&gt;
&lt;br /&gt;
Business object layer exposed through an API, and the testers could give a scripting language so that they could reach the test objects. Testers could write business-facing acceptance tests with Cucumber, and then it&amp;#039;d be a good step towards automation.&lt;br /&gt;
&lt;br /&gt;
Java has a lot of odd rules for non-programmers, and it&amp;#039;d be intimidating because it&amp;#039;s not intuitive.&lt;br /&gt;
&lt;br /&gt;
Programmers know more than one language. Learn the easiest things to learn, and once you have the concepts, then going to more complicated languages like Java become easier. Don&amp;#039;t try to learn everything at once. Make it cumulative. &lt;br /&gt;
&lt;br /&gt;
If you need budget, then ask &amp;quot;How much is QA saving you?&amp;quot; instead of &amp;quot;How much is QA costing?&amp;quot;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CI_Anti-Patterns&amp;diff=14572</id>
		<title>CI Anti-Patterns</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CI_Anti-Patterns&amp;diff=14572"/>
		<updated>2012-09-22T21:04:26Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: Created page with &amp;quot;What are you going to monitor?  How will you know what you&amp;#039;re monitoring happens?  What will you do if it happens?  Continuous deployment - Will the site work while you deploy...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;What are you going to monitor? &lt;br /&gt;
How will you know what you&amp;#039;re monitoring happens? &lt;br /&gt;
What will you do if it happens?&lt;br /&gt;
&lt;br /&gt;
Continuous deployment - Will the site work while you deploy the version? Or take the site down while you deploy? What happens to the users on the site while you&amp;#039;re deploying?&lt;br /&gt;
&lt;br /&gt;
If you&amp;#039;re deploying manually, then you could think about these questions, but it&amp;#039;s not mandatory.&lt;br /&gt;
&lt;br /&gt;
Ops job was to be a buffer between devs and remote system administrators.&lt;br /&gt;
&lt;br /&gt;
This particular Ops team were not sysadmins and there no monitoring in place. They were doing market fixes, and provide a buffer between devs and remote sysadmins.&lt;br /&gt;
&lt;br /&gt;
Internal staging site, but it has a different topology. Production has 3 machines behind a load balancer. B/c of remote relationship, could change the app, except through the database.&lt;br /&gt;
&lt;br /&gt;
Company is successful, but there&amp;#039;s a lack of growing up. They plug gaps with people instead of systems.&lt;br /&gt;
&lt;br /&gt;
Co-workers checked in code that didn&amp;#039;t work. Code should compile before you check it in. Read Scott Adams, &amp;quot;Goals are for losers, winners build systems.&amp;quot; Now have common deployment contract across deployments, and it can run transformations and run rules. They now have blue/green deployments. Ask it, are you ready to shut down? Now have init scripts to start and stop the service. Now have metrics in place.&lt;br /&gt;
&lt;br /&gt;
Internal API to load data warehouse, and the traffic is much higher than live site. Run extract of data, and it slows down the production site. Run extract during running hours, and be able to monitor that performance isn&amp;#039;t impact.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
PJ session on Anti-patterns&lt;br /&gt;
Anti-patterns: Read book CI, and they want a plan. Provide a 2-year plan. Work with them for a year and half, and after 1.5 years, then they say that need 5 years. What they have in place, and what do they need to have in place. If they don&amp;#039;t have it in place, then it&amp;#039;s an anti-pattern.&lt;br /&gt;
What have you seen where people think they have it&amp;#039;s right, but it&amp;#039;s actually wrong.&lt;br /&gt;
&lt;br /&gt;
Want to do continuous delivery. Deliver to production every 2 weeks. Have a build script? CI running? Have developers check in frequently? 300-400 devs, and they want a roadmap. One year into it, they kind of have a CI in place. Check in, build happens and it&amp;#039;s red or green. No industry standard CI practices, like not always include unit tests. Not doing CI if not running unit tests. If you say CI, then assume including unit tests in the build. Lack of commitment to a green build. Can check in, and have red for weeks of a time, and it&amp;#039;s acceptable within the org. You&amp;#039;ll never achieve continuous delivery if you have don&amp;#039;t commit to green build. &lt;br /&gt;
&lt;br /&gt;
Need to fix very quickly within a couple of hours. Whoever broke it is responsible to fix it.&lt;br /&gt;
&lt;br /&gt;
If you can&amp;#039;t break your CI process. Run your build before you check in. Use rubber chicken.&lt;br /&gt;
&lt;br /&gt;
USB nerd control that will target developer.&lt;br /&gt;
&lt;br /&gt;
Break build, but if not fixed in 15 minutes, then we&amp;#039;ll roll back.&lt;br /&gt;
If unit tests fail, then it&amp;#039;s reverted.&lt;br /&gt;
&lt;br /&gt;
How to prevent people from not checking in. Can&amp;#039;t solve stupidity or malice. Only build systems that support good intentions.&lt;br /&gt;
&lt;br /&gt;
Unstable test problems. Build will fail. Elaborate build radiator and mark test as flickering. Then call it a &amp;quot;bad test.&amp;quot; Not obviously my problem.  Next time runs, then it&amp;#039;ll go green. Run 3-4 times to get different results. Tests that have non-deterministic behavior is an anti-pattern.&lt;br /&gt;
&lt;br /&gt;
How to determine a non-deterministic test? Run again and it works.&lt;br /&gt;
&lt;br /&gt;
Data-dependencies like data or class dependencies. Create object and it&amp;#039;s not created first time. Race conditions is another cause. Tests that don&amp;#039;t clean up after themselves. Run the tests in a random order can help make sure there&amp;#039;s no dependencies.&lt;br /&gt;
&lt;br /&gt;
Suggestion to write less end-to-end and write more unit tests.&lt;br /&gt;
&lt;br /&gt;
Run forwards. Run backwards. Run in random order. Detect if not clean-up. Sometimes leak database connections. Detect when database leakages were happening. Red, yellow and green systems. &lt;br /&gt;
Build scripts? CI? Frequent check-ins?&lt;br /&gt;
Couldn&amp;#039;t do deployment consistently. Needed to solidify system. Started with monthly deployments. Do a server every two days, and then move forward. Then moved to bi-weekly. Bring ops and developers and build their own deployment system.&lt;br /&gt;
&lt;br /&gt;
Needed ops and dev collaboration in order to get to CI.&lt;br /&gt;
&lt;br /&gt;
What makes CD unique from CI?&lt;br /&gt;
&lt;br /&gt;
Why create user stories because tech spec was huge. Create spec and meet it, but it didn&amp;#039;t bring any value. Then needed user story. Similarly, developer would meet requirement, but it still wouldn&amp;#039;t get to CD.&lt;br /&gt;
&lt;br /&gt;
Dev build something useful to the ops team.&lt;br /&gt;
&lt;br /&gt;
Cucumber and nagios that provide ops-friendly output. Is it useful to bridge the gap between ops and devs? Yes. &lt;br /&gt;
Ops Not familiar with Chef or Puppet. Only familiar with web sphere and native web sphere tools.&lt;br /&gt;
&lt;br /&gt;
Being able to reproduce infrastructure from the command-line. Need to collaborate with script automation and site operations. Way to communicated with them was to write cucumber test ATDD. Collaborated with them to create tests. Cucumber were for the infrastructure&lt;br /&gt;
&amp;quot;Give have a VM with an operating system with a Chef, when I run install_websphere.rv, then I go to this URL and should see an admin screen.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Use cucumber to monitor and do a virtual install. More often you&amp;#039;re install an application, and with web sphere installers they were monitoring how well it was going. Given deploy_foo.sh, then I should NOT see X message. Or I should see Y message. Checking the log for details if something failed should not be a person. Use cucumber. Put cucumber output to nagios for ops people.&lt;br /&gt;
&lt;br /&gt;
Will human every check log? Just for exploratory purposes. From systems POV, then look at log and know what&amp;#039;s up.  Suggestion to look at log to see if we&amp;#039;re blind to anything not testing is covering. Showing all logs all the time is an anti-pattern. Don&amp;#039;t plug gaps with people, do it with systems.&lt;br /&gt;
&lt;br /&gt;
Deployment monitoring was an issue. Monitoring failed from the beginning. Eventually did hooks within system. Ping system for health check. Put output into a nagios alert.&lt;br /&gt;
&lt;br /&gt;
Direction: Should only do manual testing as exploratory testing to discover unknown things that might be wrong. Regression testing is a confirmation that it works. See places that only 5% of unit testing and 95% of testing is manual regression testing. &amp;quot;Testing&amp;quot; could either be &amp;quot;checking&amp;quot; or &amp;quot;exploring.&amp;quot; Jeff would insist that &amp;quot;testing&amp;quot; means exploring, but can&amp;#039;t change industry systems. Only thing should not already be automated is looking at system or new ways to understand the system.&lt;br /&gt;
&lt;br /&gt;
Continual thing, then have automated ways to find it. If a human is testing, then find out what needs to be tested. If new feature, then have humans who didn&amp;#039;t design it, then there&amp;#039;s usability testing. Can&amp;#039;t do UX testing in CI. Do regular checks in system. If roll out new feature, then have humans use it in order to figure out what needs checking.&lt;br /&gt;
&lt;br /&gt;
There is a test framework by Lou Wellon Falco to test visual appearance. &amp;quot;Approval testing.&amp;quot; Do test, and then it records state of system. If it does change, then you detect it. Hybrid between manual and automated testing.&lt;br /&gt;
&lt;br /&gt;
Area that&amp;#039;s &amp;quot;hard&amp;quot; to test. You should test everything. Hard to test, then there could be tests. Layout in the browser isn&amp;#039;t done well.&lt;br /&gt;
&lt;br /&gt;
Take current build that&amp;#039;s a golden build. Create a number of test cases, and take a snapshot of current state. New version of code on the other side, and then compare the test results according to the DOM. Spot the differences. Then a human can detect a CSS problem. Can do this cross-browser as well.  Much less to spot UI issues.&lt;br /&gt;
&lt;br /&gt;
Identify stuff that&amp;#039;s in way of CI, and then identify ingenuity of solutions because of commitment to CD. Non-deterministic tests usually have bugs. If code is right, then why try to write tests. There&amp;#039;s a barrier to commitment to CD/CI.  Need to share ingenuity so that it&amp;#039;s easier to do CD.&lt;br /&gt;
&lt;br /&gt;
Treat test code as serious as production code. If the test is MORE difficult to write than production code, then it becomes hard to justify.&lt;br /&gt;
&lt;br /&gt;
Shore: &amp;quot;Agile doesn&amp;#039;t work if you don&amp;#039;t have self-discipline.&amp;quot; If you have a non-deterministic failure, then within a couple of weeks. Then you&amp;#039;re accumulating debt. &lt;br /&gt;
&lt;br /&gt;
But if you find a non-determinsitic failure, then put in another hour into it. Then they will eventually give up. May put 6-8 hours into a non-deterministic issue, and then give up.&lt;br /&gt;
&lt;br /&gt;
1/2 of flicking tests are poorly written test and 1/2 are really difficult problem in code.&lt;br /&gt;
&lt;br /&gt;
Turn on Code coverage before and after to detect issues.&lt;br /&gt;
&lt;br /&gt;
Writing a book on How to detect flickering tests would be a best seller.&lt;br /&gt;
a&lt;br /&gt;
Database needs follow evolutionary design pattern. Duplicate data, maintain it, and then migrate it. Book on &amp;quot;Database continuous integration: Evolutionary database design.&amp;quot; If make change to database, then write a delta script. ~Liqui-base.&lt;br /&gt;
&lt;br /&gt;
Don&amp;#039;t want downtime. Need to decouple structure of database. Address field. Split into two address field. Write migration that creates new stuff. Write new data, but only read old data. Need to have multiple versions of your code talking to the database. Mention of a &amp;quot;Refactoring databases&amp;quot; book.&lt;br /&gt;
&lt;br /&gt;
Anti-pattern: Ivory tower DBA. Submit ticket to make change to database. DBA is a bottleneck to the organization. Very hard to reproduce the database. In order to reproduce the database, then you have to take db and reproduce it entirely. Takes a lot of time.  If use a evolutionary database, then it&amp;#039;s easier to grab 3% of database or a specific portion of db.&lt;br /&gt;
&lt;br /&gt;
Instead of integrate with database, then integrate with services. Decouple database per service. Avoid having JOIN is reporting software. Pretend have persistent memory would it be okay objects to break database. Have a well-defined API. Have code that could read the alpha database.&lt;br /&gt;
Versioned by class. Was there code complexity to deal with it? No, there were abstractions that dealt with it.&lt;br /&gt;
&lt;br /&gt;
NoSQL databases will defend them on a version. NoSQL migrations can be really difficult.&lt;br /&gt;
&lt;br /&gt;
Collaboration between Dev and Ops&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Failure Mode and Effects Analysis&amp;quot; -- Failure analysis: What could go wrong with system? If it went wrong, then how would we know? How quickly could we fix it? Do risk and impact analysis, and add issues to the backlog.&lt;br /&gt;
&lt;br /&gt;
Blue/Green deployments as a prerequisite? CD is defined differently per organization.&lt;br /&gt;
&lt;br /&gt;
Do releases at 11 pm, after US market close and before AU market open.&lt;br /&gt;
&lt;br /&gt;
Do releases under load on the site while users are on the system.&lt;br /&gt;
&lt;br /&gt;
Have to be testing what you&amp;#039;re releasing. Need the packages of what you&amp;#039;re going to deploy. If not testing it, then you&amp;#039;d have put a commit to bump version number. Made and snapshots. Code signing process brings ambiguity to if it&amp;#039;s the same.&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONNA2012Sessions&amp;diff=14571</id>
		<title>CITCONNA2012Sessions</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONNA2012Sessions&amp;diff=14571"/>
		<updated>2012-09-22T21:03:28Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: Created page with &amp;quot;CITCON United States Portland 2012 Sessions  Back to the Main Page  == 10:00 Topics ==  #CI Anti-Patterns  == 11:15 Topics ==  #[[Bringing Automation to Manual Testers...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CITCON United States Portland 2012 Sessions&lt;br /&gt;
&lt;br /&gt;
Back to the [[Main Page]]&lt;br /&gt;
&lt;br /&gt;
== 10:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[CI Anti-Patterns]]&lt;br /&gt;
&lt;br /&gt;
== 11:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[Bringing Automation to Manual Testers]]&lt;br /&gt;
&lt;br /&gt;
== 2:00 Topics ==&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONNA2012Registrants&amp;diff=14564</id>
		<title>CITCONNA2012Registrants</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONNA2012Registrants&amp;diff=14564"/>
		<updated>2012-09-21T06:24:20Z</updated>

		<summary type="html">&lt;p&gt;Kentbye: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is generated from registration data. For changes or corrections please post to the CITCON mailing list. &lt;br /&gt;
&lt;br /&gt;
[[Aaron Rhodes]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Adam Yuret]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Alex Yamashita]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Amy Julius]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[An Doan]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Andrew Parker]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Antony Marcano]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Balaji Subramanian]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Brett Jones]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Brian Myers]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Bruce R Smith]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Dan Ivy]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Dan Pape]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Dan Post]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Daniel Johnson]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[David Amick]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Diana]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Elisabeth Hendrickson]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Elizabeth Flanagan]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Fred Obermann]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Geoff Goodman]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[George Breedlove]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Hang Dao]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Igal Koshevoy]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[James Eisenhauer]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[James Rucker]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[James Shore]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Jason LaPier]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Jason Larsen]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Jay Riddle]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Jeff Rogers]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Jeffrey Fredrick]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Jeremy Haage]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Jesse Cooke]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Jesse Joe Bernardo]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Jim Dewson]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[John Wilger]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Julias Shaw]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Justin Myers]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Kent Bye]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Khai Do]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Kirsten Comandich]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Kurt Ruff]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Lucian]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Lynn Scaglione]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Maggie Leake]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Maher M Hawash]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Marcos David Vacca]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Merlyn]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Michael P Karlovich]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Monica Farrell]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Nigel Syhaphonh]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Novak Banda]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Paul Giron]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Paul Julius]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Paul Jungwirth]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Paula Hannan]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Ravi Gadad]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Ruth Struck]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Ryan Souza]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Sabrina Rusi]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Saran Chinnaraj]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Shawn Dowling]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[srinivasarao]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Steve Boone]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Stuart Celarier]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Thai Hai Pham]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Tom Miller]] &amp;lt;br/&amp;gt;&lt;br /&gt;
[[Ward Cunningham]] &amp;lt;br/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Kentbye</name></author>
	</entry>
</feed>