<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=MarkEWaite</id>
	<title>CitconWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=MarkEWaite"/>
	<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Special:Contributions/MarkEWaite"/>
	<updated>2026-04-25T02:22:56Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.11</generator>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Mark_Waite&amp;diff=5751</id>
		<title>Mark Waite</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Mark_Waite&amp;diff=5751"/>
		<updated>2008-04-10T16:39:38Z</updated>

		<summary type="html">&lt;p&gt;MarkEWaite: New page: Mark Waite is a software development manager at [http://www.ptc.com Parametric Technology Corporation], working in the Fort Collins, CO office.  Prior to working for PTC, Mark worked for [...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Mark Waite is a software development manager at [http://www.ptc.com Parametric Technology Corporation], working in the Fort Collins, CO office.  Prior to working for PTC, Mark worked for [http://www.cocreate.com CoCreate Software].  CoCreate was purchased by PTC in November 2007.&lt;br /&gt;
&lt;br /&gt;
Mark is interested in software testing, software development, and agile techniques.  He was the team manager when CoCreate switched to Extreme Programming in March 2003 (XP by 3/03).  His [http://blog.360.yahoo.com/markwaite business blog] usually contains notes about things learned while managing, testing, or programming.  His [http://markwaite.blogspot.com/ personal blog] may become the new home for business information, since Yahoo (the home for the business blog) has stopped development and fixes on Yahoo 360.&lt;/div&gt;</summary>
		<author><name>MarkEWaite</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Functional_tests_take_a_long_time&amp;diff=5678</id>
		<title>Functional tests take a long time</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Functional_tests_take_a_long_time&amp;diff=5678"/>
		<updated>2008-04-06T23:20:49Z</updated>

		<summary type="html">&lt;p&gt;MarkEWaite: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Long Running Functional Tests&lt;br /&gt;
&lt;br /&gt;
These are the few notes I took from the &amp;quot;long running functional tests&amp;quot; discussions.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problems&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* Complete features take 1 day&lt;br /&gt;
* Functional test takes 15 hours&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Alternatives, Risks, and Trade-offs&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* Parallel testing vs. pipelined testing&lt;br /&gt;
** Compile&lt;br /&gt;
** Fast unit tests&lt;br /&gt;
** Slow unit tests&lt;br /&gt;
** Functional tests&lt;br /&gt;
&lt;br /&gt;
* Incremental feedback during test runs&lt;br /&gt;
** Show failures sooner, but&lt;br /&gt;
** Does not typically lead to stopping the tests because we want to know all the results from that set of code&lt;/div&gt;</summary>
		<author><name>MarkEWaite</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Functional_tests_take_a_long_time&amp;diff=5677</id>
		<title>Functional tests take a long time</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Functional_tests_take_a_long_time&amp;diff=5677"/>
		<updated>2008-04-06T23:20:25Z</updated>

		<summary type="html">&lt;p&gt;MarkEWaite: New page: Long Running Functional Tests  These are the few notes I took from the &amp;quot;long running functional tests&amp;quot; discussions.  Problems  * Complete features take 1 day * Functional test takes 15 hou...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Long Running Functional Tests&lt;br /&gt;
&lt;br /&gt;
These are the few notes I took from the &amp;quot;long running functional tests&amp;quot; discussions.&lt;br /&gt;
&lt;br /&gt;
Problems&lt;br /&gt;
&lt;br /&gt;
* Complete features take 1 day&lt;br /&gt;
* Functional test takes 15 hours&lt;br /&gt;
&lt;br /&gt;
Alternatives, Risks, and Trade-offs&lt;br /&gt;
&lt;br /&gt;
* Parallel testing vs. pipelined testing&lt;br /&gt;
** Compile&lt;br /&gt;
** Fast unit tests&lt;br /&gt;
** Slow unit tests&lt;br /&gt;
** Functional tests&lt;br /&gt;
&lt;br /&gt;
* Incremental feedback during test runs&lt;br /&gt;
** Show failures sooner, but&lt;br /&gt;
** Does not typically lead to stopping the tests because we want to know all the results from that set of code&lt;/div&gt;</summary>
		<author><name>MarkEWaite</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONDenver2008Sessions&amp;diff=5676</id>
		<title>CITCONDenver2008Sessions</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONDenver2008Sessions&amp;diff=5676"/>
		<updated>2008-04-06T23:18:59Z</updated>

		<summary type="html">&lt;p&gt;MarkEWaite: /* 11:15 Topics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;CITCON Denver Sessions&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== 10:00 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[What&amp;#039;s an agile tester?]]&lt;br /&gt;
#[[Scaling continuous integration to the enterprise]]&lt;br /&gt;
&lt;br /&gt;
== 11:15 Topics ==&lt;br /&gt;
&lt;br /&gt;
#[[Making builds faster, more efficient]]&lt;br /&gt;
#[[Functional tests take a long time]]&lt;/div&gt;</summary>
		<author><name>MarkEWaite</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Scaling_continuous_integration_to_the_enterprise&amp;diff=5675</id>
		<title>Scaling continuous integration to the enterprise</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Scaling_continuous_integration_to_the_enterprise&amp;diff=5675"/>
		<updated>2008-04-06T23:17:42Z</updated>

		<summary type="html">&lt;p&gt;MarkEWaite: /* Enterprise Scale Continuous Integration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Enterprise Scale Continuous Integration ==&lt;br /&gt;
&lt;br /&gt;
These were the notes captured from the discussions.  We started by describing the problems we had seen in trying to run continuous integration on very large code bases, and in large organizations that are moving from monthly or weekly integration cycles to continuous integration.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem Definition&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* 300 devs, 1 build break per year per developer means:&lt;br /&gt;
** build will be broken every day&lt;br /&gt;
** slows us down&lt;br /&gt;
** distrust of the source master&lt;br /&gt;
** various defensive behaviors so programmers can get work done even though the source master is frequently broken&lt;br /&gt;
*** Only sync from the master when it is &amp;quot;known good&amp;quot;&lt;br /&gt;
*** Private branches to insulate from noise on the master&lt;br /&gt;
*** Project branches to insulate from noise on the master&lt;br /&gt;
&lt;br /&gt;
* 2 hour build, 3 day acceptance test&lt;br /&gt;
** A 2 hour build means that the time between a submit and feedback on the result of the build could be as much as 4 hours (I just missed the start of the current build, my build will start in almost two hours and will require two hours to confirm success or failure)&lt;br /&gt;
&lt;br /&gt;
* hard to assign failure due to multiple commits per build (many developers may submit during the 2 hour build window, so it becomes harder to diagnose which submit caused a build failure)&lt;br /&gt;
&lt;br /&gt;
* long cycle time on failure (hours before you know you broke something)&lt;br /&gt;
&lt;br /&gt;
* failures affect more people, are more expensive&lt;br /&gt;
&lt;br /&gt;
* Understanding the root cause of a failure is not obvious&lt;br /&gt;
&lt;br /&gt;
* How to handle 300 applications, each with a few devs; how to scale to many projects and still manage them at that level&lt;br /&gt;
&lt;br /&gt;
* How to manage many branches to many mains&lt;br /&gt;
&lt;br /&gt;
* Managing build time dependencies (unexpected, undetected coupling)&lt;br /&gt;
** incorrect incremental builds&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Addressing the problems, alternatives, risks, and trade-offs&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* subcomponents&lt;br /&gt;
** reduces build time, but&lt;br /&gt;
** increases integration time&lt;br /&gt;
* build acceleration technology&lt;br /&gt;
** parallel build, multi-machine, multi-core (Electric Accelerator, for instance)&lt;br /&gt;
** buy fast machines (although disc I/O may dominate)&lt;br /&gt;
* modularize to get recent successful build, not compile&lt;br /&gt;
** faster, less built (narrow the impact to a smaller team)&lt;br /&gt;
* Use &amp;quot;pre-flight&amp;quot; build (production build with many changes, not yet on the source master)&lt;br /&gt;
** integration race conditions&lt;br /&gt;
** faster hardware&lt;br /&gt;
** parallel builds&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Alternatives (2)&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* 3 day acceptance test&lt;br /&gt;
** throw bodies at the problem (but it is not scalable)&lt;br /&gt;
** review the acceptance process for automation opportunities&lt;br /&gt;
** increase automated testing inside the application (at the interfaces)&lt;br /&gt;
** modularize tests, make them independent so they can run in parallel&lt;br /&gt;
** accept human tests less frequently, automation running continuously&lt;br /&gt;
** use assistive automation to support more effective exploratory testing&lt;br /&gt;
*** Brian Marick has some work going in this area&lt;br /&gt;
*** Michael Bolton describes his use of Watir as assistive automation&lt;br /&gt;
&lt;br /&gt;
Lisa Crispin suggested that Jared Richardson had done the continuous&lt;br /&gt;
integration work for SAS and might share insights and ideas.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Alternatives (3)&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
300 applications, small teams on each&lt;br /&gt;
&lt;br /&gt;
* Either many independent CI systems or an enterprise CI system&lt;br /&gt;
** unified view&lt;br /&gt;
** shared configuration&lt;br /&gt;
** reuse between teams&lt;br /&gt;
** security&lt;br /&gt;
** usable for small teams&lt;br /&gt;
&lt;br /&gt;
* Dependency management&lt;br /&gt;
** component level dependencies managed by tools&lt;br /&gt;
*** Anthill / Codestation&lt;br /&gt;
*** maven&lt;br /&gt;
*** ivy&lt;br /&gt;
** scheduling builds, which build should be run first&lt;br /&gt;
** how do I express the rules by which I select a component&lt;br /&gt;
*** version (specific version, pattern match a version, relational operator to version string, etc.)&lt;br /&gt;
*** acceptance test results&lt;/div&gt;</summary>
		<author><name>MarkEWaite</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Scaling_continuous_integration_to_the_enterprise&amp;diff=5674</id>
		<title>Scaling continuous integration to the enterprise</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Scaling_continuous_integration_to_the_enterprise&amp;diff=5674"/>
		<updated>2008-04-06T23:13:56Z</updated>

		<summary type="html">&lt;p&gt;MarkEWaite: /* Enterprise Scale Continuous Integration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Enterprise Scale Continuous Integration ==&lt;br /&gt;
&lt;br /&gt;
These were the notes captured from the discussions.  We started by describing the problems we had seen in trying to run continuous integration on very large code bases, and in large organizations that are moving from monthly or weekly integration cycles to continuous integration.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem Definition&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* 300 devs, 1 build break / year&lt;br /&gt;
** build will be broken every day&lt;br /&gt;
** slows us down&lt;br /&gt;
** creates distrust of the source master&lt;br /&gt;
&lt;br /&gt;
* 2 hour build, 3 day acceptance test&lt;br /&gt;
&lt;br /&gt;
* hard to assign failure due to multiple commits per build&lt;br /&gt;
&lt;br /&gt;
* long cycle time on failure (hours before you know you broke something)&lt;br /&gt;
&lt;br /&gt;
* failures affect more people, are more expensive&lt;br /&gt;
&lt;br /&gt;
* Understanding the root cause of a failure is not obvious&lt;br /&gt;
&lt;br /&gt;
* How to handle 300 applications, each with a few devs; how to scale to many projects and still manage them at that level&lt;br /&gt;
&lt;br /&gt;
* How to manage many branches to many mains&lt;br /&gt;
&lt;br /&gt;
* Managing build time dependencies (unexpected, undetected coupling)&lt;br /&gt;
** incorrect incremental builds&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Addressing the problems, alternatives, risks, and trade-offs&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* subcomponents&lt;br /&gt;
** reduces build time, but&lt;br /&gt;
** increases integration time&lt;br /&gt;
* build acceleration technology&lt;br /&gt;
** parallel build, multi-machine, multi-core (Electric Accelerator, for instance)&lt;br /&gt;
** buy fast machines (although disc I/O may dominate)&lt;br /&gt;
* modularize to get recent successful build, not compile&lt;br /&gt;
** faster, less built (narrow the impact to a smaller team)&lt;br /&gt;
* Use &amp;quot;pre-flight&amp;quot; build (production build with many changes, not yet on the source master)&lt;br /&gt;
** integration race conditions&lt;br /&gt;
** faster hardware&lt;br /&gt;
** parallel builds&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Alternatives (2)&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* 3 day acceptance test&lt;br /&gt;
** throw bodies at the problem (but it is not scalable)&lt;br /&gt;
** review the acceptance process for automation opportunities&lt;br /&gt;
** increase automated testing inside the application (at the interfaces)&lt;br /&gt;
** modularize tests, make them independent so they can run in parallel&lt;br /&gt;
** accept human tests less frequently, automation running continuously&lt;br /&gt;
** use assistive automation to support more effective exploratory testing&lt;br /&gt;
*** Brian Marick has some work going in this area&lt;br /&gt;
*** Michael Bolton describes his use of Watir as assistive automation&lt;br /&gt;
&lt;br /&gt;
Lisa Crispin suggested that Jared Richardson had done the continuous&lt;br /&gt;
integration work for SAS and might share insights and ideas.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Alternatives (3)&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
300 applications, small teams on each&lt;br /&gt;
&lt;br /&gt;
* Either many independent CI systems or an enterprise CI system&lt;br /&gt;
** unified view&lt;br /&gt;
** shared configuration&lt;br /&gt;
** reuse between teams&lt;br /&gt;
** security&lt;br /&gt;
** usable for small teams&lt;br /&gt;
&lt;br /&gt;
* Dependency management&lt;br /&gt;
** component level dependencies managed by tools&lt;br /&gt;
*** Anthill / Codestation&lt;br /&gt;
*** maven&lt;br /&gt;
*** ivy&lt;br /&gt;
** scheduling builds, which build should be run first&lt;br /&gt;
** how do I express the rules by which I select a component&lt;br /&gt;
*** version (specific version, pattern match a version, relational operator to version string, etc.)&lt;br /&gt;
*** acceptance test results&lt;/div&gt;</summary>
		<author><name>MarkEWaite</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Scaling_continuous_integration_to_the_enterprise&amp;diff=5673</id>
		<title>Scaling continuous integration to the enterprise</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Scaling_continuous_integration_to_the_enterprise&amp;diff=5673"/>
		<updated>2008-04-06T23:11:10Z</updated>

		<summary type="html">&lt;p&gt;MarkEWaite: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Enterprise Scale Continuous Integration ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem Definition&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* 300 devs, 1 build break / year&lt;br /&gt;
** build will be broken every day&lt;br /&gt;
** slows us down&lt;br /&gt;
** creates distrust of the source master&lt;br /&gt;
&lt;br /&gt;
* 2 hour build, 3 day acceptance test&lt;br /&gt;
&lt;br /&gt;
* hard to assign failure due to multiple commits per build&lt;br /&gt;
&lt;br /&gt;
* long cycle time on failure (hours before you know you broke something)&lt;br /&gt;
&lt;br /&gt;
* failures affect more people, are more expensive&lt;br /&gt;
&lt;br /&gt;
* Understanding the root cause of a failure is not obvious&lt;br /&gt;
&lt;br /&gt;
* How to handle 300 applications, each with a few devs; how to scale to many projects and still manage them at that level&lt;br /&gt;
&lt;br /&gt;
* How to manage many branches to many mains&lt;br /&gt;
&lt;br /&gt;
* Managing build time dependencies (unexpected, undetected coupling)&lt;br /&gt;
** incorrect incremental builds&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Addressing the problems, alternatives, risks, and trade-offs&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* subcomponents&lt;br /&gt;
** reduces build time, but&lt;br /&gt;
** increases integration time&lt;br /&gt;
* build acceleration technology&lt;br /&gt;
** parallel build, multi-machine, multi-core (Electric Accelerator, for instance)&lt;br /&gt;
** buy fast machines (although disc I/O may dominate)&lt;br /&gt;
* modularize to get recent successful build, not compile&lt;br /&gt;
** faster, less built (narrow the impact to a smaller team)&lt;br /&gt;
* Use &amp;quot;pre-flight&amp;quot; build (production build with many changes, not yet on the source master)&lt;br /&gt;
** integration race conditions&lt;br /&gt;
** faster hardware&lt;br /&gt;
** parallel builds&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Alternatives (2)&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
* 3 day acceptance test&lt;br /&gt;
** throw bodies at the problem (but it is not scalable)&lt;br /&gt;
** review the acceptance process for automation opportunities&lt;br /&gt;
** increase automated testing inside the application (at the interfaces)&lt;br /&gt;
** modularize tests, make them independent so they can run in parallel&lt;br /&gt;
** accept human tests less frequently, automation running continuously&lt;br /&gt;
** use assistive automation to support more effective exploratory testing&lt;br /&gt;
*** Brian Marick has some work going in this area&lt;br /&gt;
*** Michael Bolton describes his use of Watir as assistive automation&lt;br /&gt;
&lt;br /&gt;
Lisa Crispin suggested that Jared Richardson had done the continuous&lt;br /&gt;
integration work for SAS and might share insights and ideas.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Alternatives (3)&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
300 applications, small teams on each&lt;br /&gt;
&lt;br /&gt;
* Either many independent CI systems or an enterprise CI system&lt;br /&gt;
** unified view&lt;br /&gt;
** shared configuration&lt;br /&gt;
** reuse between teams&lt;br /&gt;
** security&lt;br /&gt;
** usable for small teams&lt;br /&gt;
&lt;br /&gt;
* Dependency management&lt;br /&gt;
** component level dependencies managed by tools&lt;br /&gt;
*** Anthill / Codestation&lt;br /&gt;
*** maven&lt;br /&gt;
*** ivy&lt;br /&gt;
** scheduling builds, which build should be run first&lt;br /&gt;
** how do I express the rules by which I select a component&lt;br /&gt;
*** version (specific version, pattern match a version, relational operator to version string, etc.)&lt;br /&gt;
*** acceptance test results&lt;/div&gt;</summary>
		<author><name>MarkEWaite</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Scaling_continuous_integration_to_the_enterprise&amp;diff=5672</id>
		<title>Scaling continuous integration to the enterprise</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Scaling_continuous_integration_to_the_enterprise&amp;diff=5672"/>
		<updated>2008-04-06T23:07:16Z</updated>

		<summary type="html">&lt;p&gt;MarkEWaite: Notes from the enterprise scale conitnuous integration discussions&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Enterprise Scale Continuous Integration ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem Definition&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
- 300 devs, 1 build break / year&lt;br /&gt;
  = build will be broken every day&lt;br /&gt;
  = slows us down&lt;br /&gt;
  = creates distrust of the source master&lt;br /&gt;
&lt;br /&gt;
- 2 hour build, 3 day acceptance test&lt;br /&gt;
&lt;br /&gt;
- hard to assign failure due to multiple commits per build&lt;br /&gt;
&lt;br /&gt;
- long cycle time on failure (hours before you know you broke something)&lt;br /&gt;
&lt;br /&gt;
- failures affect more people, are more expensive&lt;br /&gt;
&lt;br /&gt;
- Understanding the root cause of a failure is not obvious&lt;br /&gt;
&lt;br /&gt;
- How to handle 300 applications, each with a few devs; how to scale to many projects and still manage them at that level&lt;br /&gt;
&lt;br /&gt;
- How to manage many branches to many mains&lt;br /&gt;
&lt;br /&gt;
- Managing build time dependencies (unexpected, undetected coupling)&lt;br /&gt;
  = incorrect incremental builds&lt;br /&gt;
&lt;br /&gt;
Addressing the problems, alternatives, risks, and trade-offs&lt;br /&gt;
&lt;br /&gt;
- subcomponents&lt;br /&gt;
  = reduces build time, but&lt;br /&gt;
  = increases integration time&lt;br /&gt;
- build acceleration technology&lt;br /&gt;
  = parallel build, multi-machine, multi-core (Electric Accelerator, for instance)&lt;br /&gt;
  = buy fast machines (although disc I/O may dominate)&lt;br /&gt;
- modularize to get recent successful build, not compile&lt;br /&gt;
  = faster, less built (narrow the impact to a smaller team)&lt;br /&gt;
- Use &amp;quot;pre-flight&amp;quot; build (production build with many changes, not yet on the source master)&lt;br /&gt;
  = integration race conditions&lt;br /&gt;
  = faster hardware&lt;br /&gt;
  = parallel builds&lt;br /&gt;
&lt;br /&gt;
Alternatives (2)&lt;br /&gt;
&lt;br /&gt;
- 3 day acceptance test&lt;br /&gt;
  = throw bodies at the problem (but it is not scalable)&lt;br /&gt;
  = review the acceptance process for automation opportunities&lt;br /&gt;
  = increase automated testing inside the application (at the interfaces)&lt;br /&gt;
  = modularize tests, make them independent so they can run in parallel&lt;br /&gt;
  = accept human tests less frequently, automation running continuously&lt;br /&gt;
  = use assistive automation to support more effective exploratory testing&lt;br /&gt;
    * Brian Marick has some work going in this area&lt;br /&gt;
    * Michael Bolton describes his use of Watir as assistive automation&lt;br /&gt;
&lt;br /&gt;
Lisa Crispin suggested that Jared Richardson had done the continuous&lt;br /&gt;
integration work for SAS and might share insights and ideas.&lt;br /&gt;
&lt;br /&gt;
Alternatives (3)&lt;br /&gt;
&lt;br /&gt;
300 applications, small teams on each&lt;br /&gt;
&lt;br /&gt;
- Either many independent CI systems or an enterprise CI system&lt;br /&gt;
  = unified view&lt;br /&gt;
  = shared configuration&lt;br /&gt;
  = reuse between teams&lt;br /&gt;
  = security&lt;br /&gt;
  = usable for small teams&lt;br /&gt;
&lt;br /&gt;
- Dependency management&lt;br /&gt;
  = component level dependencies managed by tools&lt;br /&gt;
    * Anthill / Codestation&lt;br /&gt;
    * maven&lt;br /&gt;
    * ivy&lt;br /&gt;
  = scheduling builds, which build should be run first&lt;br /&gt;
  = how do I express the rules by which I select a component&lt;br /&gt;
    * version (specific version, pattern match a version, relational operator to version string, etc.)&lt;br /&gt;
    * acceptance test results&lt;/div&gt;</summary>
		<author><name>MarkEWaite</name></author>
	</entry>
</feed>