<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Anderew</id>
	<title>CitconWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Anderew"/>
	<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Special:Contributions/Anderew"/>
	<updated>2026-04-24T17:07:52Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.11</generator>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Continuous_performance_test&amp;diff=7970</id>
		<title>Continuous performance test</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Continuous_performance_test&amp;diff=7970"/>
		<updated>2010-11-19T21:24:42Z</updated>

		<summary type="html">&lt;p&gt;Anderew: New page: I cant really do this excellent session justice so hopefully somebody will come along and update these notes!  This is what I took away:  == Performance testers as first class citizens == ...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;I can&amp;#039;t really do this excellent session justice so hopefully somebody will come along and update these notes!&lt;br /&gt;
&lt;br /&gt;
This is what I took away:&lt;br /&gt;
&lt;br /&gt;
== Performance testers as first class citizens ==&lt;br /&gt;
&lt;br /&gt;
There was a mixed group of performance test architects (!), developers, and testers.&lt;br /&gt;
&lt;br /&gt;
The performance testers felt that they were sometimes misunderstood and poorly supported by the developers.&lt;br /&gt;
&lt;br /&gt;
Most people agreed that in an agile environment you must construct stories that explicitly feature the performance test analyst as a stakeholder. E.g.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;The performance test analyst would like to be able to view real-time measures of the latency of requests to each type of downstream system so that they can identify whether there is a bottleneck and tune accordingly.&amp;#039;&amp;#039; &lt;br /&gt;
&lt;br /&gt;
There was also a feeling from several developers that the performance testers were sometimes overly protective of their knowledge (for various reasons) and did not want to involve developers or have their contribution automated in any way. This reflects my own experience on several projects. &lt;br /&gt;
&lt;br /&gt;
[Apologies: what follows was never discussed in the session, but I think it is relevant.]&lt;br /&gt;
&lt;br /&gt;
For example, I was involved in a high-volume, high-concurrency, low-latency mobile application. The dedicated perf test team insisted on working in isolation. At great expense (license costs for tools, consultancy fees) they produced a test suite. I insisted on being closely involved as the tech arch with delivery responsibility. I found that the perf tests being executed did not mirror the user stories supported by the system and did not stress the parts of the architecture that had been identified as potentially weak (e.g. do not spend 80% of your effort load testing the download of a resource which is actually static while ignoring the resource which is protected by a call to a concurrency-constrained authentication system). I also found the scripting tools themselves were a blast from the past: they were as complex as any Java code we ran but were written without structure or rigor in an obtuse dialect of pseudo-C. I rebuilt the load tests from scratch using a simple Open Source tool that developers were able to run all the time and maintain themselves. Our new load tests were empirically proven fit for purpose. The load testing team were made redundant and replaced with a combination of developers and operations staff (in true devops style).&lt;br /&gt;
&lt;br /&gt;
== There is a difference between component perf testing and system testing ==&lt;br /&gt;
&lt;br /&gt;
We struggled for a while with terminology. Developers were keen to test components and measure each component&amp;#039;s relative performance as development progressed. They were also keen not to optimize early and end up fixing something that, when the system was exercised holistically, was irrelevant. &lt;br /&gt;
&lt;br /&gt;
Perf testers were keen that we focus on complete end-to-end system testing as, at the end of the day, that was the only measure that actually counted and that could be relied on to reflect reality.&lt;br /&gt;
&lt;br /&gt;
I think in summary we were all in violent agreement.&lt;br /&gt;
&lt;br /&gt;
== Only one person appeared to be really doing continuous performance testing ==&lt;br /&gt;
&lt;br /&gt;
Many people in the room were doing performance testing. Several people had done performance testing early and often but still essentially executed manual tests with manual analysis. One person was lucky enough to have a perf test environment (in EC2) that was identical to production. That environment was torn down and recreated every day, several perf tests based on use cases were executed, and the results were graphed. Nirvana!&lt;br /&gt;
&lt;br /&gt;
However, that system was not yet live. Several of the group questioned how the perf tests would evolve to reflect a post-launch database and whether the use cases would be reworked to reflect real usage of the system.&lt;br /&gt;
&lt;br /&gt;
The group member with the perfect environment admitted that even with this toy he was still going to do a perf test exercise towards the end of development.&lt;br /&gt;
&lt;br /&gt;
== I don&amp;#039;t think we addressed Arnaud&amp;#039;s point! ==&lt;br /&gt;
&lt;br /&gt;
The submitter of the session was looking for something different from what we eventually discussed, I suspect. He wanted some instantaneous measure of the cost of execution of a piece of code, possibly from static analysis of code. He felt that this was the only true way to really progress high performance code development. I have not done his suggestions justice at all here - sorry!&lt;/div&gt;</summary>
		<author><name>Anderew</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONEurope2010Sessions&amp;diff=7969</id>
		<title>CITCONEurope2010Sessions</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONEurope2010Sessions&amp;diff=7969"/>
		<updated>2010-11-19T20:52:41Z</updated>

		<summary type="html">&lt;p&gt;Anderew: /* 11:15 - 12:15 Sessions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===10:00-11:00 Sessions===&lt;br /&gt;
[[Large Scale CI]]&lt;br /&gt;
&lt;br /&gt;
[[Pairing Techniques]]&lt;br /&gt;
&lt;br /&gt;
[[Performance Testing in-the-small]]&lt;br /&gt;
&lt;br /&gt;
[[Database testing and deployment]]&lt;br /&gt;
&lt;br /&gt;
[[Scaling agile]]&lt;br /&gt;
&lt;br /&gt;
===11:15 - 12:15 Sessions===&lt;br /&gt;
[[Managing Multiple Dependencies]]&lt;br /&gt;
&lt;br /&gt;
[[Narrative Framework]]&lt;br /&gt;
&lt;br /&gt;
[[Continuous performance test]]&lt;br /&gt;
&lt;br /&gt;
===14:00 - 15:00 Sessions===&lt;br /&gt;
[[Long Term Value of Acceptance Tests]]&lt;br /&gt;
&lt;br /&gt;
[[Overcoming Organisational Defensiveness]]&lt;br /&gt;
&lt;br /&gt;
=== 15:15-16:15 Sessions ===&lt;br /&gt;
[[Continuous Deployment]]&lt;br /&gt;
&lt;br /&gt;
[[.NET CI Stack]]&lt;br /&gt;
&lt;br /&gt;
[[Using KPIs/Getting a data-driven org.]]&lt;br /&gt;
&lt;br /&gt;
=== 16:30-17:30 Sessions ===&lt;br /&gt;
&lt;br /&gt;
[[MultipleClientDeployment]]&lt;br /&gt;
&lt;br /&gt;
[[Beyond Basic TDD]]&lt;/div&gt;</summary>
		<author><name>Anderew</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Scaling_agile&amp;diff=7968</id>
		<title>Scaling agile</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Scaling_agile&amp;diff=7968"/>
		<updated>2010-11-19T20:50:55Z</updated>

		<summary type="html">&lt;p&gt;Anderew: New page: Attended by a small group from various backgrounds but everybody was practicing agile AND had experienced scaling up (not much CI in this session).  == Distributed Agile ==  Several partic...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Attended by a small group from various backgrounds but everybody was practicing agile AND had experienced scaling up (not much CI in this session).&lt;br /&gt;
&lt;br /&gt;
== Distributed Agile ==&lt;br /&gt;
&lt;br /&gt;
Several participants described the issues encountered when running a distributed team across several time zones. These were mostly practical issues and we discussed pragmatic solutions that had been tried.&lt;br /&gt;
&lt;br /&gt;
One had a team with members in Vietnam and Reading. They managed to find a time of day when both teams were working (or at least awake!) but ran into problems as one team was finishing day X of the sprint whilst the other was just starting. They had experienced confusion around reporting progress on tasks and filling in the burndown. Several others in the group reported similar issues when the timezone difference was significant (usually transatlantic or to India). The consensus seemed to be that these were teething issues and that once an approach had been agreed the problem went away.&lt;br /&gt;
&lt;br /&gt;
When dealing with a distributed team everybody agreed that everybody had to use the same medium for communication. Where one part of the team was sitting in close proximity and a number of others were working remotely, they had tried having a task board which some of the group could see and others used a webcam to watch. This did not work well as it always put somebody at a disadvantage and led to that person or persons not participating fully. Everyone (I think - my notes are vague!) agreed that if one part of the team was remote then everybody had to use headsets and an electronic task board. Communication channels had to be homogeneous.&lt;br /&gt;
&lt;br /&gt;
We discussed the practical issues of scheduling a demo when the team and various stakeholders are distributed around the planet. Everybody seemed to use Webex or some other desktop-sharing application. One member described their procedure where the scrum master would schedule a 15 minute preparation period before the demo during which they would ring people up and get them connected. This sounded like a big investment but from my own experience is a great idea as it avoids a fifteen minute delay eating into the start of the demo.&lt;br /&gt;
&lt;br /&gt;
== Scaling up ==&lt;br /&gt;
&lt;br /&gt;
We talked about scaling up agile teams. Success was reported by a group member who had commissioned a distributed team in Eastern Europe. He had flown out and run a mini-sprint with the remote team so they could experience all phases of the lifecycle, the artefacts produced and the vocabulary used. This sounded like an excellent approach to me.&lt;br /&gt;
&lt;br /&gt;
Another group member used rotation where an experienced member of a team would be sent to work overseas for a significant period in order to seed a new team with the culture of the initial group in the UK. This had proved effective when creating new teams in Boston and the seeding period had been three months.&lt;br /&gt;
&lt;br /&gt;
We talked about experiences one group member had had with larger agile teams. In one instance the large team had been a group of committed and experienced agilists. They had sustained a team of fifteen people effectively. Everyone was co-located and the team took responsibility for organising themselves in such a way that the agile processes (e.g. planning, standups) did not become unwieldy. &lt;br /&gt;
&lt;br /&gt;
In the second instance, where the team was a mix of contractors, consultants and permies at a client site, a different approach was required (see [http://www.slideshare.net/stephenellliott/agile-methods-experience-report-by-a-technical-architect-andrew-rendell-valtech report] and [http://www.slideshare.net/stephenellliott/the-role-of-architect-in-an-agile-organisation-andrew-rendell slides]). Here the team had to be split to keep it effective and the architecture of the application refactored so that it better reflected the teams. One of the group pointed out that this is an interesting twist on [http://en.wikipedia.org/wiki/Conway%27s_Law Conway&amp;#039;s law].&lt;/div&gt;</summary>
		<author><name>Anderew</name></author>
	</entry>
</feed>