<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Max.pimm</id>
	<title>CitconWiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://citconf.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Max.pimm"/>
	<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Special:Contributions/Max.pimm"/>
	<updated>2026-04-24T21:37:56Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.11</generator>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Multi_Agent_AI_Personas&amp;diff=17034</id>
		<title>Multi Agent AI Personas</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Multi_Agent_AI_Personas&amp;diff=17034"/>
		<updated>2025-09-28T06:34:50Z</updated>

		<summary type="html">&lt;p&gt;Max.pimm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* Explains intro, but I missed it&lt;br /&gt;
&lt;br /&gt;
* Talked about having a UI where you can talk to different agents.&lt;br /&gt;
&lt;br /&gt;
* Specialized agents trained on different data could talk to each other.  &lt;br /&gt;
&lt;br /&gt;
* How do you create a persona?&lt;br /&gt;
&lt;br /&gt;
* You tell the agent to behave like a persona; you don&amp;#039;t actually train it&lt;br /&gt;
&lt;br /&gt;
* It&amp;#039;s about refining a base model rather than training a new model&lt;br /&gt;
&lt;br /&gt;
* AIs generally agree with you. In order to get them to critique each other we&amp;#039;d need to prompt them to be critical of each other.&lt;br /&gt;
&lt;br /&gt;
* After a long, painful interaction I&amp;#039;ve tried asking what prompt I should have used to get there quicker. However, the AI often just tells me that I already took the quickest route.&lt;br /&gt;
&lt;br /&gt;
* We should be experimenting now because my CEO thinks there will be a step change in pricing.&lt;br /&gt;
&lt;br /&gt;
* We tried using a single AI with a prompt telling it to assume two personas, but it wasn&amp;#039;t successful; it didn&amp;#039;t switch personas cleanly.&lt;br /&gt;
&lt;br /&gt;
* Crew AI already does something similar&lt;br /&gt;
&lt;br /&gt;
* We have a lot of customer data we could use to make specialist AIs, but we don&amp;#039;t have legal permission to use it.&lt;br /&gt;
&lt;br /&gt;
* GDPR concerns: user data is being sent to LLMs&lt;br /&gt;
&lt;br /&gt;
* Can you include a human in the loop to evaluate responses and add some reinforcement learning?&lt;br /&gt;
&lt;br /&gt;
* Gell-Mann Amnesia Effect: when the topic is in your area of expertise, you can see that it produces low-quality content.&lt;br /&gt;
&lt;br /&gt;
* How can you validate an agent?&lt;br /&gt;
&lt;br /&gt;
* You have a persona that forces agents to validate their sources. This still does not eliminate false positives but reduces the probability.&lt;br /&gt;
&lt;br /&gt;
* If you still have to check, I don&amp;#039;t see the value for anything that is critical to get right&lt;br /&gt;
&lt;br /&gt;
* The utility is that everyone can ask the CEO questions at the same time.&lt;br /&gt;
&lt;br /&gt;
* If that frees up 10x the CEO&amp;#039;s time and he can validate the responses in less than that, it is still a net positive for the CEO&amp;#039;s time.&lt;br /&gt;
&lt;br /&gt;
* We replaced functional tests with metrics and rolled back deployments quickly when there was a problem. Perhaps you can do the same with unverified decisions.&lt;br /&gt;
&lt;br /&gt;
* You&amp;#039;d have to apply this not only to revenue metrics but also to security and other dimensions that might be hard to measure.&lt;br /&gt;
&lt;br /&gt;
* Can agents maintain agent-generated code?&lt;br /&gt;
&lt;br /&gt;
* When I&amp;#039;m vibe coding I often ask the agent to critique and refactor the code. This could be done by another agent.&lt;br /&gt;
&lt;br /&gt;
* If HR is represented by an agent this could be legally difficult.&lt;br /&gt;
&lt;br /&gt;
* The agent is only an advisor; the human has to be accountable&lt;br /&gt;
&lt;br /&gt;
* When humans are in a room and have to come to consensus they usually make good decisions. Would the same not happen with agents?&lt;/div&gt;</summary>
		<author><name>Max.pimm</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Multi_Agent_AI_Personas&amp;diff=17033</id>
		<title>Multi Agent AI Personas</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Multi_Agent_AI_Personas&amp;diff=17033"/>
		<updated>2025-09-28T06:31:33Z</updated>

		<summary type="html">&lt;p&gt;Max.pimm: /* Multi Agent AI Personas */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Tom: Explains intro, but I missed it&lt;br /&gt;
&lt;br /&gt;
PJ: Talked about having a UI where you can talk to different agents.&lt;br /&gt;
&lt;br /&gt;
Max: Specialized agents trained on different data could talk to each other.  &lt;br /&gt;
&lt;br /&gt;
Anton: How do you create a persona?&lt;br /&gt;
&lt;br /&gt;
Tom: You tell the agent to behave like a persona; you don&amp;#039;t actually train it&lt;br /&gt;
&lt;br /&gt;
Xing: It&amp;#039;s about refining a base model rather than training a new model&lt;br /&gt;
&lt;br /&gt;
Max: AIs generally agree with you. In order to get them to critique each other we&amp;#039;d need to prompt them to be critical of each other.&lt;br /&gt;
&lt;br /&gt;
Tom: After a long, painful interaction I&amp;#039;ve tried asking what prompt I should have used to get there quicker. However, the AI often just tells me that I already took the quickest route.&lt;br /&gt;
&lt;br /&gt;
Tom: We should be experimenting now because my CEO thinks there will be a step change in pricing.&lt;br /&gt;
&lt;br /&gt;
Tom: We tried using a single AI with a prompt telling it to assume two personas, but it wasn&amp;#039;t successful; it didn&amp;#039;t switch personas cleanly.&lt;br /&gt;
&lt;br /&gt;
Tim: Crew AI already does something similar&lt;br /&gt;
&lt;br /&gt;
Ade: We have a lot of customer data we could use to make specialist AIs, but we don&amp;#039;t have legal permission to use it.&lt;br /&gt;
&lt;br /&gt;
Everyone: GDPR concerns: user data is being sent to LLMs&lt;br /&gt;
&lt;br /&gt;
Graham: Can you include a human in the loop to evaluate responses and add some reinforcement learning?&lt;br /&gt;
&lt;br /&gt;
Tom: Gell-Mann Amnesia Effect: when the topic is in your area of expertise, you can see that it produces low-quality content.&lt;br /&gt;
&lt;br /&gt;
Anton: How can you validate an agent?&lt;br /&gt;
&lt;br /&gt;
Tom/Max: You have a persona that forces agents to validate their sources. This still does not eliminate false positives but reduces the probability.&lt;br /&gt;
&lt;br /&gt;
Anton: If you still have to check, I don&amp;#039;t see the value for anything that is critical to get right&lt;br /&gt;
&lt;br /&gt;
PJ: The utility is that everyone can ask the CEO questions at the same time.&lt;br /&gt;
&lt;br /&gt;
Tom: If that frees up 10x the CEO&amp;#039;s time and he can validate the responses in less than that, it is still a net positive for the CEO&amp;#039;s time.&lt;br /&gt;
&lt;br /&gt;
PJ: We replaced functional tests with metrics and rolled back deployments quickly when there was a problem. Perhaps you can do the same with unverified decisions.&lt;br /&gt;
&lt;br /&gt;
Graham: You&amp;#039;d have to apply this not only to revenue metrics but also to security and other dimensions that might be hard to measure.&lt;br /&gt;
&lt;br /&gt;
Tom: Can agents maintain agent-generated code?&lt;br /&gt;
&lt;br /&gt;
PJ: When I&amp;#039;m vibe coding I often ask the agent to critique and refactor the code. This could be done by another agent.&lt;br /&gt;
&lt;br /&gt;
Graham: If HR is represented by an agent this could be legally difficult.&lt;br /&gt;
&lt;br /&gt;
Tom: The agent is only an advisor; the human has to be accountable&lt;br /&gt;
&lt;br /&gt;
Xing: When humans are in a room and have to come to consensus they usually make good decisions. Would the same not happen with agents?&lt;/div&gt;</summary>
		<author><name>Max.pimm</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=Multi_Agent_AI_Personas&amp;diff=17032</id>
		<title>Multi Agent AI Personas</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=Multi_Agent_AI_Personas&amp;diff=17032"/>
		<updated>2025-09-28T06:31:10Z</updated>

		<summary type="html">&lt;p&gt;Max.pimm: Created page with &amp;quot;=Multi Agent AI Personas=  Tom: Explains intro, but I missed it PJ: Talked about having a UI that you can talk to different agents. Max: Specialized agents trained on differen...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Multi Agent AI Personas=&lt;br /&gt;
&lt;br /&gt;
Tom: Explains intro, but I missed it&lt;br /&gt;
PJ: Talked about having a UI where you can talk to different agents.&lt;br /&gt;
Max: Specialized agents trained on different data could talk to each other.&lt;br /&gt;
Anton: How do you create a persona?&lt;br /&gt;
Tom: You tell the agent to behave like a persona; you don&amp;#039;t actually train it&lt;br /&gt;
&lt;br /&gt;
Xing: It&amp;#039;s about refining a base model rather than training a new model&lt;br /&gt;
&lt;br /&gt;
Max: AIs generally agree with you. In order to get them to critique each other we&amp;#039;d need to prompt them to be critical of each other.&lt;br /&gt;
&lt;br /&gt;
Tom: After a long, painful interaction I&amp;#039;ve tried asking what prompt I should have used to get there quicker. However, the AI often just tells me that I already took the quickest route.&lt;br /&gt;
&lt;br /&gt;
Tom: We should be experimenting now because my CEO thinks there will be a step change in pricing.&lt;br /&gt;
&lt;br /&gt;
Tom: We tried using a single AI with a prompt telling it to assume two personas, but it wasn&amp;#039;t successful; it didn&amp;#039;t switch personas cleanly.&lt;br /&gt;
&lt;br /&gt;
Tim: Crew AI already does something similar&lt;br /&gt;
&lt;br /&gt;
Ade: We have a lot of customer data we could use to make specialist AIs, but we don&amp;#039;t have legal permission to use it.&lt;br /&gt;
&lt;br /&gt;
Everyone: GDPR concerns: user data is being sent to LLMs&lt;br /&gt;
&lt;br /&gt;
Graham: Can you include a human in the loop to evaluate responses and add some reinforcement learning?&lt;br /&gt;
&lt;br /&gt;
Tom: Gell-Mann Amnesia Effect: when the topic is in your area of expertise, you can see that it produces low-quality content.&lt;br /&gt;
&lt;br /&gt;
Anton: How can you validate an agent?&lt;br /&gt;
&lt;br /&gt;
Tom/Max: You have a persona that forces agents to validate their sources. This still does not eliminate false positives but reduces the probability.&lt;br /&gt;
&lt;br /&gt;
Anton: If you still have to check, I don&amp;#039;t see the value for anything that is critical to get right&lt;br /&gt;
&lt;br /&gt;
PJ: The utility is that everyone can ask the CEO questions at the same time.&lt;br /&gt;
&lt;br /&gt;
Tom: If that frees up 10x the CEO&amp;#039;s time and he can validate the responses in less than that, it is still a net positive for the CEO&amp;#039;s time.&lt;br /&gt;
&lt;br /&gt;
PJ: We replaced functional tests with metrics and rolled back deployments quickly when there was a problem. Perhaps you can do the same with unverified decisions.&lt;br /&gt;
&lt;br /&gt;
Graham: You&amp;#039;d have to apply this not only to revenue metrics but also to security and other dimensions that might be hard to measure.&lt;br /&gt;
&lt;br /&gt;
Tom: Can agents maintain agent-generated code?&lt;br /&gt;
&lt;br /&gt;
PJ: When I&amp;#039;m vibe coding I often ask the agent to critique and refactor the code. This could be done by another agent.&lt;br /&gt;
&lt;br /&gt;
Graham: If HR is represented by an agent this could be legally difficult.&lt;br /&gt;
&lt;br /&gt;
Tom: The agent is only an advisor; the human has to be accountable&lt;br /&gt;
&lt;br /&gt;
Xing: When humans are in a room and have to come to consensus they usually make good decisions. Would the same not happen with agents?&lt;/div&gt;</summary>
		<author><name>Max.pimm</name></author>
	</entry>
	<entry>
		<id>https://citconf.com/wiki/index.php?title=CITCONEurope2025Sessions&amp;diff=17031</id>
		<title>CITCONEurope2025Sessions</title>
		<link rel="alternate" type="text/html" href="https://citconf.com/wiki/index.php?title=CITCONEurope2025Sessions&amp;diff=17031"/>
		<updated>2025-09-28T06:26:59Z</updated>

		<summary type="html">&lt;p&gt;Max.pimm: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Room&lt;br /&gt;
! 10:00 - 11:00&lt;br /&gt;
! 11:15 - 12:15&lt;br /&gt;
! 12:30 - 14:00 (Lunch)&lt;br /&gt;
! 14:00 - 15:00&lt;br /&gt;
! 15:15 - 16:15&lt;br /&gt;
! 16:30 - 17:30&lt;br /&gt;
|-&lt;br /&gt;
| Presentation Area (6th Floor)&lt;br /&gt;
| [[Teaching Next Gen of SW Eng.]]&lt;br /&gt;
| [[Elephant Carpaccio Daily Delivery]]&lt;br /&gt;
| &lt;br /&gt;
| [[Autism++]]&lt;br /&gt;
| [[Controversial but useful practices]]&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Space 1 (7th Floor)&lt;br /&gt;
| &lt;br /&gt;
| [[What is CBT and how it can help you?]]&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| [[What if your boss doesn’t see/is the problem?]]&lt;br /&gt;
|-&lt;br /&gt;
| Space 2 (7th Floor)&lt;br /&gt;
| [[HardCoreGit]]&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| Space 3 (7th Floor)&lt;br /&gt;
| [[Multi Agent AI Personas]]&lt;br /&gt;
| [[War of the CI Servers]]&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| [[Code until you die / older female devs]]&lt;br /&gt;
|-&lt;br /&gt;
| Kaizen Room (7th Floor)&lt;br /&gt;
| [[How can we accelerate change adoption in users?]]&lt;br /&gt;
| [[Vibe Coding with Glamorous Toolkit]]&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
| [[In the age of the vibe coder, Kata == waste]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Max.pimm</name></author>
	</entry>
</feed>