=Multi Agent AI Personas=
Tom: Explained the intro, but I missed it.
PJ: Talked about having a UI through which you can talk to different agents.
Max: Specialized agents trained on different data could talk to each other.
Anton: How do you create a persona?
Tom: You tell the agent to behave like a persona; you don't actually train it.
Xing: It's about refining a base model rather than training a new model.
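A minimal sketch of what Tom and Xing describe, assuming the OpenAI Python client; the persona text, model name, and question are illustrative, not from the discussion:
<syntaxhighlight lang="python">
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona is plain instruction text, not fine-tuning (illustrative wording).
PERSONA = (
    "You are the company's head of security. Answer cautiously, flag risks "
    "explicitly, and say so when you are unsure."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works; this name is an assumption
    messages=[
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": "Should we send customer data to an external LLM?"},
    ],
)
print(response.choices[0].message.content)
</syntaxhighlight>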
Max: AIs generally agree with you. To get them to critique each other, we'd need to prompt them to be critical of each other.
Tom: After a long, painful interaction I've tried asking what prompt I should have used to get there quicker. However, the AI often just tells me that I already took the quickest route.
Tom: We should be experimenting now because my CEO thinks there will be a step change in pricing.
Tom: We tried using a single AI with a prompt telling it to assume two personas, but it wasn't successful: it didn't switch personas cleanly.
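One way around the clean-switching problem Tom describes, which also covers Max's point about prompting for criticism: give each persona its own agent instance with its own system prompt and conversation history. A sketch assuming the OpenAI Python client; the personas, model name, and topic are made up:
<syntaxhighlight lang="python">
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumption

def make_agent(persona: str):
    """One agent = its own system prompt plus its own conversation history."""
    history = [{"role": "system",
                "content": persona + " Be critical of what the other agent says; do not simply agree."}]
    def step(incoming: str) -> str:
        history.append({"role": "user", "content": incoming})
        reply = client.chat.completions.create(model=MODEL, messages=history)
        text = reply.choices[0].message.content
        history.append({"role": "assistant", "content": text})
        return text
    return step

optimist = make_agent("You are a product manager who wants to ship fast.")
skeptic = make_agent("You are a security engineer who distrusts shortcuts.")

message = "Proposal: give the support chatbot access to full customer records."
for _ in range(3):  # a few critique rounds
    message = skeptic(optimist(message))
print(message)
</syntaxhighlight>
Keeping a separate history per agent is what stops the personas bleeding into each other, which is where the single-prompt attempt failed.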
Tim: CrewAI already does something similar.
Ade: We have a lot of customer data we could use to make specialist AIs, but we don't have legal permission to use it.
Everyone: GDPR concerns. User data is being sent to LLMs.
Graham: Can you include a human in the loop to evaluate responses and add some reinforcement learning?
Tom: The Gell-Mann Amnesia effect. When you are an expert you can see it produces low-quality content, yet on topics outside your expertise you don't notice and trust it anyway.
Anton: How can you validate an agent?
Tom/Max: You have a persona that forces agents to validate their sources. This still does not eliminate false positives but reduces the probability.
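The "validator persona" could be as simple as a system prompt along these lines; the wording is an assumption, not a tested prompt:
<syntaxhighlight lang="python">
# Illustrative validator persona for source checking.
VALIDATOR_PERSONA = """\
For every factual claim you make:
1. Name the source (document, URL, or dataset) it comes from.
2. If you cannot name a source, mark the claim UNVERIFIED.
3. Never present an UNVERIFIED claim as fact.
"""
</syntaxhighlight>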
Anton: If you still have to check, I don't see the value for anything that is critical to get right.
PJ: The utility is that everyone can ask the CEO agent questions at the same time.
Tom: If that frees up 10x of the CEO's time, and validating the responses costs less time than that, it is still a net positive for the CEO's time.
PJ: We replaced functional tests with metrics and rolled back deployments quickly when there was a problem. Perhaps you can do the same with unverified decisions.
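A sketch of PJ's idea applied to agent decisions: act, watch a metric, roll back if it degrades. The metric source, threshold, and rollback hook are all hypothetical:
<syntaxhighlight lang="python">
import time

BASELINE_ERROR_RATE = 0.02  # assumed pre-decision baseline
THRESHOLD = 1.5             # roll back if errors grow 50% above baseline

def current_error_rate() -> float:
    # Hypothetical: read from your monitoring system.
    raise NotImplementedError

def rollback(decision_id: str) -> None:
    # Hypothetical: undo whatever the agent decided.
    raise NotImplementedError

def monitor(decision_id: str, window_s: int = 600, poll_s: int = 30) -> bool:
    """Return True if the decision survived the observation window."""
    deadline = time.time() + window_s
    while time.time() < deadline:
        if current_error_rate() > BASELINE_ERROR_RATE * THRESHOLD:
            rollback(decision_id)
            return False
        time.sleep(poll_s)
    return True
</syntaxhighlight>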
Graham: You'd have to apply this not only to revenue metrics but also to security and other dimensions that might be hard to measure.
Tom: Can agents maintain agent-generated code?
PJ: When I'm vibe coding I often ask the agent to critique and refactor the code. This could be done by another agent.
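PJ's critique-and-refactor loop, run between two agents instead of one, might look like this sketch; it assumes the OpenAI Python client, and the prompts and round count are illustrative:
<syntaxhighlight lang="python">
from openai import OpenAI

client = OpenAI()

def ask(system: str, user: str) -> str:
    """Single-turn call with a given persona."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return reply.choices[0].message.content

code = ask("You are a Python developer.", "Write a function that parses ISO dates.")
for _ in range(2):  # a couple of review rounds
    critique = ask("You are a strict code reviewer. List concrete defects.", code)
    code = ask("You are a Python developer.",
               f"Revise this code:\n{code}\n\nReview feedback:\n{critique}")
print(code)
</syntaxhighlight>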
Graham: If HR is represented by an agent this could be legally difficult.
Tom: The agent is only an advisor; the human has to be accountable.
Xing: When humans are in a room and have to come to consensus, they usually make good decisions. Would the same not happen with agents?
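Xing's question could be tested with a simple consensus loop: each persona answers, reads the previous round's answers, and revises. A sketch assuming the OpenAI Python client; the personas and question are made up:
<syntaxhighlight lang="python">
from openai import OpenAI

client = OpenAI()
PERSONAS = ["a cautious CFO", "an ambitious CTO", "a pragmatic engineer"]

def answer(persona: str, question: str, previous: list[str]) -> str:
    context = "\n".join(f"- {a}" for a in previous) or "(none yet)"
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Seek consensus but keep your expertise."},
            {"role": "user",
             "content": f"Question: {question}\nAnswers from the previous round:\n"
                        f"{context}\nGive your (possibly revised) answer."},
        ],
    )
    return reply.choices[0].message.content

question = "Should we adopt agent-generated code in production?"
answers: list[str] = []
for _ in range(2):  # two consensus rounds
    answers = [answer(p, question, answers) for p in PERSONAS]
for p, a in zip(PERSONAS, answers):
    print(f"{p}: {a}\n")
</syntaxhighlight>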