Multi Agent AI Personas
From CitconWiki
- Explained the intro, but I missed it.
- Talked about having a UI where you can talk to different agents.
- Specialized agents trained on different data could talk to each other.
- How do you create a persona?
- You tell the agent to behave like a persona; you don't actually train it (see the persona-prompt sketch after this list).
- It's about refining a base model rather than training a new model
- AIs generally agree with you. To get them to critique each other, we'd need to prompt them to be critical of each other (see the cross-critique sketch after this list).
- After a long, painful interaction I've tried asking what prompt I should have used to get there quicker. However, the AI often just tells me that I already took the quickest route.
- We should be experimenting now because my CEO thinks there will be a step change in pricing.
- We tried using a single AI with a prompt telling it to assume two personas, but it wasn't successful. It didn't switch personas cleanly.
- CrewAI already does something similar (see the CrewAI sketch after this list).
- We have a lot of customer data we could use to make specialist AIs, but we don't have legal permission to use it.
- GDPR concerns: user data is being sent to LLMs.
- Can you include a human in the loop to evaluate responses and add some reinforcement learning? (See the human-in-the-loop sketch after this list.)
- Gell-Mann Amnesia effect: when you are an expert in the topic, you can see it produces low-quality content.
- How can you validate an agent?
- You have a persona that forces agents to validate their sources. This still does not eliminate false positives but reduces the probability.
- If you still have to check, I don't see the value for anything that is critical to get right.
- The utility is that everyone can ask the CEO questions at the same time.
- If that frees up 10x the CEO's time and he can validate the responses in less time than that, it is still a net positive for the CEO's time.
- We replaced functional tests with metrics and rolled back deployments quickly when there was a problem. Perhaps you can do the same with unverified decisions (see the metric-rollback sketch after this list).
- You'd have to apply this not only to revenue metrics but also to security and other dimensions that might be hard to measure.
- Can agents maintain agent-generated code?
- When I'm vibe coding, I often ask the agent to critique and refactor the code. This could be done by another agent.
- If HR is represented by an agent this could be legally difficult.
- The agent is only an advisor; the human has to be accountable.
- When humans are in a room and have to come to consensus they usually make good decisions. Would the same not happen with agents?
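Persona-prompt sketch. A minimal sketch of the "behave like a persona, don't train it" point above, assuming the OpenAI Python client and an illustrative model name; any chat API that accepts a system message would work the same way.

```python
# Minimal sketch: create a "persona" by prompting a base model, not by training it.
# Assumes the OpenAI Python client is installed and OPENAI_API_KEY is set;
# the model name is an assumption and can be swapped for any chat model.
from openai import OpenAI

client = OpenAI()

SECURITY_REVIEWER_PERSONA = (
    "You are a sceptical security reviewer. "
    "Question assumptions, cite the specific risk you see, "
    "and say 'I don't know' rather than guessing."
)

def ask_persona(question: str) -> str:
    """Send a question to the base model wrapped in a persona system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works
        messages=[
            {"role": "system", "content": SECURITY_REVIEWER_PERSONA},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_persona("Should we store session tokens in local storage?"))
```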
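Cross-critique sketch. This illustrates prompting agents to be critical of each other: two separate conversations, each wrapped in its own persona, with the critic explicitly told not to agree by default. The personas, the example proposal, and the two-round loop are illustrative assumptions.

```python
# Sketch: two persona agents critiquing each other, using separate conversations
# so the personas don't bleed into one another (the "single AI, two personas"
# problem noted above). Round count and persona wording are assumptions.
from openai import OpenAI

client = OpenAI()

def chat(system_prompt: str, history: list[dict]) -> str:
    """Run one turn of a conversation under a given persona system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[{"role": "system", "content": system_prompt}] + history,
    )
    return response.choices[0].message.content

OPTIMIST = "You are a product manager proposing features. Defend your proposal."
CRITIC = (
    "You are a critical reviewer. Do not agree by default: "
    "point out at least one concrete weakness in every reply."
)

proposal = "We should let the support chatbot issue refunds automatically."
for round_number in range(2):  # a couple of critique rounds as an illustration
    critique = chat(CRITIC, [{"role": "user", "content": proposal}])
    proposal = chat(
        OPTIMIST,
        [{"role": "user", "content": f"A reviewer said: {critique}\nRevise your proposal."}],
    )
    print(f"--- round {round_number + 1} ---\n{critique}\n\n{proposal}\n")
```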
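CrewAI sketch. CrewAI was mentioned as already doing something similar. Below is a rough sketch following CrewAI's Agent/Task/Crew pattern; the roles, goals, and task wording are made up for illustration, and parameter details may differ between CrewAI versions.

```python
# Rough sketch of the CrewAI pattern: specialised agents with distinct personas
# collaborating on tasks. Roles, goals and task text are illustrative
# assumptions; check the CrewAI docs for the current API.
from crewai import Agent, Crew, Task

architect = Agent(
    role="Software Architect",
    goal="Propose a design for the new reporting service",
    backstory="A cautious architect who values simplicity and observability.",
)
reviewer = Agent(
    role="Security Reviewer",
    goal="Find weaknesses in proposed designs",
    backstory="A sceptical reviewer who assumes every input is hostile.",
)

design_task = Task(
    description="Draft a high-level design for the reporting service.",
    expected_output="A short design document.",
    agent=architect,
)
review_task = Task(
    description="Critique the proposed design and list concrete risks.",
    expected_output="A numbered list of risks with suggested mitigations.",
    agent=reviewer,
)

crew = Crew(agents=[architect, reviewer], tasks=[design_task, review_task])
result = crew.kickoff()
print(result)
```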
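Human-in-the-loop sketch. One way to read the human-in-the-loop point: record a human verdict on each agent answer as a preference record that could later feed fine-tuning or RLHF-style reinforcement learning. The sketch below only does the capture step; the JSONL file and the 1-5 rating scale are assumptions.

```python
# Sketch: keep a human in the loop by recording their verdict on each agent
# answer as a preference record. The JSONL format and 1-5 scale are assumptions;
# the records could later be used for fine-tuning or RLHF-style training.
import json
from datetime import datetime, timezone

def record_feedback(question: str, answer: str, path: str = "feedback.jsonl") -> None:
    """Ask a human to rate an agent answer and append the verdict to a JSONL log."""
    rating = int(input(f"Q: {question}\nA: {answer}\nRate 1 (bad) to 5 (good): "))
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        "rating": rating,
    }
    with open(path, "a", encoding="utf-8") as handle:
        handle.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    record_feedback(
        "What is our refund policy for annual plans?",
        "Annual plans can be refunded within 30 days of purchase.",
    )
```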
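Metric-rollback sketch. The "metrics instead of functional tests" idea, applied to unverified agent decisions, might look like the sketch below: apply the decision, read a few metrics, and roll back if any regress. The metric names, thresholds, and the apply/rollback hooks are all assumptions.

```python
# Sketch: guard an unverified (agent-made) decision with metrics and a rollback,
# mirroring the "metrics instead of functional tests" approach described above.
# Metric names, thresholds and the apply/rollback callables are assumptions.
from typing import Callable, Dict

def guarded_rollout(
    apply_change: Callable[[], None],
    rollback: Callable[[], None],
    read_metrics: Callable[[], Dict[str, float]],
    thresholds: Dict[str, float],
) -> bool:
    """Apply a change, then roll back if any watched metric drops below its threshold."""
    apply_change()
    metrics = read_metrics()
    regressions = {
        name: value
        for name, value in metrics.items()
        if name in thresholds and value < thresholds[name]
    }
    if regressions:
        print(f"Regressions detected, rolling back: {regressions}")
        rollback()
        return False
    return True

# Hypothetical usage with stubbed hooks:
if __name__ == "__main__":
    ok = guarded_rollout(
        apply_change=lambda: print("applying agent decision"),
        rollback=lambda: print("rolling back"),
        read_metrics=lambda: {"conversion_rate": 0.031, "error_rate_ok": 1.0},
        thresholds={"conversion_rate": 0.030},
    )
    print("kept" if ok else "reverted")
```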