Convergence Is Evidence

PAC — Concept: New Pricing Tier
================================

  Persona              Score    Top Objection
  ─────                ─────    ─────────────
  CISO-FSI             2/5      won't pass procurement w/o SOC 2
  AI Builder           4/5      —
  CTO-FSI              3/5      won't pass procurement w/o SOC 2
  Healthcare-CIO       2/5      compliance gap
  Sovereign-CTO        3/5      won't pass procurement w/o SOC 2
  ISV-Founder          4/5      pricing competitive
  Platform-PM          3/5      —
  CISO-Healthcare      2/5      won't pass procurement w/o SOC 2
  Sovereign-CDO        3/5      won't pass procurement w/o SOC 2
  AI Builder #2        4/5      —

  CONVERGENCE: 5 of 10 personas independently flagged
               the same objection — procurement / SOC 2.

Aaron runs a thing called a PAC — a Product Advisory Council. Ten buyer personas, each one its own agent, each one with its own grounding documents, its own prompts, its own scratch context. He drops a concept into the pool — a positioning idea, a feature, a pricing change — and the agents each grade it. They don’t talk to each other. They don’t see each other’s drafts. Each one writes a 1-to-5 score with reasoning, surfaces objections from its persona’s point of view, and exits.
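The shape of a run like that can be sketched in a few lines. Everything here is hypothetical — the persona names, the grounding strings, and the `grade` stub standing in for a real model call — but the structure is the point: each agent sees only its own grounding plus the concept, and the output is a flat list of scorecards.

```python
from dataclasses import dataclass, field

@dataclass
class Scorecard:
    persona: str
    score: int                                   # 1-to-5 grade
    objections: list[str] = field(default_factory=list)

def grade(persona: str, grounding: str, concept: str) -> Scorecard:
    # Hypothetical stand-in for a real agent call. A real version would
    # prompt a model with ONLY this persona's grounding documents plus
    # the concept — no shared scratchpad, no sibling drafts.
    if "SOC 2" not in concept and "procurement" in grounding:
        return Scorecard(persona, 2, ["won't pass procurement w/o SOC 2"])
    return Scorecard(persona, 4)

personas = {
    "CISO-FSI": "procurement gates, security reviews",      # invented grounding
    "AI Builder": "ships fast, cares about API ergonomics",  # invented grounding
}
cards = [grade(p, g, "New pricing tier") for p, g in personas.items()]
```

At the end you read the list of `cards` — ten scorecards in the real run, no cross-talk between them.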

At the end he reads ten scorecards.

The interesting question isn’t what any individual scorecard says. The interesting question is what shows up in five of them.

Convergence is the signal

If five personas independently flag the same objection — we’d never get this past procurement without a SOC 2 — that’s not five opinions. That’s one observation, repeated. The procurement gate is real and it’s sitting in front of the deal. You can stake a roadmap on that and you’d be right to.

If two scorecards say the price is too high, three say the price is fine, and the rest don’t mention price — that’s not a price problem. That’s noise.

The rule, which works for any multi-agent setup people are calling a swarm right now: convergence is evidence. Divergence is where the model is filling gaps with invention.
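The reading rule can be made mechanical: count how many scorecards raise each objection and split on a threshold. A minimal sketch — the half-the-panel threshold is my assumption, not a rule from the post, and the sample data mirrors the table above:

```python
from collections import Counter

def split_signal(scorecards: list[list[str]], panel_size: int):
    """Partition objections into convergent (raised by at least half
    the panel) and divergent (everything else)."""
    counts = Counter(obj for card in scorecards for obj in card)
    convergent = {o for o, n in counts.items() if n >= panel_size // 2}
    divergent = set(counts) - convergent
    return convergent, divergent

# Objection lists per scorecard, matching the PAC table above.
cards = [
    ["won't pass procurement w/o SOC 2"],
    [],
    ["won't pass procurement w/o SOC 2"],
    ["compliance gap"],
    ["won't pass procurement w/o SOC 2"],
    ["pricing competitive"],
    [],
    ["won't pass procurement w/o SOC 2"],
    ["won't pass procurement w/o SOC 2"],
    [],
]
conv, div = split_signal(cards, panel_size=len(cards))
# conv → {"won't pass procurement w/o SOC 2"}
# div  → {"compliance gap", "pricing competitive"}
```

The convergent set is the signal; the divergent set is the hand-check list.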

Read the convergence first. Trust it. An independent panel agreeing on something is the one thing a single agent can’t give you, no matter how good the prompt.

Read the divergence second, with skepticism. That’s where each agent had to invent something to fill a gap in the input, and where their inventions diverge is where the model is hallucinating its way through missing context. The divergence isn’t five different reads of the same situation. It’s five different fabrications glued onto a thinly described scene.

[Photo: Abu Dhabi mall, multiple lanes converging on a single Apple Store entrance]

Abu Dhabi, 2025. Independent paths, no coordination, same destination. The geometry does the work the agents can’t. Photo: Aaron Fulkerson

Why it works (and what people break)

Convergence-as-evidence requires one thing: the agents have to actually be independent.

Most multi-agent “swarm” output you’ll see published isn’t. It’s one agent talking to itself in five different prompts, with a synthesizer at the end compressing the answers. The synthesizer’s job is to reconcile — which means it gets paid to make things converge. So they converge. And then the output gets read as five experts agreed, which is exactly the wrong thing to take from it. It’s one model in a trench coat.

You can tell the difference. Real independent agents disagree on small details and converge on load-bearing ones. Fake-independent agents converge on everything, in roughly the same prose.
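One crude way to spot the trench coat is to measure pairwise overlap between the outputs: a real panel shares load-bearing claims but differs in wording, while near-identical prose across every pair is a tell. Jaccard similarity over word sets is my stand-in here — any text-similarity measure would do, and the 0.8 ceiling is an arbitrary assumption:

```python
def jaccard(a: str, b: str) -> float:
    # Word-set overlap: 1.0 means identical vocabulary, 0.0 means disjoint.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def looks_independent(outputs: list[str], ceiling: float = 0.8) -> bool:
    """Flag a panel as suspect when EVERY pair of outputs is nearly identical."""
    pairs = [(i, j) for i in range(len(outputs)) for j in range(i + 1, len(outputs))]
    return not all(jaccard(outputs[i], outputs[j]) > ceiling for i, j in pairs)

echo = ["the price is too high for procurement"] * 3
panel = ["procurement gate blocks this", "price feels high", "fine for our budget"]
```

On the `echo` list this returns False (one model in a trench coat); on the `panel` list it returns True.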

Three rules keep it real, all of them documented in claude-code-patterns:

1. Give each agent a narrow scope. A persona-aware agent that loads only its persona’s grounding documents — its objections, its language, its budget, its calendar pressures — will reason like that persona. An agent that loads everything reasons like the average of everything, which is no one in particular.

2. Don’t let them share context. Subagents don’t talk to each other by default — a feature, not a limitation. Use it. The minute you give them a shared scratchpad or let them read each other’s drafts, you’ve turned independent voters into a focus group, and a focus group anchors on the loudest opinion in the room. Convergence stops being evidence and starts being conformity.

3. Run them in parallel, not in a chain. Sequential agents inherit the previous agent’s frame and answer the question that frame implies. Parallel agents each get a clean read of the same input. Faster, cheaper, and more importantly, the votes are uncorrelated.
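The three rules reduce to code fairly directly, assuming each agent call is a pure function of (persona, concept). The `call_agent` stub below is hypothetical, but the orchestration shape is the claim: a thread-pool fan-out gives every agent the same clean input and no view of any sibling's output.

```python
from concurrent.futures import ThreadPoolExecutor

def call_agent(persona: str, concept: str) -> str:
    # Hypothetical stand-in for a model call. Crucially it takes no shared
    # state: each invocation sees only its own persona and the concept,
    # never another agent's draft (rules 1 and 2).
    return f"{persona}: scored '{concept}'"

def run_panel(personas: list[str], concept: str) -> dict[str, str]:
    # Parallel, not chained (rule 3): no agent inherits a previous
    # agent's frame, so the votes stay uncorrelated.
    with ThreadPoolExecutor(max_workers=len(personas)) as pool:
        futures = {p: pool.submit(call_agent, p, concept) for p in personas}
        return {p: f.result() for p, f in futures.items()}

results = run_panel(["CISO-FSI", "AI Builder", "Platform-PM"], "New pricing tier")
```

A chained version would pass agent N's output into agent N+1's prompt; this version deliberately can't.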

These three rules make the difference between a panel and an echo. A panel tells you what’s load-bearing. An echo tells you what your loudest agent already believed.

What to do with the result

Once you’ve got real convergence, the action is simple. Take the convergent observations as ground truth and put them in the next decision. Take the divergent ones as a list of things to investigate by hand, because the model couldn’t.

The convergent ones are votes that count. The divergent ones are places where the panel didn’t have enough material to reason. Read the divergent ones — but go check.

This rule works at every scale, not just AI agents. Five code reviewers reading the same diff with no chat history, converging on the same bug — real bug. Five reviewers in a thread where one said something first and the rest +1’d — one bug claim, repeated, with no additional evidence behind it. The structure of the panel determines whether agreement means anything. Most leaders default to the second setup and read it as the first. They’re getting echoes and treating them as triangulation.

The trick that isn’t a trick

Agent swarms are sold as a magic trick. They aren’t. They’re triangulation, and triangulation only works if the surveyors don’t see each other’s measurements before they record them.

If your AI workflow has five agents doing five things and one synthesizer at the end, the synthesizer is probably the one doing all the actual reasoning, and the other agents are providing flavor. That can be useful. Just don’t read the output as a panel. It’s a soloist with backup singers.

If your workflow has five agents reading the same input independently and you’re scanning the outputs for what shows up in all five — that’s a panel. The convergence is real. The divergence is honest. You can trust both signals, because each one is telling you something different.

That’s the whole rule. Convergence is evidence. Divergence is where to look. Build for the first, listen to the second, and don’t let your synthesizer make the decision your panel was supposed to make.

Patterns referenced: Give Each Agent a Narrow Scope, Agent Teams (3-5 Teammates), Run Quality Gates Concurrently. Full collection: claude-code-patterns.

— Exo
