# Demo 04 — Multi-Agent Research Pipeline with A2A

**Platform aspects**: A2A protocol, multi-agent orchestration, task streaming, human approval, `ai run --stream`
**Graph**: Neo4j Companies2 (queried by the researcher agent's Cypher tools; the analyst carries no graph tools of its own)
**Audience**: Platform architects; AI engineers; enterprise evaluators

---

## The Scenario

You want to build a **two-agent research pipeline**:

1. **Researcher agent** — specialises in deep company intelligence queries against the
   companies2 graph. Knows nothing about formatting or presentation.
2. **Analyst agent** — receives the researcher's raw findings, synthesises them into a
   structured investment memo. Uses A2A to delegate research sub-tasks back to the
   researcher when it needs more detail.

This demonstrates A2A as a first-class protocol — the analyst doesn't call the researcher
via a function; it submits tasks over HTTP using the Agent-to-Agent spec, streams results,
and can require human approval for irreversible actions (like publishing a memo).
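The wire format is deliberately simple: a task is a JSON body posted to `/a2a` with a bearer token, and results come back on the same connection or as an SSE stream. A minimal sketch of the request shape used throughout this demo (the `build_delegation_request` helper is hypothetical, not part of the platform, and nothing here touches the network):

```python
import json

# Hypothetical helper illustrating the A2A request shape used in this demo:
# POST <endpoint>/a2a with a JSON body of {"input": "..."} and a bearer token.
def build_delegation_request(endpoint: str, token: str, prompt: str) -> dict:
    return {
        "method": "POST",
        "url": f"{endpoint}/a2a",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"input": prompt}),
    }

req = build_delegation_request(
    "http://localhost:8082",
    "demo-token-researcher",
    "Research Microsoft Corporation: profile, subsidiaries, executives, news",
)
print(req["url"])  # http://localhost:8082/a2a
```

Because the contract is just HTTP plus JSON, any client (or any other agent) that can build this request can join the pipeline.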

---

## Architecture

```
Terminal / Web UI
      │
      │  POST /a2a
      ▼
┌─────────────────┐    POST /a2a     ┌─────────────────┐
│  Analyst Agent  │─────────────────►│ Researcher Agent│
│  :8080          │◄─────────────────│  :8082          │
│                 │  SSE stream      │                 │
│  Claude Sonnet  │                  │  Claude Sonnet  │
│  (synthesis)    │                  │  + 5 Cypher     │
│                 │                  │    tools        │
└─────────────────┘                  └─────────────────┘
```

---

## Prerequisites

```bash
export ANTHROPIC_API_KEY=sk-ant-...

# Agent auth tokens (arbitrary values; each agent's serve command and its clients must use the same token)
export RESEARCHER_TOKEN=demo-token-researcher
export ANALYST_TOKEN=demo-token-analyst

# Where the analyst finds the researcher (interpolated into the analyst's system prompt)
export RESEARCHER_ENDPOINT=http://localhost:8082

# Researcher agent credentials
export RESEARCHER_NEO4J_URI=neo4j+s://demo.neo4jlabs.com:7687
export RESEARCHER_NEO4J_USER=companies2
export RESEARCHER_NEO4J_PASS=companies2
export RESEARCHER_NEO4J_DB=companies2
```

---

## Step 1 — Set up the Researcher agent

```bash
ai init researcher-agent
cd researcher-agent
```

Configure as a single-purpose researcher with a focused system prompt:

```toml
# agent.toml
[agent]
name        = "company-researcher"
description = "Deep company intelligence: ownership graphs, executive networks, news"
system_prompt = """
You are a precise company research specialist. When asked to research a company:
1. Always retrieve the company's basic profile first
2. Get its subsidiaries and executive network
3. Retrieve recent news sentiment
4. Return structured JSON with all findings — no prose summaries
5. Include data provenance: record dates, source article URLs

You do NOT write memos or make investment recommendations. You only gather and
structure factual data from the knowledge graph.
"""

[agent.model]
provider = "anthropic"
model    = "claude-sonnet-4-6"
api_key  = "${ANTHROPIC_API_KEY}"

[agent.a2a]
enabled = true
port    = 8082                  # researcher runs on a non-default port

[agent.security]
require_auth = true             # pass token via --token flag at runtime
```

```bash
# still inside researcher-agent/
ai serve --port 8082 --token "${RESEARCHER_TOKEN}"
```

```
[mcp-toolbox]  ✓ Loaded 5 tools from companies2
[a2a-server]   Listening on :8082
[mcp-server]   Listening on :8083

Researcher agent ready.
agent-card: http://localhost:8082/.well-known/agent.json
```

---

## Step 2 — Set up the Analyst agent

In a second terminal, create a directory for the analyst:

```bash
mkdir analyst-agent && cd analyst-agent
```

The analyst's `agent.toml` wires in the researcher as a downstream A2A dependency via the `${RESEARCHER_ENDPOINT}` variable in its system prompt:

```toml
[agent]
name        = "investment-analyst"
description = "Investment memo synthesis: delegates research to specialist agents via A2A"
system_prompt = """
You are a senior investment analyst. When asked to produce an investment memo on a company:

1. Delegate data gathering to the company-researcher agent (available at ${RESEARCHER_ENDPOINT})
   via A2A: POST ${RESEARCHER_ENDPOINT}/a2a with {"input": "Research [company]: ..."}
2. Wait for the researcher's structured JSON response
3. Synthesise findings into a structured investment memo with sections:
   - Executive Summary
   - Business Overview (size, geography, subsidiaries)
   - Key People
   - Recent News Sentiment
   - Competitive Position
   - Key Risks
   - Conclusion
4. Flag any data gaps or low-confidence findings with [INCOMPLETE DATA]

Before publishing any memo, request human approval via the request_human_approval tool.
"""

[agent.model]
provider = "anthropic"
model    = "claude-sonnet-4-6"
api_key  = "${ANTHROPIC_API_KEY}"

[agent.tools]
allow_list = ["a2a_delegate", "request_human_approval", "publish_memo"]

[agent.a2a]
enabled = true
port    = 8080

[agent.security]
require_auth           = true
require_human_approval = true   # triggers approval flow before irreversible actions
```

```bash
# still inside analyst-agent/
ai serve --port 8080 --token "${ANALYST_TOKEN}"
```

```
[a2a-server]   Listening on :8080  (analyst)
[mcp-server]   Listening on :8081

Analyst agent ready. Human approval required for: publish_memo
```

---

## Step 3 — Submit a task to the analyst

```bash
export ANALYST_TOKEN=demo-token-analyst

ai run --endpoint http://localhost:8080 \
       --token "${ANALYST_TOKEN}" \
       "Prepare a full investment memo on Microsoft Corporation"
```

Watch the A2A task lifecycle in real time:

```
Task ID: task_01JQ4X7P8KZMN3VYRB2F
Status:  submitted → running

[analyst]  Delegating research to company-researcher via A2A...
           POST http://localhost:8082/a2a
           {"input": "Research Microsoft Corporation: profile, subsidiaries, executives, news"}

[researcher] Task received: task_01JQ4XA2...
[researcher] → find_company(name="Microsoft Corporation")
[researcher] → get_subsidiaries(name="Microsoft Corporation")
[researcher] → get_executives(name="Microsoft Corporation")
[researcher] → get_recent_news(name="Microsoft")
[researcher] ✓ Research complete — returning structured JSON (2,847 tokens)

[analyst]  Research received. Synthesising investment memo...
[analyst]  ⚠ Human approval required before publishing memo.

Status: awaiting_approval
```

---

## Step 4 — Human-in-the-loop approval

The analyst pauses and waits. You receive a notification (or poll the status endpoint):

```bash
# Check task status
curl -s http://localhost:8080/a2a/task_01JQ4X7P8KZMN3VYRB2F \
     -H "Authorization: Bearer ${ANALYST_TOKEN}" | jq .
```

```json
{
  "id": "task_01JQ4X7P8KZMN3VYRB2F",
  "status": "awaiting_approval",
  "approval_prompt": "The analyst has completed a draft memo on Microsoft Corporation.\nPlease review and approve publishing.\n\nDraft excerpt:\n> Microsoft Corporation is a Washington-based technology conglomerate...",
  "created_at": "2026-03-09T14:45:02Z",
  "updated_at": "2026-03-09T14:46:18Z"
}
```
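The wait-then-approve flow reduces to a small polling loop. This is a sketch rather than platform code: `fetch_status` is injected so the logic runs without a live agent, and the status values are the ones shown in this demo's lifecycle.

```python
# Poll a task until it pauses for approval (or reaches a terminal state).
# `fetch_status` stands in for a GET on /a2a/<task_id>.
def wait_for_state(fetch_status, task_id, target="awaiting_approval", max_polls=10):
    for _ in range(max_polls):
        task = fetch_status(task_id)
        if task["status"] in (target, "completed", "failed"):
            return task
    raise TimeoutError(f"task {task_id} is still running after {max_polls} polls")

# Simulate a task that needs two polls before pausing for approval:
states = iter(["running", "running", "awaiting_approval"])
task = wait_for_state(
    lambda tid: {"id": tid, "status": next(states)},
    "task_01JQ4X7P8KZMN3VYRB2F",
)
print(task["status"])  # awaiting_approval
```

A real client would sleep between polls, or skip polling entirely and rely on the notification.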

Review the draft and approve:

```bash
curl -s -X POST http://localhost:8080/a2a/task_01JQ4X7P8KZMN3VYRB2F/approve \
     -H "Authorization: Bearer ${ANALYST_TOKEN}" \
     -H "Content-Type: application/json" \
     -d '{"approved": true, "note": "LGTM — publish to research portal"}'
```

---

## Step 5 — Stream the final result

Alternatively, submit the same request with `--stream` to follow the whole lifecycle from one terminal, approving inline when prompted:
```bash
ai run --endpoint http://localhost:8080 \
       --token "${ANALYST_TOKEN}" \
       --stream \
       "Prepare a full investment memo on Microsoft Corporation"
```

With `--stream`, events arrive as Server-Sent Events and are printed as they come in:

```
event: progress
data: {"message": "Delegating to researcher agent..."}

event: progress
data: {"message": "Researcher returned 2,847 tokens of structured data"}

event: progress
data: {"message": "Drafting memo... (turn 3 of 10)"}

event: approval_required
data: {"prompt": "Approve publishing investment memo on Microsoft?"}

[Press Enter to approve, Ctrl+C to reject]
> approved

event: progress
data: {"message": "Publishing memo..."}

event: completed
data: {"result": "# Investment Memo: Microsoft Corporation\n\n## Executive Summary\n..."}

─────────────────────────────────────────────────
# Investment Memo: Microsoft Corporation

## Executive Summary
Microsoft Corporation (NASDAQ: MSFT) is a Redmond, Washington-based technology
conglomerate with approximately 228,000 employees and $211B in trailing twelve
months revenue...

## Business Overview
- **Headquarters**: Redmond, WA (with major offices in 190 countries)
- **Subsidiaries**: LinkedIn (900M+ members), GitHub (100M developers),
  Nuance Communications (healthcare AI), Activision Blizzard (gaming)
- **Cloud segment**: Azure — #2 cloud provider globally

[... full memo continues ...]
─────────────────────────────────────────────────
Task completed in 34.2s
Researcher: 4 tool calls, 2,847 tokens returned
Analyst: 3 turns, 4,891 tokens in / 1,240 out
Total cost: ~$0.017
```
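The framing above is standard SSE: `event:` and `data:` field lines, with a blank line terminating each event. A minimal parser sketch, simplified to the fields this demo emits (a real client would read the stream incrementally rather than from a string):

```python
import json

def parse_sse(raw: str):
    """Split an SSE transcript into (event_type, parsed_data) pairs."""
    events, event_type, data_lines = [], None, []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event_type = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and event_type:  # blank line ends the event
            events.append((event_type, json.loads("\n".join(data_lines))))
            event_type, data_lines = None, []
    return events

stream = (
    "event: progress\n"
    'data: {"message": "Delegating to researcher agent..."}\n'
    "\n"
    "event: completed\n"
    'data: {"result": "# Investment Memo: Microsoft Corporation..."}\n'
    "\n"
)
for name, payload in parse_sse(stream):
    print(name)  # progress, then completed
```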

---

## Step 6 — Inspect the A2A task history

```bash
# List all tasks submitted to the analyst agent
curl -s "http://localhost:8080/a2a/tasks?limit=5" \
     -H "Authorization: Bearer ${ANALYST_TOKEN}" | jq '.tasks[] | {id, status, created_at}'
```

```json
{"id": "task_01JQ4X7P8KZMN3VYRB2F", "status": "completed", "created_at": "2026-03-09T14:45:02Z"}
{"id": "task_01JQ4WQMN3VY...", "status": "completed", "created_at": "2026-03-09T13:12:44Z"}
```

---

## What you just demonstrated

- **A2A as first-class protocol** — agents talk to each other over HTTP, not function calls
- **Task streaming** — SSE events show progress in real time
- **Human-in-the-loop** — `awaiting_approval` state pauses execution for review
- **Specialisation** — researcher and analyst have narrow, well-defined responsibilities
- **Composability** — any A2A-compliant agent (external or internal) can join the pipeline
- **Full audit trail** — every task, tool call, and approval is logged with timestamps

The analyst's agent-card at `/.well-known/agent.json` advertises its capabilities,
letting other agents discover it automatically — no out-of-band coordination required.
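The card's exact schema isn't reproduced in this demo; assuming a card with `name` and `capabilities` fields in the spirit of the A2A spec, a caller could gate streaming on it like this (the field names and the `supports_streaming` helper are assumptions):

```python
# Hypothetical capability check against a discovered agent card.
# Field names ("capabilities", "streaming") are assumed, not confirmed by this demo.
def supports_streaming(card: dict) -> bool:
    return bool(card.get("capabilities", {}).get("streaming", False))

card = {  # shape a GET on /.well-known/agent.json might return
    "name": "investment-analyst",
    "description": "Investment memo synthesis",
    "capabilities": {"streaming": True},
}
print(supports_streaming(card))  # True
```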

Next: [Demo 05 — Deploy to Fly.io and Integrate with Claude Desktop](demo-05-deploy.md)

---

## Appendix — Fixture files

```bash
# Quick-start with pre-built fixtures (researcher + analyst configs):
curl -fsSL https://agent-intelligence.ai/downloads/demo-04-fixtures.tar.gz | tar xz
# Then start researcher and analyst as shown in the walkthrough above
```

Fixture files: [`fixtures/demo-04/researcher/agent.toml`](fixtures/demo-04/researcher/agent.toml),
[`fixtures/demo-04/researcher/toolbox.yaml`](fixtures/demo-04/researcher/toolbox.yaml),
[`fixtures/demo-04/analyst/agent.toml`](fixtures/demo-04/analyst/agent.toml)

---

## Appendix — Running in web UI

```bash
ai web --a2a http://localhost:8080 --token "${ANALYST_TOKEN}" --debug
```

The web console shows each A2A delegation as a nested tool call in the debug panel,
with latency and token cost for both the analyst and researcher agents.
