CLI Reference

ai ships as a single statically linked Go binary (<25 MB, <200 ms cold start). All commands read agent.toml from the current directory by default.

Installation

One-line install (macOS & Linux)

curl -fsSL https://agent-intelligence.ai/install.sh | sh

Go install

go install github.com/agent-intelligence-ai/agent-intelligence/cmd/ai@latest

Homebrew

brew install agent-intelligence-ai/tap/ai

Global Flags

Global flags go before the subcommand name and apply to every command.

ai [global flags] <subcommand> [subcommand flags] [args]
Flag          Type / Default        Description
--config      string / agent.toml   Path to the agent config file
--verbose     bool / false          Enable verbose output (tool call traces, debug logs)
--quiet, -q   bool / false          Suppress non-essential output; errors only (useful for scripting / CI)
--no-color    bool / false          Disable ANSI color (useful for CI / piped output)

ai init

Run the onboarding interview and generate agent.toml + toolbox.yaml.

ai init [flags] [intent]
Flag      Type / Default   Description
--no-ai   bool / false     Skip the Claude interview; prompt for each field manually

The optional intent string describes what the agent should do. Claude uses it to scaffold a relevant config, toolbox.yaml, and system prompt.

Basic init with intent

$ ai init "a research agent for my companies database"
✓ scaffolded: my-research-agent/
✓ intent detected → research agent
✓ config generated: agent.toml
✓ toolbox.yaml generated

Manual init (no AI interview)

$ ai init --no-ai
? agent name: my-agent
? description: researches companies
✓ config generated: agent.toml

ai show

Display agent.toml with syntax highlighting, or show the full config schema.

ai show [flags] [file]
Flag     Type / Default   Description
--spec   bool / false     Show the full config schema with all fields, types, and defaults
--json   bool / false     Output as JSON instead of colored TOML

Show current config

$ ai show
─── agent.toml ──────────────────────────────
[agent]
name = "my-research-agent"
[agent.model]
provider = "anthropic"
model = "claude-sonnet-4-6"
[agent.a2a]
enabled = true
port = 8080

Show config schema

$ ai show --spec
agent.name string required
agent.model.provider string "anthropic"|"openai-compat"
agent.budget.max_turns int default: 10
agent.a2a.port int default: 8080
...

ai serve

Start the agent runtime server (A2A + MCP) and all managed sidecars.

ai serve [flags]
Flag     Type / Default   Description
--port   int / 8080       Port for the A2A + HTTP server

Process           Port          Description
A2A server        :8080         POST /a2a, POST /a2a/stream, GET /a2a/tasks/{id}
MCP server        :8081         tools/list, tools/call, prompts/list (if enabled)
genai-toolbox     :15000        MCP tool server subprocess
CypherMCP         :15001        Custom Neo4j Cypher MCP server subprocess
Python sidecars   :8090–:8093   GraphRAG, graph construction, memory, eval

Default serve

$ ai serve
✓ toolbox: 14 tools loaded
✓ A2A → :8080
✓ MCP → :8081
ready — http://localhost:8080

Custom port

$ ai serve --port 9090
ready — http://localhost:9090
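
With the server up, tasks can also be submitted straight to the A2A endpoint over HTTP. The exact request schema is not documented in this reference, so the payload field names below are assumptions for illustration only; validate your JSON before sending it:

```shell
# Payload shape is an assumption, not the documented A2A schema.
payload='{"task":{"input":"top 5 companies by revenue"}}'

# Sanity-check the JSON locally.
echo "$payload" | python3 -m json.tool > /dev/null && echo "valid JSON"

# Then, against the server started by `ai serve`:
# curl -s -X POST http://localhost:8080/a2a \
#      -H 'Content-Type: application/json' -d "$payload"
```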

ai run

Submit a task to an agent and stream the response to stdout.

ai run [flags] <task>
Flag        Type / Default            Description
--agent     string / localhost:8080   Target A2A endpoint URL
--verbose   bool / false              Show the tool call trace (name, input, result)

Simple query

$ ai run "top 5 companies by revenue"
1. Apple Inc. — $394B
2. Amazon — $513B
3. Alphabet — $307B
4. Microsoft — $211B
5. Meta — $134B

Remote agent + verbose tool trace

$ ai run --agent http://remote:8080 --verbose "find Acme competitors"
[tool] company_lookup({"query":"Acme Corp"})
[tool] graphrag_search({"query":"competitors similar to Acme"})
Acme's top competitors: WidgetCo, TechBase, BuildIt

ai deploy

Deploy the agent runtime to a cloud provider (Fly.io or Cloud Run).

ai deploy [flags]
ai deploy status
ai deploy logs
Flag       Type / Default   Description
--target   string / fly     Deployment target: fly or cloudrun
--domain   string / —       Custom domain override

Command   Description
status    Show deployment health and instance count
logs      Stream deployment logs to stdout

Deploy to Fly.io

$ ai deploy
✓ built Go binary (4.2MB)
✓ deployed → https://my-research-agent.agent-intelligence.ai
✓ A2A + MCP endpoints live

Check status & stream logs

$ ai deploy status
✓ running · 2 instances · :8080
$ ai deploy logs
2026-03-08T22:01:15Z INFO agent=companies-researcher turn=1 tokens=1243
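
The log lines are key=value structured, which makes them easy to post-process. A sketch that pulls token counts out of a stream, run here against the sample line above rather than a live `ai deploy logs` pipe:

```shell
line='2026-03-08T22:01:15Z INFO agent=companies-researcher turn=1 tokens=1243'

# In practice: ai deploy logs | awk '...'
echo "$line" | awk '{
  for (i = 1; i <= NF; i++)
    if ($i ~ /^tokens=/) { sub(/^tokens=/, "", $i); print $i }
}'
# prints 1243
```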

ai graph

Manage the knowledge graph backend — connect, introspect, build, and promote.

ai graph <subcommand> [flags]
Command         Description
connect <uri>   Validate a Neo4j URI and print node/relationship counts
introspect      Output structured JSON schema (node labels, rel types, cardinality)
init            Create a local Kuzu database in .agint/graph/
build <path>    Ingest documents into the graph (starts the construction sidecar)
promote         Export the local Kuzu graph to Neo4j Aura

Connect and introspect

$ ai graph connect neo4j+s://demo.neo4jlabs.com:7687
✓ connected · 47,821 nodes · 89,302 rels · 12 labels
$ ai graph introspect
{"nodes":["Company","Person","Article"],"rels":["WORKS_AT","MENTIONS"]}
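Because introspect emits JSON, its output pipes cleanly into other tools. A sketch that lists node labels, shown with the sample output above standing in for a live call:

```shell
schema='{"nodes":["Company","Person","Article"],"rels":["WORKS_AT","MENTIONS"]}'

# In practice: ai graph introspect | python3 -c '...'
echo "$schema" | python3 -c 'import json, sys; print(" ".join(json.load(sys.stdin)["nodes"]))'
# prints: Company Person Article
```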

Build from documents

$ ai graph build ./docs/
ingesting 47 documents...
✓ 47 docs → 892 chunks → 891 nodes · 1,203 rels

ai eval

Run evaluations against agents and produce quality / cost reports.

ai eval <subcommand> [flags]
Command       Description
run           Run LLM-as-judge evaluation against a YAML dataset (--dataset)
report        Generate a markdown summary of evaluation results
cost-report   Show per-agent cost breakdown from execution trace data

Evaluations run at three levels: structural checks (no LLM, on every commit), LLM-as-judge (on each PR), and trace replay (weekly, for regression detection).

$ ai eval run --dataset eval/qa-pairs.yaml
running 12 evaluations...
✓ 10 passed · 2 failed · avg score 4.2 / 5.0
$ ai eval cost-report
my-research-agent avg $0.12/session · 14 sessions · $1.68 total
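
The cost-report total is just avg × sessions, so it can be recomputed or cross-checked in a one-liner. The field positions below match the sample line above and may differ in other versions of the output:

```shell
line='my-research-agent avg $0.12/session · 14 sessions · $1.68 total'

# Strip the $ signs, then multiply the per-session average ($3) by the
# session count ($5); awk coerces "0.12/session" to 0.12.
echo "$line" | awk '{gsub(/\$/, ""); printf "%.2f\n", $3 * $5}'
# prints 1.68
```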

ai skill

Manage reusable agent skills — prompt fragments bundled with tool requirements.

ai skill <subcommand> [flags]
Command         Description
add <ref>       Download and register a skill from a local path or GitHub ref
list            Show installed skills with name, version, and description
remove <name>   Unregister a skill

$ ai skill add github.com/agent-intelligence-ai/skills/graph-search
✓ installed graph-search v1.0.0
$ ai skill list
graph-search 1.0.0 Search the knowledge graph using hybrid retrieval
memory-recall 0.9.1 Cross-session entity and context recall
$ ai skill remove graph-search
✓ removed graph-search

ai sidecar

Manage Python sidecar services (GraphRAG retrieval, graph construction, memory, evaluation).

ai sidecar <subcommand> [flags]
Command   Description
install   Download and install all Python sidecar packages into the managed venv
status    Show running/stopped/error state for each sidecar service

Port    Service
:8090   Graph construction (document ingestion pipeline)
:8091   GraphRAG retrieval (HybridCypherRetriever)
:8092   Agent memory service
:8093   Evaluation bridge (Opik / Arize)

$ ai sidecar install
✓ graphrag-retrieval installed
✓ graph-construction installed
✓ agent-memory installed
✓ eval-bridge installed
$ ai sidecar status
graphrag-retrieval :8091 ✓ running
graph-construction :8090 ✓ running
agent-memory :8092 ✓ running
eval-bridge :8093 ✗ stopped
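
The status output is line-oriented, so a CI health gate is a grep away. The sketch below runs against the sample output above rather than a live `ai sidecar status` call:

```shell
status='graphrag-retrieval :8091 ✓ running
eval-bridge :8093 ✗ stopped'

# In practice: status="$(ai sidecar status)"
if printf '%s\n' "$status" | grep -q '✗'; then
  echo "some sidecars are down"
fi
# prints: some sidecars are down
```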

ai web

Start a local web console for agentic chat with an optional debug panel (tool calls, token usage, OTel spans).

ai web [flags]
Flag        Type / Default   Description
--port      int / 8888       Port for the web console server
--no-open   bool / false     Don't open the browser automatically
--debug     bool / false     Show the debug panel by default (tool calls, token usage, traces)

Panel           Content
Chat            Conversation history with streaming responses, dark terminal theme
Debug           Tool calls (name + input/output), token counters, context utilization bar, OTel spans
System prompt   Current resolved system prompt (base + skill fragments)

The web console connects to the agent started by ai serve. Run the two in separate terminals, or background the server first: ai serve & ai web. (Note that ai serve blocks, so chaining with && would never start the console.)

$ ai web
✓ web console → http://localhost:8888
opening browser...
$ ai web --debug --port 9999
✓ web console → http://localhost:9999 (debug mode)
opening browser...