CLI Reference
The ai binary is a single statically-linked Go binary (<25 MB, <200 ms cold start).
All commands read from agent.toml in the current directory by default.
Installation
One-line install (macOS & Linux)
curl -fsSL https://agent-intelligence.ai/install.sh | sh
Go install
go install github.com/agent-intelligence-ai/agent-intelligence/cmd/ai@latest
Homebrew
brew install agent-intelligence-ai/tap/ai
Global Flags
Global flags are placed before the subcommand name and apply to every command.
ai [global flags] <subcommand> [subcommand flags] [args]
| Flag | Type / Default | Description |
|---|---|---|
| --config | string / agent.toml | Path to the agent config file |
| --verbose | bool / false | Enable verbose output (tool call traces, debug logs) |
| --quiet, -q | bool / false | Suppress non-essential output; errors only (useful for scripting / CI) |
| --no-color | bool / false | Disable ANSI color (useful for CI / piped output) |
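Combining the global flags above, a scripted invocation might look like this (the config path and task are illustrative):

```shell
# Use an alternate config, suppress non-essential output for CI
ai --config ./prod/agent.toml --quiet run "nightly summary"

# Verbose tool-call traces without ANSI color (piped output)
ai --verbose --no-color run "nightly summary" | tee run.log
```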
ai init
Run the onboarding interview and generate agent.toml + toolbox.yaml.
ai init [flags] [intent]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --no-ai | bool / false | Skip Claude interview; prompt each field manually |
The optional intent string describes what the agent should do. Claude uses it to scaffold a relevant config, toolbox.yaml, and system prompt.
Basic init with intent
Manual init (no AI interview)
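Based on the flags above, the two labelled examples might look like this (the intent string is illustrative):

```shell
# Basic init with intent: Claude scaffolds agent.toml + toolbox.yaml
ai init "answer support questions from our Neo4j knowledge graph"

# Manual init: no AI interview, prompt for each field
ai init --no-ai
```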
ai show
Display agent.toml with syntax highlighting, or show the full config schema.
ai show [flags] [file]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --spec | bool / false | Show full config schema with all fields, types, and defaults |
| --json | bool / false | Output as JSON instead of colored TOML |
Show current config
Show config schema
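Using the flags above, the two labelled examples would be:

```shell
ai show          # agent.toml with syntax highlighting
ai show --spec   # full config schema: fields, types, defaults
ai show --json   # JSON output instead of colored TOML
```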
ai serve
Start the agent runtime server (A2A + MCP) and all managed sidecars.
ai serve [flags]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --port | int / 8080 | Port for the A2A + HTTP server |
What it starts
| Process | Port | Description |
|---|---|---|
| A2A server | :8080 | POST /a2a, POST /a2a/stream, GET /a2a/tasks/{id} |
| MCP server | :8081 | tools/list, tools/call, prompts/list (if enabled) |
| genai-toolbox | :15000 | MCP tool server subprocess |
| CypherMCP | :15001 | Custom Neo4j Cypher MCP server subprocess |
| Python sidecars | :8090–:8093 | GraphRAG, graph construction, memory, eval |
Default serve
Custom port
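The two labelled examples, using the default port and the `--port` flag from the table above:

```shell
ai serve              # A2A on :8080, MCP on :8081, sidecars on :8090-:8093
ai serve --port 9090  # move the A2A + HTTP server to :9090
```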
ai run
Submit a task to an agent and stream the response to stdout.
ai run [flags] <task>
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --agent | string / localhost:8080 | Target A2A endpoint URL |
| --verbose | bool / false | Show tool call trace (name, input, result) |
Simple query
Remote agent + verbose tool trace
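The two labelled examples might look like this (the tasks and the remote endpoint are illustrative):

```shell
# Simple query against the default local agent on :8080
ai run "what changed in the last release?"

# Remote A2A endpoint with a tool-call trace
ai run --agent https://agent.example.com --verbose "summarize open tickets"
```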
ai deploy
Deploy the agent runtime to a cloud provider (Fly.io or Cloud Run).
ai deploy [flags]
ai deploy status
ai deploy logs
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --target | string / fly | Deployment target: fly or cloudrun |
| --domain | string / — | Custom domain override |
Subcommands
| Command | Description |
|---|---|
| status | Show deployment health and instance count |
| logs | Stream deployment logs to stdout |
Deploy to Fly.io
Check status & stream logs
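The two labelled examples, using the flags and subcommands above:

```shell
# Deploy to Fly.io (the default target)
ai deploy --target fly

# Check deployment health, then stream logs to stdout
ai deploy status
ai deploy logs
```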
ai graph
Manage the knowledge graph backend — connect, introspect, build, and promote.
ai graph <subcommand> [flags]
Subcommands
| Command | Description |
|---|---|
| connect <uri> | Validate a Neo4j URI and print node/relationship counts |
| introspect | Output structured JSON schema (node labels, rel types, cardinality) |
| init | Create a local Kuzu database in .agint/graph/ |
| build <path> | Ingest documents into the graph (starts construction sidecar) |
| promote | Export local Kuzu graph to Neo4j Aura |
Connect and introspect
Build from documents
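The two labelled examples might look like this (the Neo4j URI and document path are illustrative):

```shell
# Validate a Neo4j URI and print node/relationship counts, then dump the schema
ai graph connect neo4j+s://example.databases.neo4j.io
ai graph introspect

# Create a local Kuzu database in .agint/graph/, then ingest documents
ai graph init
ai graph build ./docs
```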
ai eval
Run evaluations against agents and produce quality / cost reports.
ai eval <subcommand> [flags]
Subcommands
| Command | Description |
|---|---|
| run | Run LLM-as-judge evaluation against a YAML dataset (--dataset) |
| report | Generate a markdown summary of evaluation results |
| cost-report | Show per-agent cost breakdown from execution trace data |
Evaluations run at three levels: structural checks (no LLM calls, on every commit), LLM-as-judge (on each PR), and trace replay (weekly, for regression detection).
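A typical sequence, using the subcommands above (the dataset path is illustrative):

```shell
# LLM-as-judge evaluation against a YAML dataset
ai eval run --dataset evals/quality.yaml

# Markdown summary of the results
ai eval report

# Per-agent cost breakdown from execution trace data
ai eval cost-report
```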
ai skill
Manage reusable agent skills — prompt fragments bundled with tool requirements.
ai skill <subcommand> [flags]
Subcommands
| Command | Description |
|---|---|
| add <ref> | Download and register a skill from a local path or GitHub ref |
| list | Show installed skills with name, version, and description |
| remove <name> | Unregister a skill |
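A typical lifecycle, using the subcommands above (the GitHub ref and skill name are illustrative):

```shell
ai skill add github.com/agent-intelligence-ai/skills/summarize
ai skill list              # name, version, description
ai skill remove summarize  # unregister by name
```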
ai sidecar
Manage Python sidecar services (GraphRAG retrieval, graph construction, memory, evaluation).
ai sidecar <subcommand> [flags]
Subcommands
| Command | Description |
|---|---|
| install | Download and install all Python sidecar packages into the managed venv |
| status | Show running/stopped/error state for each sidecar service |
Services
| Port | Service |
|---|---|
| :8090 | Graph construction (document ingestion pipeline) |
| :8091 | GraphRAG retrieval (HybridCypherRetriever) |
| :8092 | Agent memory service |
| :8093 | Evaluation bridge (Opik / Arize) |
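Setting up and checking the sidecars above:

```shell
ai sidecar install  # populate the managed venv with all sidecar packages
ai sidecar status   # running/stopped/error state for :8090-:8093
```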
ai web
Start a local web console for agentic chat with an optional debug panel (tool calls, token usage, OTel spans).
ai web [flags]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --port | int / 8888 | Port for the web console server |
| --no-open | bool / false | Don't open the browser automatically |
| --debug | bool / false | Show debug panel by default (tool calls, token usage, traces) |
Web console features
| Panel | Content |
|---|---|
| Chat | Conversation history with streaming responses, dark terminal theme |
| Debug | Tool calls (name + input/output), token counters, context utilization bar, OTel spans |
| System prompt | Current resolved system prompt (base + skill fragments) |
The web console connects to the agent running via ai serve. Run the two commands in separate terminals, or background the server with ai serve & ai web (note that ai serve && ai web would wait for the server to exit before starting the console).
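For example, from two terminals, using the default ports documented above:

```shell
# Terminal 1: start the agent runtime (A2A on :8080)
ai serve

# Terminal 2: open the web console on :8888 with the debug panel shown
ai web --debug
```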