Introduction

Open-Source AI Agent CLI
InitRunner is an open-source CLI that turns a YAML file into a complete AI agent. Define the model, tools, and behavior in a single role.yaml — InitRunner handles the rest: tool execution, guardrails, memory, RAG, and multi-provider routing. No framework to learn, no boilerplate to write.
LLM-friendly docs — This documentation is also available as /llms.txt and /llms-full.txt for LLM consumption.
Key Features
Define
- YAML-first — Agents are defined with a Kubernetes-style `apiVersion/kind/metadata/spec` schema. Check them into git, diff them in PRs, deploy them anywhere.
- Multi-provider — OpenAI, Anthropic, Google, Groq, Mistral, Cohere, xAI, Bedrock, and Ollama. Swap providers by changing one line.
- 27 tool types — Filesystem, HTTP, MCP, shell, SQL, custom Python, audio, web reader, and more. Add them to your YAML and they just work.
- Multimodal input — Attach images, audio, video, and documents to prompts via CLI, REPL, API, or dashboard. See Multimodal.
- Skills — Bundled tool+prompt packages that agents load on demand. Think plugins, but defined in YAML. See Skills.
- Structured output — Type-safe responses with JSON Schema validation. See Structured Output.
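A minimal `role.yaml` using the Kubernetes-style schema described above might look like the following sketch. The `apiVersion/kind/metadata/spec` layout matches the doc; the field names inside `spec` (`provider`, `model`, `instructions`, `tools`) are illustrative assumptions, not a schema reference:

```yaml
# Hypothetical sketch of a minimal agent definition.
# The field names inside spec are assumptions for illustration —
# see the Configuration reference for the real schema.
apiVersion: initrunner.ai/v1
kind: Role
metadata:
  name: docs-helper
spec:
  provider: openai        # swap providers by changing this line
  model: gpt-4o-mini
  instructions: |
    You answer questions about the project's documentation.
  tools:
    - type: filesystem
    - type: http
```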
Chat
- Zero-config chat — Run `initrunner run` with no YAML file. Auto-detects your API key and starts an interactive session.
- CLI-driven RAG — Add `--ingest ./docs/` to search your documents directly from the command line.
- Tool profiles — Use `--tool-profile all` to enable every built-in tool, or `--tools git --tools shell` to cherry-pick.
- Memory flags — `--memory` (default), `--no-memory`, and `--resume` control chat memory from the CLI.
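Putting the chat flags above together, a typical session might start like this (the `./docs/` path is just an example):

```shell
# Zero-config chat: auto-detects your API key
initrunner run

# Chat with RAG over local documents and a couple of cherry-picked tools
initrunner run --ingest ./docs/ --tools git --tools shell

# Resume a previous conversation (memory is on by default)
initrunner run --resume
```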
Remember
- Built-in RAG — Ingest documents, chunk, embed, and vector-search with LanceDB. No external database required. In chat mode, just add `--ingest ./docs/`.
- Memory — Three types: semantic, episodic, and procedural. Auto-consolidation distills episodes into durable facts. On by default in chat mode.
Automate
- Triggers — Run agents on a cron schedule, file change, incoming webhook, or as a Telegram/Discord bot. Daemon mode included.
- Team mode — Define multiple personas in one YAML for sequential multi-agent collaboration.
- Multi-agent flow — Orchestrate multiple agents with delegate sinks and startup ordering.
- Autonomy — Plan-execute-adapt loops that let agents work through multi-step tasks independently.
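A trigger such as the cron schedule or webhook mentioned above could be sketched in the same YAML roughly like this; the field names here are hypothetical, so treat it as a shape, not the real schema:

```yaml
# Hypothetical trigger block: runs the agent every morning at 09:00
# and on any incoming webhook. Field names are illustrative only.
spec:
  triggers:
    - type: cron
      schedule: "0 9 * * *"
    - type: webhook
      path: /hooks/run
```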
Ship
- API server — `initrunner run --serve` exposes any agent as an OpenAI-compatible API with streaming.
- Web dashboard + desktop app — Build agents, watch runs in real time, and browse audit logs from a browser or native window.
- One-click cloud deploy — Deploy to Railway, Render, or Fly.io with pre-loaded example roles and persistent storage.
- Guardrails & audit — Token budgets, tool limits, content filtering, PII redaction, and full action logging to SQLite.
- MCP gateway — Expose agents as MCP servers for integration with other tools. Includes an MCP Hub dashboard for server discovery, health monitoring, and tool testing. See MCP Gateway.
- OCI distribution — Package and distribute agents as OCI artifacts. See OCI Distribution.
- Evals & testing — Test agents against expected outputs and score them automatically. See Evals.
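Because `initrunner run --serve` speaks the OpenAI-compatible chat-completions protocol, any OpenAI client or plain HTTP can talk to it. A request sketch against a running server (the port and model name are placeholder assumptions):

```shell
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-agent",
    "messages": [{"role": "user", "content": "Summarize the latest audit log."}],
    "stream": false
  }'
```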
Quick Install
curl -fsSL https://initrunner.ai/install.sh | sh
Or with a package manager:
uv tool install "initrunner[recommended]"
pipx install "initrunner[recommended]"
pip install "initrunner[recommended]"
Or run with Docker:
docker run --rm -e OPENAI_API_KEY vladkesler/initrunner:latest --version
Next Steps
- Quickstart — Get your first agent running in minutes
- Concepts & Architecture — High-level mental model and execution lifecycle
- Configuration — Full YAML schema reference
- Providers — Provider setup and model configuration
- Tools — All built-in tool types
- Examples — Complete, runnable agents for common use cases
- Troubleshooting & FAQ — Common issues and solutions
All topics are in the sidebar.