Introduction

Open-Source AI Agent CLI
InitRunner is an open-source CLI that turns a YAML file into a complete AI agent. Define the model, tools, and behavior in a single role.yaml — InitRunner handles the rest: tool execution, guardrails, memory, RAG, and multi-provider routing. No framework to learn, no boilerplate to write.
LLM-friendly docs — This documentation is also available as /llms.txt and /llms-full.txt for LLM consumption.
Key Features
Define
- YAML-first — Declare agents with a Kubernetes-style `apiVersion`/`kind`/`metadata`/`spec` schema. Readable, portable, version-controllable.
- Multi-provider — OpenAI, Anthropic, Google, Groq, Mistral, and Ollama. Swap providers by changing one line.
- 18 tool types — Filesystem, HTTP, MCP, shell, SQL, custom Python, audio, web reader, and more. Give agents the capabilities they need.
- Multimodal input — Attach images, audio, video, and documents to prompts via CLI, REPL, API, or dashboard. See Multimodal.
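As a rough illustration of the Kubernetes-style schema, a minimal role file might look like the sketch below. Only the `apiVersion`/`kind`/`metadata`/`spec` top-level keys are confirmed by this page; the concrete values and the field names under `spec` are illustrative assumptions — see the Configuration reference for the real schema.

```yaml
# role.yaml — illustrative sketch; field names under spec are assumptions
apiVersion: initrunner/v1   # assumed version string
kind: Agent                 # assumed kind name
metadata:
  name: docs-helper
spec:
  model: gpt-4o             # any supported provider/model
  tools:
    - type: http            # e.g. filesystem, http, mcp, shell, sql, ...
  systemPrompt: |
    You answer questions about the project documentation.
```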
Chat
- Zero-config chat — Run `initrunner chat` with no YAML file. Auto-detects your API key and starts an interactive session.
- CLI-driven RAG — Add `--ingest ./docs/` to search your documents directly from the command line.
- Tool profiles — Use `--tool-profile all` to enable every built-in tool, or `--tools git --tools shell` to cherry-pick.
- Memory flags — `--memory` (default), `--no-memory`, and `--resume` control chat memory from the CLI.
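Putting those flags together, a few typical invocations (the `./docs/` path is a placeholder):

```shell
# Zero-config interactive chat — auto-detects your API key
initrunner chat

# Chat over your own documents with CLI-driven RAG
initrunner chat --ingest ./docs/

# Cherry-pick tools and skip memory for a throwaway session
initrunner chat --tools git --tools shell --no-memory
```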
Remember
- Built-in RAG — Ingest documents, chunk, embed, and vector-search with Zvec. No external database required. In chat mode, just add `--ingest ./docs/`.
- Memory — Three types: semantic, episodic, and procedural. Auto-consolidation distills episodes into durable facts. On by default in chat mode.
Automate
- Triggers — Run agents on a cron schedule, file change, incoming webhook, or as a Telegram/Discord bot. Daemon mode included.
- Team mode — Define multiple personas in one YAML for sequential multi-agent collaboration.
- Multi-agent compose — Orchestrate multiple agents with delegate sinks and startup ordering.
- Autonomy — Plan-execute-adapt loops that let agents work through multi-step tasks independently.
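A trigger declaration in the role file might look like the following sketch. The `triggers` key and its fields are assumptions for illustration only — the Triggers page documents the actual schema.

```yaml
# Illustrative sketch — key and field names are assumptions
spec:
  triggers:
    - type: cron            # run on a schedule
      schedule: "0 9 * * *" # every day at 09:00
    - type: webhook         # run on an incoming HTTP request
```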
Ship
- API server — `initrunner serve` exposes any agent as an OpenAI-compatible API with streaming.
- TUI + Web dashboard — Monitor, inspect, and interact with agents visually.
- One-click cloud deploy — Deploy to Railway, Render, or Fly.io with pre-loaded example roles and persistent storage.
- Guardrails & audit — Token budgets, tool limits, content filtering, PII redaction, and full action logging to SQLite.
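Because `initrunner serve` speaks the OpenAI wire format, any OpenAI-compatible client can talk to it. A sketch with `curl`, assuming a locally running server — the port and model name here are assumptions; see the API Server page for the actual defaults:

```shell
# /v1/chat/completions is the standard OpenAI-compatible endpoint;
# localhost:8000 and "my-agent" are placeholder assumptions
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-agent", "messages": [{"role": "user", "content": "Hello"}]}'
```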
Quick Install
```shell
pip install initrunner
```

Or use the install script:

```shell
curl -fsSL https://initrunner.ai/install.sh | sh
```

Or run with Docker:

```shell
docker run --rm -e OPENAI_API_KEY vladkesler/initrunner:latest --version
```

Next Steps
- Quickstart — Get your first agent running in minutes
- Concepts & Architecture — High-level mental model, diagrams, and execution lifecycle
- Examples — Complete, runnable agents for common use cases
- Installation — All install methods, extras, and platform notes
- Configuration — Full YAML schema reference
- Providers — Provider setup and model configuration
- Tools — All 18 tool types
- Memory — Session persistence and long-term memory (semantic, episodic, procedural)
- Ingestion — Document ingestion and RAG
- Chat — Zero-config chat, role-based REPL, and one-command bot launching
- Telegram Bot — Get a Telegram bot agent running in three steps
- Discord Bot — Get a Discord bot agent running in five steps
- Triggers — Cron, file watch, webhook, Telegram, and Discord triggers
- Autonomy — Autonomous plan-execute-adapt loops
- Guardrails — Token budgets, tool limits, and automatic enforcement
- CLI — Complete CLI reference
- Security — Security hardening guide
- Team Mode — Single-file multi-persona collaboration
- Compose — Multi-agent orchestration
- Multimodal Input — Attach images, audio, video, and documents to prompts
- API Server — OpenAI-compatible HTTP API
- Cloud Deploy — One-click deployment to Railway, Render, and Fly.io
- Troubleshooting & FAQ — Common issues and frequently asked questions