InitRunner

Introduction

Open-Source AI Agent CLI

InitRunner is an open-source CLI that turns a YAML file into a complete AI agent. Define the model, tools, and behavior in a single role.yaml — InitRunner handles the rest: tool execution, guardrails, memory, RAG, and multi-provider routing. No framework to learn, no boilerplate to write.
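To make this concrete, a minimal role.yaml might look like the sketch below. The apiVersion/kind/metadata/spec shape follows the Kubernetes-style schema described in this page, but the specific keys inside spec (model, instructions, tools) are illustrative assumptions — consult the schema reference for the exact field names.

```yaml
# Hypothetical minimal agent definition. The spec keys below are
# illustrative assumptions, not guaranteed schema fields.
apiVersion: initrunner/v1   # assumed version string
kind: Agent                 # assumed kind name
metadata:
  name: docs-helper
spec:
  model: openai/gpt-4o      # swap providers by changing this one line
  instructions: |
    You answer questions about the project documentation.
  tools:
    - type: http            # one of the built-in tool types
```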

LLM-friendly docs — This documentation is also available as /llms.txt and /llms-full.txt for LLM consumption.

Key Features

Define

  • YAML-first — Agents are defined with a Kubernetes-style apiVersion/kind/metadata/spec schema. Check them into git, diff them in PRs, deploy them anywhere.
  • Multi-provider — OpenAI, Anthropic, Google, Groq, Mistral, Cohere, xAI, Bedrock, and Ollama. Swap providers by changing one line.
  • 27 tool types — Filesystem, HTTP, MCP, shell, SQL, custom Python, audio, web reader, and more. Add them to your YAML and they just work.
  • Multimodal input — Attach images, audio, video, and documents to prompts via CLI, REPL, API, or dashboard. See Multimodal.
  • Skills — Bundled tool+prompt packages that agents load on demand. Think plugins, but defined in YAML. See Skills.
  • Structured output — Type-safe responses with JSON Schema validation. See Structured Output.

Chat

  • Zero-config chat — Run initrunner run with no YAML file. Auto-detects your API key and starts an interactive session.
  • CLI-driven RAG — Add --ingest ./docs/ to search your documents directly from the command line.
  • Tool profiles — Use --tool-profile all to enable every built-in tool, or --tools git --tools shell to cherry-pick.
  • Memory flags — --memory (default), --no-memory, and --resume control chat memory from the CLI.
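The chat-mode flags above compose on a single command line. These invocations use only the flags named in this section; the document path is a placeholder:

```shell
# Zero-config chat: auto-detects your API key
initrunner run

# Chat with RAG over a local folder
initrunner run --ingest ./docs/

# Enable every built-in tool, or cherry-pick specific ones
initrunner run --tool-profile all
initrunner run --tools git --tools shell

# Memory is on by default; disable it, or resume a prior session
initrunner run --no-memory
initrunner run --resume
```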

Remember

  • Built-in RAG — Ingest documents, chunk, embed, and vector-search with LanceDB. No external database required. In chat mode, just add --ingest ./docs/.
  • Memory — Three types: semantic, episodic, and procedural. Auto-consolidation distills episodes into durable facts. On by default in chat mode.

Automate

  • Triggers — Run agents on a cron schedule, file change, incoming webhook, or as a Telegram/Discord bot. Daemon mode included.
  • Team mode — Define multiple personas in one YAML for sequential multi-agent collaboration.
  • Multi-agent flow — Orchestrate multiple agents with delegate sinks and startup ordering.
  • Autonomy — Plan-execute-adapt loops that let agents work through multi-step tasks independently.
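The autonomy feature above names a plan-execute-adapt loop. The toy Python below sketches that control flow in general terms — it is not InitRunner's implementation, and the planner and executor here are trivial stand-ins:

```python
# Toy plan-execute-adapt loop: the planner picks the next step from the
# results so far, the executor runs it, and results feed back into planning.
# This illustrates the pattern only; it is not InitRunner code.

def plan(goal, history):
    """Pick the next step toward the goal (trivially indexed here)."""
    steps = ["gather input", "transform", "verify"]
    done = len(history)
    return steps[done] if done < len(steps) else None

def execute(step):
    """Run one step and report the outcome (always succeeds in this toy)."""
    return {"step": step, "ok": True}

def run_autonomously(goal, max_iters=10):
    history = []
    for _ in range(max_iters):
        step = plan(goal, history)
        if step is None:          # planner says the goal is reached
            break
        result = execute(step)
        history.append(result)    # adapt: results inform the next plan() call
    return history

print([r["step"] for r in run_autonomously("demo goal")])
```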

Ship

  • API server — initrunner run --serve exposes any agent as an OpenAI-compatible API with streaming.
  • Web dashboard + desktop app — Build agents, watch runs in real time, and browse audit logs from a browser or native window.
  • One-click cloud deploy — Deploy to Railway, Render, or Fly.io with pre-loaded example roles and persistent storage.
  • Guardrails & audit — Token budgets, tool limits, content filtering, PII redaction, and full action logging to SQLite.
  • MCP gateway — Expose agents as MCP servers for integration with other tools. Includes an MCP Hub dashboard for server discovery, health monitoring, and tool testing. See MCP Gateway.
  • OCI distribution — Package and distribute agents as OCI artifacts. See OCI Distribution.
  • Evals & testing — Test agents against expected outputs and score them automatically. See Evals.
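Because --serve speaks the OpenAI API shape, any OpenAI-compatible client can call the served agent. A hedged sketch — the port and the model name in the request body are assumptions; use whatever address the server prints on startup:

```shell
# Serve an agent (role.yaml as defined earlier), then call it with a
# standard OpenAI-style chat completions request. Port 8000 and the
# model name "my-agent" are assumptions, not documented defaults.
initrunner run --serve role.yaml &

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "my-agent", "messages": [{"role": "user", "content": "hello"}], "stream": true}'
```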

Quick Install

curl -fsSL https://initrunner.ai/install.sh | sh

Or with a package manager:

uv tool install "initrunner[recommended]"
pipx install "initrunner[recommended]"
pip install "initrunner[recommended]"

Or run with Docker:

docker run --rm -e OPENAI_API_KEY vladkesler/initrunner:latest --version

