Docker
Run InitRunner in a container without installing Python or managing dependencies. Images ship with all extras pre-installed (EXTRAS="all") — every provider, feature, and interface works out of the box.
Looking for the runtime sandbox? Since v2026.4.16, tool subprocesses run under a pluggable sandbox. See Runtime Sandbox for the overview, Bubblewrap Sandbox for the Linux-native backend, or Docker Sandbox for the container backend.
Tip: Want to skip Docker setup entirely? Cloud Deploy offers one-click deployment to Railway, Render, and Fly.io.
Images
Official images are published to both registries:
| Registry | Image |
|---|---|
| GitHub Container Registry | ghcr.io/vladkesler/initrunner:latest |
| Docker Hub | vladkesler/initrunner:latest |
Both are identical multi-platform images (linux/amd64 and linux/arm64) -- use whichever registry your environment prefers.
Quick Start
One-shot prompt
```bash
docker run --rm -e OPENAI_API_KEY \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run /roles/my-agent.yaml -p "Hello"
```

Interactive chat
```bash
docker run --rm -it -e OPENAI_API_KEY \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run /roles/my-agent.yaml -i
```

Cherry-picked tools
```bash
docker run --rm -it -e OPENAI_API_KEY \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run --tools git --tools filesystem
```

Document ingestion
```bash
docker run --rm -it -e OPENAI_API_KEY \
  -v ./docs:/docs \
  ghcr.io/vladkesler/initrunner:latest \
  run --ingest /docs
```

Web dashboard
```bash
docker run -d -e OPENAI_API_KEY \
  -v ./roles:/roles \
  -v initrunner-data:/data \
  -p 8100:8100 \
  ghcr.io/vladkesler/initrunner:latest \
  dashboard --role-dir /roles
```

Open http://localhost:8100 to access the dashboard.
Telegram bot
```bash
docker run -d -e OPENAI_API_KEY -e TELEGRAM_BOT_TOKEN \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run --telegram
```

API server
```bash
docker run -d -e OPENAI_API_KEY \
  -v ./roles:/roles \
  -p 8000:8000 \
  ghcr.io/vladkesler/initrunner:latest \
  run --serve
```

The API is available at http://localhost:8000.
Docker Compose
Create a docker-compose.yml:
```yaml
services:
  initrunner:
    # GHCR (default) — or use vladkesler/initrunner:latest (Docker Hub)
    image: ghcr.io/vladkesler/initrunner:latest
    # build: .  # uncomment to build from source
    ports:
      - "8100:8100"  # Web dashboard
      - "8000:8000"  # API server (if also running --serve)
    volumes:
      - ./roles:/roles
      - initrunner-data:/data
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - GOOGLE_API_KEY=${GOOGLE_API_KEY:-}
      - INITRUNNER_DASHBOARD_API_KEY=${INITRUNNER_DASHBOARD_API_KEY:-}  # persistent dashboard key
    restart: unless-stopped
    command: ["dashboard", "--role-dir", "/roles"]

volumes:
  initrunner-data:
```

Start the stack:
```bash
docker compose up -d
```

Policy Engine
To enable agent policy enforcement in Docker, mount your policy directory and set the environment variable:
```yaml
volumes:
  - ./policies:/data/policies
environment:
  - INITRUNNER_POLICY_DIR=/data/policies
```

See Agent Policy Engine for policy authoring details.
Building Locally
Build the image from the repository root:
```bash
docker build -t initrunner .
docker run --rm initrunner --version
```

Customizing extras
The default image includes all extras (EXTRAS="all"). You can narrow it down with a build arg:
```bash
docker build --build-arg EXTRAS="dashboard,anthropic" -t initrunner-custom .
```

Environment Variables
Pass API keys and configuration as environment variables:
| Variable | Description |
|---|---|
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |
| GOOGLE_API_KEY | Google API key |
| INITRUNNER_HOME | Data directory inside the container (defaults to /data) |
| INITRUNNER_DASHBOARD_API_KEY | Fixed dashboard API key (persists across container restarts) |
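To illustrate how the INITRUNNER_HOME default behaves, the fallback can be expressed with standard shell parameter expansion. This is a hypothetical sketch of the resolution rule described in the table, not the image's actual entrypoint code:

```shell
# Resolve the data directory: INITRUNNER_HOME wins when set,
# otherwise fall back to the image default of /data.
# (Hypothetical illustration; the real entrypoint may differ.)
DATA_DIR="${INITRUNNER_HOME:-/data}"
echo "$DATA_DIR"
```

With no override, this prints /data; running the container with `-e INITRUNNER_HOME=/srv/initrunner` would redirect all persistent state there instead.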
Volumes
| Container Path | Purpose |
|---|---|
| /roles | Mount your role YAML files here |
| /data | Persistent state — sessions, memory, vector indexes |
Ports
| Port | Service |
|---|---|
| 8000 | API server (initrunner run --serve) |
| 8100 | Web dashboard (initrunner dashboard) |
Docker Entrypoint
The Docker image uses a custom entrypoint that automatically seeds 9 curated starter examples into /data/roles/ on first boot. If the directory already contains files, seeding is skipped.
This is the same entrypoint used by the Cloud Deploy platforms (Railway, Render, Fly.io). If you want to disable seeding, mount your own role directory at /data/roles/ before starting the container.
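The skip-if-populated behavior described above can be sketched as a small shell function. This is a hypothetical illustration (the function name and layout are invented here), not the actual entrypoint script:

```shell
# Copy starter role YAMLs into the target directory, but only on first
# boot: if the directory already contains any files, do nothing.
seed_roles() {
  src="$1"
  dest="$2"
  mkdir -p "$dest"
  if [ -n "$(ls -A "$dest" 2>/dev/null)" ]; then
    echo "role directory not empty, skipping seed"
    return 0
  fi
  cp "$src"/*.yaml "$dest"/
}
```

Because the check runs before any copy, a volume you mount at /data/roles/ with even one file in it is left untouched, which is why pre-mounting your own role directory effectively disables seeding.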
Ollama Integration
If Ollama runs on the host machine, the container cannot reach it at localhost, because localhost inside the container refers to the container itself. Point the role YAML at the Docker host gateway address instead:
```yaml
spec:
  model:
    provider: ollama
    base_url: http://host.docker.internal:11434/v1
```
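Note that host.docker.internal resolves automatically on Docker Desktop (macOS and Windows) but not on native Linux. On Docker Engine 20.10 and later you can map it to the host gateway explicitly when starting the container (shown here reusing the interactive-chat invocation from above):

```shell
# Linux only: define host.docker.internal inside the container so the
# role's base_url can reach Ollama running on the host.
docker run --rm -it -e OPENAI_API_KEY \
  --add-host host.docker.internal:host-gateway \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run /roles/my-agent.yaml -i
```

In Docker Compose, the equivalent is an `extra_hosts` entry of `host.docker.internal:host-gateway` on the service.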