InitRunner

Docker

Run InitRunner in a container without installing Python or managing dependencies. Images ship with all extras pre-installed (EXTRAS="all") — every provider, feature, and interface works out of the box.

Looking for the runtime sandbox? Since v2026.4.16, tool subprocesses run under a pluggable sandbox. See Runtime Sandbox for the overview, Bubblewrap Sandbox for the Linux-native backend, or Docker Sandbox for the container backend.

Tip: Want to skip Docker setup entirely? Cloud Deploy offers one-click deployment to Railway, Render, and Fly.io.

Images

Official images are published to both registries:

| Registry | Image |
| --- | --- |
| GitHub Container Registry | ghcr.io/vladkesler/initrunner:latest |
| Docker Hub | vladkesler/initrunner:latest |

Both are identical multi-platform images (linux/amd64 and linux/arm64); use whichever registry your environment prefers.

Quick Start

One-shot prompt

docker run --rm -e OPENAI_API_KEY \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run /roles/my-agent.yaml -p "Hello"

Interactive chat

docker run --rm -it -e OPENAI_API_KEY \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run /roles/my-agent.yaml -i

Cherry-picked tools

docker run --rm -it -e OPENAI_API_KEY \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run --tools git --tools filesystem

Document ingestion

docker run --rm -it -e OPENAI_API_KEY \
  -v ./docs:/docs \
  ghcr.io/vladkesler/initrunner:latest \
  run --ingest /docs

Web dashboard

docker run -d -e OPENAI_API_KEY \
  -v ./roles:/roles \
  -v initrunner-data:/data \
  -p 8100:8100 \
  ghcr.io/vladkesler/initrunner:latest \
  dashboard --role-dir /roles

Open http://localhost:8100 to access the dashboard.

Telegram bot

docker run -d -e OPENAI_API_KEY -e TELEGRAM_BOT_TOKEN \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run --telegram

API server

docker run -d -e OPENAI_API_KEY \
  -v ./roles:/roles \
  -p 8000:8000 \
  ghcr.io/vladkesler/initrunner:latest \
  run --serve

The API is available at http://localhost:8000.

Docker Compose

Create a docker-compose.yml:

services:
  initrunner:
    # GHCR (default) — or use vladkesler/initrunner:latest (Docker Hub)
    image: ghcr.io/vladkesler/initrunner:latest
    # build: .   # uncomment to build from source
    ports:
      - "8100:8100"   # Web dashboard
      - "8000:8000"   # API server (if also running --serve)
    volumes:
      - ./roles:/roles
      - initrunner-data:/data
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
      - GOOGLE_API_KEY=${GOOGLE_API_KEY:-}
      - INITRUNNER_DASHBOARD_API_KEY=${INITRUNNER_DASHBOARD_API_KEY:-}  # persistent dashboard key
    restart: unless-stopped
    command: ["dashboard", "--role-dir", "/roles"]

volumes:
  initrunner-data:

Start the stack:

docker compose up -d
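Once the stack is up, the standard Compose commands work for inspecting and tearing it down (nothing InitRunner-specific here — these are plain Docker Compose operations):

```shell
# Follow the service's logs
docker compose logs -f initrunner

# Stop and remove the container; the initrunner-data named volume persists
docker compose down
```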

Policy Engine

To enable agent policy enforcement in Docker, mount your policy directory and set the environment variable:

volumes:
  - ./policies:/data/policies
environment:
  - INITRUNNER_POLICY_DIR=/data/policies

See Agent Policy Engine for policy authoring details.
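The Compose fragment above translates directly to a plain docker run invocation — a sketch reusing the mount path and environment variable from the fragment (the role file name is a placeholder):

```shell
docker run --rm -e OPENAI_API_KEY \
  -e INITRUNNER_POLICY_DIR=/data/policies \
  -v ./roles:/roles \
  -v ./policies:/data/policies \
  ghcr.io/vladkesler/initrunner:latest \
  run /roles/my-agent.yaml -p "Hello"
```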

Building Locally

Build the image from the repository root:

docker build -t initrunner .
docker run --rm initrunner --version

Customizing extras

The default image includes all extras (EXTRAS="all"). You can narrow it down with a build arg:

docker build --build-arg EXTRAS="dashboard,anthropic" -t initrunner-custom .
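As with the default build, a quick smoke test confirms the slimmed-down image still starts:

```shell
docker run --rm initrunner-custom --version
```

Note that features whose extras were excluded (for example the Telegram interface, if its extra is not in the list) will be unavailable in the resulting image.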

Environment Variables

Pass API keys and configuration as environment variables:

| Variable | Description |
| --- | --- |
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |
| GOOGLE_API_KEY | Google API key |
| INITRUNNER_HOME | Data directory inside the container (defaults to /data) |
| INITRUNNER_DASHBOARD_API_KEY | Fixed dashboard API key (persists across container restarts) |
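Instead of repeating -e for each key, you can collect them in an env file and pass the whole file at once — this uses Docker's standard --env-file flag, and the file name here is just a convention:

```shell
# .env contains one KEY=value per line, e.g.:
#   OPENAI_API_KEY=sk-...
#   ANTHROPIC_API_KEY=sk-ant-...
docker run --rm --env-file .env \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run /roles/my-agent.yaml -p "Hello"
```

This keeps secrets out of your shell history and matches how the Compose file reads them from the host environment.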

Volumes

| Container Path | Purpose |
| --- | --- |
| /roles | Mount your role YAML files here |
| /data | Persistent state — sessions, memory, vector indexes |

Ports

| Port | Service |
| --- | --- |
| 8000 | API server (initrunner run --serve) |
| 8100 | Web dashboard (initrunner dashboard) |

Docker Entrypoint

The Docker image uses a custom entrypoint that automatically seeds 9 curated starter examples into /data/roles/ on first boot. If the directory already contains files, seeding is skipped.

This is the same entrypoint used by the Cloud Deploy platforms (Railway, Render, Fly.io). If you want to disable seeding, mount your own role directory at /data/roles/ before starting the container.

Ollama Integration

If Ollama runs on the host machine, the container cannot reach it via localhost (inside the container, localhost refers to the container itself). Use the Docker host gateway address in your role YAML:

spec:
  model:
    provider: ollama
    base_url: http://host.docker.internal:11434/v1
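host.docker.internal resolves automatically on Docker Desktop (macOS and Windows), but not on Linux by default. There you can map it yourself with Docker's host-gateway alias when starting the container (a standard Docker flag, available since Docker 20.10):

```shell
docker run --rm -it \
  --add-host=host.docker.internal:host-gateway \
  -v ./roles:/roles \
  ghcr.io/vladkesler/initrunner:latest \
  run /roles/my-agent.yaml -i
```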
