InitRunner agents are configured through YAML role files. Every role file follows the `apiVersion`/`kind`/`metadata`/`spec` structure.
```yaml
apiVersion: initrunner/v1      # Required — API version
kind: Agent                    # Required — must be "Agent"
metadata:
  name: my-agent               # Required — unique agent identifier
  description: ""              # Optional — human-readable description
  tags: []                     # Optional — categorization tags
  author: ""                   # Optional — author name
  version: ""                  # Optional — semantic version
  dependencies: []             # Optional — pip dependencies
spec:
  role: |                      # Required — system prompt
    You are a helpful assistant.
  model:                       # Model configuration
    provider: openai           # Provider name
    name: gpt-4o-mini          # Model identifier
    temperature: 0.1           # Sampling temperature (0.0-2.0)
    max_tokens: 4096           # Max tokens per response
    base_url: null             # Custom endpoint URL
    api_key_env: null          # Env var for API key
  tools: []                    # Tool configurations
  guardrails: {}               # Resource limits
  ingest: null                 # Document ingestion / RAG
  memory: null                 # Memory system
  triggers: []                 # Trigger configurations
  sinks: []                    # Output sink configurations
  security: null               # Security policy
  skills: []                   # Skill references
```
| Field | Type | Default | Description |
|---|---|---|---|
| name | str | (required) | Unique agent identifier |
| description | str | "" | Human-readable description |
| tags | list[str] | [] | Categorization tags |
| author | str | "" | Author name |
| version | str | "" | Semantic version string |
| dependencies | list[str] | [] | pip dependencies for custom tools |
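For instance, an agent whose custom tools need extra packages can pin them in `dependencies`. A minimal sketch (the agent name and package versions here are illustrative, not prescribed defaults):

```yaml
metadata:
  name: report-agent           # hypothetical agent name
  description: Generates weekly reports
  tags: [reports]
  version: "1.2.0"
  dependencies:                # pip requirement specifiers for custom tools
    - pandas>=2.0
    - requests
```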
| Field | Type | Default | Description |
|---|---|---|---|
| provider | str | "openai" | Provider name (openai, anthropic, google, groq, mistral, ollama) |
| name | str | "gpt-4o-mini" | Model identifier |
| base_url | str \| null | null | Custom endpoint URL (enables OpenAI-compatible mode) |
| api_key_env | str \| null | null | Environment variable containing the API key |
| temperature | float | 0.1 | Sampling temperature (0.0-2.0) |
| max_tokens | int | 4096 | Maximum tokens per response (1-128000) |
See Providers for provider-specific setup and Ollama/OpenRouter configuration.
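As one sketch, a `model` block pointing at a local Ollama server through its OpenAI-compatible endpoint might look like this. The model name is a placeholder for whatever you have pulled locally, and whether `provider: ollama` also requires `base_url` depends on your provider setup (see Providers):

```yaml
spec:
  model:
    provider: ollama
    name: llama3.1                        # placeholder — any locally pulled model
    base_url: http://localhost:11434/v1   # Ollama's default OpenAI-compatible endpoint
    temperature: 0.1
    max_tokens: 4096
```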
| Field | Type | Default | Description |
|---|---|---|---|
| max_tokens_per_run | int | 50000 | Maximum output tokens consumed per agent run |
| max_tool_calls | int | 20 | Maximum tool invocations per run |
| timeout_seconds | int | 300 | Wall-clock timeout per run |
| max_request_limit | int \| null | null | Maximum LLM API round-trips per run |
| input_tokens_limit | int \| null | null | Per-request input token limit |
| total_tokens_limit | int \| null | null | Per-request combined input+output token limit |
| session_token_budget | int \| null | null | Cumulative token budget for REPL session (warns at 80%) |
| daemon_token_budget | int \| null | null | Lifetime token budget for daemon process |
| daemon_daily_token_budget | int \| null | null | Daily token budget for daemon (resets at UTC midnight) |
See Guardrails for enforcement behavior, daemon budgets, and autonomous limits.
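A daemon-oriented `guardrails` block might combine the per-run limits with lifetime and daily budgets. The specific values below are illustrative, not recommendations:

```yaml
spec:
  guardrails:
    max_tokens_per_run: 20000          # cap output tokens per run
    max_tool_calls: 10                 # cap tool invocations per run
    timeout_seconds: 120               # wall-clock limit per run
    daemon_token_budget: 2000000       # lifetime cap for the daemon process
    daemon_daily_token_budget: 100000  # resets at UTC midnight
```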
| Section | Description | Docs |
|---|---|---|
| model | LLM provider and model settings | Providers |
| tools | Tool configurations (filesystem, HTTP, MCP, custom, etc.) | Tools |
| guardrails | Token limits, timeouts, tool call limits | Guardrails |
| autonomy | Autonomous plan-execute-adapt loops | Autonomy |
| ingest | Document ingestion and RAG pipeline | Ingestion |
| memory | Session persistence and semantic memory | Memory |
| triggers | Cron, file watch, and webhook triggers | Triggers |
| security | Content policies, rate limiting, tool sandboxing | Security |
| Variable | Description |
|---|---|
| OPENAI_API_KEY | OpenAI API key |
| ANTHROPIC_API_KEY | Anthropic API key |
| GOOGLE_API_KEY | Google AI API key |
| GROQ_API_KEY | Groq API key |
| MISTRAL_API_KEY | Mistral API key |
| INITRUNNER_HOME | Data directory (default: ~/.initrunner/) |
The data directory is resolved in this order: `INITRUNNER_HOME` > `$XDG_DATA_HOME/initrunner` > `~/.initrunner`.
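To read a key from a variable other than the provider defaults above, name it in `api_key_env`. A sketch for routing requests through an OpenAI-compatible proxy (the endpoint URL and `MY_PROXY_KEY` are hypothetical placeholders):

```yaml
spec:
  model:
    provider: openai
    name: gpt-4o-mini
    base_url: https://proxy.example.com/v1   # hypothetical OpenAI-compatible proxy
    api_key_env: MY_PROXY_KEY                # read the API key from this env var
```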
A complete role file combining ingestion, tools, triggers, and guardrails:

```yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: support-agent
  description: Answers questions from the support knowledge base
  tags:
    - support
    - rag
spec:
  role: |
    You are a support agent. Use search_documents to find relevant
    articles before answering. Always cite your sources.
  model:
    provider: openai
    name: gpt-4o-mini
    temperature: 0.1
    max_tokens: 4096
  ingest:
    sources:
      - "./knowledge-base/**/*.md"
      - "./docs/**/*.pdf"
    chunking:
      strategy: fixed
      chunk_size: 512
      chunk_overlap: 50
  tools:
    - type: filesystem
      root_path: ./src
      read_only: true
    - type: mcp
      transport: stdio
      command: npx
      args: ["-y", "@anthropic/mcp-server-filesystem"]
  triggers:
    - type: file_watch
      paths: ["./knowledge-base"]
      extensions: [".html", ".md"]
      prompt_template: "Knowledge base updated: {path}. Re-index."
    - type: cron
      schedule: "0 9 * * 1"
      prompt: "Generate weekly support coverage report."
  guardrails:
    max_tokens_per_run: 50000
    max_tool_calls: 20
    timeout_seconds: 300
```