Examples
InitRunner ships with 20+ ready-to-run examples across three categories — single agents, compose pipelines, and reusable skills. You can discover and clone them straight from the CLI, or browse the detailed walkthroughs below to understand how each one works.
Browse and Clone from the CLI
The fastest way to get started is the built-in examples workflow:
- List every example to see what's available: `initrunner examples list`
- Inspect an example before cloning — see its description, required tools, and environment variables: `initrunner examples info code-reviewer`
- Clone it into your current directory: `initrunner examples clone code-reviewer`
- Run it: `initrunner run code-reviewer.yaml -p "Review the last commit"`

Tip: The walkthroughs below explain every field in detail. If you already know what you need, skip ahead to the Full Example Catalog for a complete list of available examples.
Detailed Walkthroughs
The following examples are explained section by section so you can understand the patterns and adapt them to your own agents.
Code Reviewer
A read-only code review agent that uses git and filesystem tools to examine changes and produce structured reviews.
apiVersion: initrunner/v1
kind: Agent
metadata:
name: code-reviewer
description: An experienced code review agent
tags:
- engineering
- review
spec:
role: |
You are an experienced senior software engineer performing code reviews.
When reviewing code:
1. Start with git_list_files to understand the project structure
2. Use git_changed_files to identify what was modified
3. Use git_diff with specific file paths to examine changes
4. Use git_log to understand the commit history and context
5. Read relevant source files to understand the surrounding code
6. Use git_blame on suspicious lines to understand their history
Review guidelines:
- Focus on correctness, readability, and maintainability
- Identify potential bugs, security issues, and performance problems
- Suggest specific improvements with code examples
- Be constructive and explain the reasoning behind each suggestion
- Prioritize issues by severity: critical > major > minor > style
If a diff is truncated, narrow your search by passing a specific file
path to git_diff.
Format your review as a structured list of findings, each with:
- Severity level
- Location (file/line if applicable)
- Description of the issue
- Suggested fix
model:
provider: openai
name: gpt-4o-mini
temperature: 0.1
max_tokens: 4096
tools:
- type: git
repo_path: .
read_only: true
- type: filesystem
root_path: .
read_only: true
guardrails:
max_tokens_per_run: 50000
max_tool_calls: 30
timeout_seconds: 300
max_request_limit: 50

`initrunner run code-reviewer.yaml -p "Review the last commit"`

What to notice: Two read-only tools (`git` + `filesystem`) give the agent everything it needs to navigate a codebase. The low temperature (0.1) keeps reviews consistent, and the structured role prompt produces predictable output formatting.
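To adapt this reviewer to a different checkout, only the tool paths need to change. A minimal sketch, reusing the `repo_path`, `root_path`, and `read_only` fields from the config above (the `../other-service` path is just an illustration):

```yaml
# Hypothetical variation: review a repository outside the current directory.
tools:
  - type: git
    repo_path: ../other-service   # illustrative path to the checkout under review
    read_only: true               # keep history and the working tree untouched
  - type: filesystem
    root_path: ../other-service
    read_only: true
```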
Data Analyst
A multi-tool agent that queries SQLite databases, runs Python analysis, and writes output files.
apiVersion: initrunner/v1
kind: Agent
metadata:
name: data-analyst
description: Queries a SQLite database and runs Python analysis
tags:
- example
- sql
- python
- analytics
spec:
role: |
You are a data analyst with access to a SQLite database and a Python
execution environment. Help the user explore data, answer questions, and
produce reports.
Workflow:
1. Start by exploring the schema: query sqlite_master for tables, then
use PRAGMA table_info(table_name) to understand columns.
2. Write SQL queries to answer the user's questions. Use aggregate
functions (COUNT, SUM, AVG, GROUP BY) for summaries.
3. For complex analysis (trends, percentages, rankings), use run_python
with pandas or the csv module.
4. Write reports and results to the ./output/ directory using write_file.
Guidelines:
- Always explore the schema before writing queries
- Use LIMIT when exploring large tables
- Explain your SQL logic to the user
- Format numbers with appropriate precision (2 decimal places for currency)
- When using Python, prefer the standard library (csv, statistics) if
pandas is not available
model:
provider: openai
name: gpt-4o-mini
temperature: 0.1
max_tokens: 4096
tools:
- type: sql
database: ./sample.db
read_only: true
max_rows: 100
- type: python
working_dir: .
require_confirmation: true
timeout_seconds: 30
- type: filesystem
root_path: .
read_only: false
allowed_extensions:
- .txt
- .md
- .csv
guardrails:
max_tokens_per_run: 50000
max_tool_calls: 30
timeout_seconds: 300
max_request_limit: 50

`initrunner run data-analyst.yaml -i -p "What were the top 5 products by revenue last quarter?"`

What to notice: Three tools working together — `sql` for queries, `python` for complex analysis, and `filesystem` for writing reports. The `require_confirmation: true` on the Python tool adds a safety gate before executing code.
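If reports should only ever land in one place, the `filesystem` tool can be narrowed instead of exposing the whole working directory. A sketch using the same fields as above; the `./output` path mirrors the role prompt, and scoping writes to that directory is an assumption about how `root_path` behaves:

```yaml
# Hypothetical tightening: confine writes to ./output and to report-style files.
tools:
  - type: filesystem
    root_path: ./output        # assumed to scope write_file to this directory
    read_only: false
    allowed_extensions:
      - .md
      - .csv
```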
RAG Knowledge Base
A documentation assistant with document ingestion, paragraph chunking, and source citation.
apiVersion: initrunner/v1
kind: Agent
metadata:
name: rag-agent
description: Knowledge base Q&A agent with document ingestion
tags:
- example
- rag
- knowledge-base
spec:
role: |
You are a helpful documentation assistant for AcmeDB. You answer user
questions using the ingested knowledge base.
Rules:
- ALWAYS call search_documents before answering a question
- Base your answers only on information found in the documents
- Cite the source document for each claim (e.g., "Per the Getting Started
guide, ...")
- If search_documents returns no relevant results, say so honestly rather
than guessing
- When a user asks about a topic covered across multiple documents,
synthesize the information and cite all relevant sources
- Use read_file to view a full document when the search snippet is not
enough context
model:
provider: openai
name: gpt-4o-mini
temperature: 0.1
max_tokens: 4096
ingest:
sources:
- ./docs/**/*.md
chunking:
strategy: paragraph
chunk_size: 512
chunk_overlap: 50
embeddings:
provider: openai
model: text-embedding-3-small
api_key_env: OPENAI_API_KEY
tools:
- type: filesystem
root_path: ./docs
read_only: true
allowed_extensions:
- .md
guardrails:
max_tokens_per_run: 30000
max_tool_calls: 15
timeout_seconds: 120
max_request_limit: 30

`initrunner ingest rag-agent.yaml`
`initrunner run rag-agent.yaml -p "How do I create a database?"`

What to notice: `paragraph` chunking preserves natural document structure (better for prose than `fixed`). The role prompt enforces citation discipline — the agent must call `search_documents` before answering and cite sources. The `filesystem` tool lets it read full documents when snippets aren't enough.
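If the knowledge base is mostly code samples or very long paragraphs, the `fixed` strategy mentioned above may chunk it more evenly. A sketch that assumes `fixed` accepts the same `chunk_size` and `chunk_overlap` fields as `paragraph`:

```yaml
# Hypothetical alternative ingest block using fixed-size chunks.
ingest:
  sources:
    - ./docs/**/*.md
  chunking:
    strategy: fixed
    chunk_size: 1024     # larger chunks, fewer embedding calls
    chunk_overlap: 100   # overlap preserves context across chunk boundaries
  embeddings:
    provider: openai
    model: text-embedding-3-small
    api_key_env: OPENAI_API_KEY
```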
GitHub Project Tracker
A declarative API agent that manages GitHub issues without writing any code — endpoints are defined entirely in YAML.
apiVersion: initrunner/v1
kind: Agent
metadata:
name: github-tracker
description: Manages GitHub issues and repos via declarative API endpoints
tags:
- example
- api
- github
spec:
role: |
You are a GitHub project assistant. You help users track issues, manage
repositories, and stay on top of their projects using the GitHub REST API.
Capabilities:
- List and search issues (filter by state, labels, assignee)
- View issue details including comments and labels
- Create new issues with title, body, and labels
- Add comments to existing issues
- List repositories for any user or organization
Guidelines:
- When listing issues, default to state=open unless the user specifies otherwise
- When creating issues, ask for confirmation before submitting
- Format issue lists as numbered summaries with title, state, and labels
- Include issue URLs in your responses so users can click through
- Use get_current_time for timestamps in comments
model:
provider: openai
name: gpt-4o-mini
temperature: 0.1
max_tokens: 4096
tools:
- type: api
name: github
description: GitHub REST API v3
base_url: https://api.github.com
headers:
Accept: application/vnd.github.v3+json
User-Agent: initrunner-github-tracker
auth:
Authorization: "Bearer ${GITHUB_TOKEN}"
endpoints:
- name: list_issues
method: GET
path: "/repos/{owner}/{repo}/issues"
description: List issues in a repository
parameters:
- name: owner
type: string
required: true
- name: repo
type: string
required: true
- name: state
type: string
required: false
default: open
- name: labels
type: string
required: false
query_params:
state: "{state}"
labels: "{labels}"
per_page: "10"
response_extract: "$[*].{number,title,state,labels[*].name}"
timeout: 15
- name: get_issue
method: GET
path: "/repos/{owner}/{repo}/issues/{issue_number}"
description: Get details of a specific issue
parameters:
- name: owner
type: string
required: true
- name: repo
type: string
required: true
- name: issue_number
type: integer
required: true
timeout: 15
- name: create_issue
method: POST
path: "/repos/{owner}/{repo}/issues"
description: Create a new issue
parameters:
- name: owner
type: string
required: true
- name: repo
type: string
required: true
- name: title
type: string
required: true
- name: body
type: string
required: false
- name: labels
type: string
required: false
body_template:
title: "{title}"
body: "{body}"
labels: "{labels}"
timeout: 15
- name: add_comment
method: POST
path: "/repos/{owner}/{repo}/issues/{issue_number}/comments"
description: Add a comment to an issue
parameters:
- name: owner
type: string
required: true
- name: repo
type: string
required: true
- name: issue_number
type: integer
required: true
- name: body
type: string
required: true
body_template:
body: "{body}"
timeout: 15
- type: datetime
guardrails:
max_tokens_per_run: 50000
max_tool_calls: 20
timeout_seconds: 120
max_request_limit: 30

`export GITHUB_TOKEN=ghp_...`
`initrunner run github-tracker.yaml -i -p "List open bugs in myorg/myrepo"`

What to notice: The `api` tool type defines REST endpoints declaratively — no Python code needed. `response_extract` uses JSONPath to trim verbose API responses down to the fields the agent needs. Environment variables (`${GITHUB_TOKEN}`) keep secrets out of YAML.
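The role prompt also mentions listing repositories, and a matching endpoint could be declared the same way. A hypothetical sketch to append under `endpoints:`, built on GitHub's `GET /users/{username}/repos` route and the same declarative fields used above:

```yaml
# Hypothetical extra endpoint, following the same schema as list_issues.
- name: list_repos
  method: GET
  path: "/users/{username}/repos"
  description: List public repositories for a user
  parameters:
    - name: username
      type: string
      required: true
  query_params:
    per_page: "10"
  response_extract: "$[*].{name,description,html_url}"   # trim responses like list_issues does
  timeout: 15
```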
Uptime Monitor
A daemon agent that checks HTTP endpoints on a cron schedule and alerts Slack on failures.
apiVersion: initrunner/v1
kind: Agent
metadata:
name: uptime-monitor
description: Checks HTTP endpoints and alerts Slack on failures
tags:
- example
- http
- slack
- monitoring
spec:
role: |
You are an uptime monitor. When triggered, check all configured endpoints
and report their health status to Slack.
Endpoints to check:
- GET /health — main application health
- GET /api/status — API service status
- GET /readiness — Kubernetes readiness probe
For each endpoint:
1. Make the HTTP request using http_request
2. Record the status code and response time
3. Use get_current_time to timestamp the check
Reporting rules:
- If ALL endpoints return 2xx: send a single green summary to Slack
- If ANY endpoint fails (non-2xx or timeout): send a red alert to Slack
with the failing endpoint, status code, and error details
- Always include the timestamp in the Slack message
model:
provider: openai
name: gpt-4o-mini
temperature: 0.0
max_tokens: 2048
tools:
- type: http
base_url: https://api.example.com
allowed_methods:
- GET
headers:
Accept: application/json
- type: slack
webhook_url: "${SLACK_WEBHOOK_URL}"
default_channel: "#ops-alerts"
username: Uptime Monitor
icon_emoji: ":satellite:"
- type: datetime
sinks:
- type: file
path: ./logs/uptime-results.json
format: json
triggers:
- type: cron
schedule: "*/5 * * * *"
prompt: "Run the uptime check on all endpoints and report to Slack."
timezone: UTC
guardrails:
max_tokens_per_run: 10000
max_tool_calls: 10
timeout_seconds: 60
max_request_limit: 15
daemon_token_budget: 500000
daemon_daily_token_budget: 100000

`initrunner daemon uptime-monitor.yaml`

What to notice: The `cron` trigger runs the agent every 5 minutes without human intervention. `daemon_token_budget` and `daemon_daily_token_budget` cap spending for unattended agents. The `file` sink logs every result to JSON for later analysis.
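Because `triggers` is a list, a second schedule can sit alongside the five-minute check, for example a daily digest. A sketch reusing the trigger fields shown above; the 09:00 schedule and digest prompt are illustrative:

```yaml
# Hypothetical second trigger: a daily digest in addition to the 5-minute checks.
triggers:
  - type: cron
    schedule: "*/5 * * * *"
    prompt: "Run the uptime check on all endpoints and report to Slack."
    timezone: UTC
  - type: cron
    schedule: "0 9 * * *"    # every day at 09:00 UTC
    prompt: "Summarize the last 24 hours of uptime results and post a digest to Slack."
    timezone: UTC
```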
Deployment Checker
An autonomous agent that creates a verification plan, executes checks, adapts on failure, and reports results — all without human intervention. See Autonomous Mode for details.
apiVersion: initrunner/v1
kind: Agent
metadata:
name: deployment-checker
description: Autonomous deployment verification agent
tags: [devops, autonomous, deployment]
spec:
role: |
You are a deployment verification agent. When given one or more URLs to check,
create a verification plan, execute each step, and produce a pass/fail report.
Workflow:
1. Use update_plan to create a checklist — one step per URL to verify
2. Run curl -sSL -o /dev/null -w "%{http_code} %{time_total}s" for each URL
3. Mark each step passed (2xx) or failed (anything else)
4. If a check fails, adapt your plan — add a retry or investigation step
5. When done, send a Slack summary with pass/fail results per URL
6. Call finish_task with the overall status
Keep each plan step concise. Mark steps completed/failed as you go.
model:
provider: openai
name: gpt-4o-mini
temperature: 0.0
tools:
- type: shell
allowed_commands:
- curl
require_confirmation: false
timeout_seconds: 30
- type: slack
webhook_url: "${SLACK_WEBHOOK_URL}"
default_channel: "#deployments"
username: Deploy Checker
icon_emoji: ":white_check_mark:"
autonomy:
max_plan_steps: 6
max_history_messages: 20
iteration_delay_seconds: 1
max_scheduled_per_run: 1
guardrails:
max_iterations: 6
autonomous_token_budget: 30000
max_tokens_per_run: 10000
max_tool_calls: 15
session_token_budget: 100000

`initrunner run deployment-checker.yaml -a -p "Verify https://api.example.com/health and https://api.example.com/ready"`

What to notice: The `autonomy` section enables plan-execute-adapt loops. The agent uses `update_plan` to track progress and `finish_task` to signal completion. `max_iterations` and `autonomous_token_budget` in guardrails prevent runaway execution.
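For longer verification runs with more URLs or retries, the limits above are the knobs to loosen. A sketch with larger but still bounded values; the numbers are illustrative, not recommendations:

```yaml
# Hypothetical scaled-up limits for a larger URL list.
autonomy:
  max_plan_steps: 12             # room for one step per URL plus retries
  max_history_messages: 40
  iteration_delay_seconds: 1
  max_scheduled_per_run: 1
guardrails:
  max_iterations: 12             # still bounded, so a stuck check cannot loop forever
  autonomous_token_budget: 60000
  max_tokens_per_run: 10000
  max_tool_calls: 30
  session_token_budget: 200000
```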
Multi-Agent Delegation
A coordinator that delegates research and writing to specialist sub-agents with shared memory.
coordinator.yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
name: research-coordinator
description: Orchestrator that delegates research and writing tasks
tags:
- example
- multi-agent
- delegation
spec:
role: |
You are a research coordinator. Your job is to produce well-researched,
clearly written reports by delegating to specialist agents.
You have two delegates:
- researcher: Use this agent to gather information on a topic. It can
fetch web pages and extract key facts. Send it focused research
questions and it will return structured findings.
- writer: Use this agent to turn raw research notes into polished prose.
Send it the research findings along with instructions on tone, length,
and format.
Workflow:
1. Break the user's request into research questions
2. Delegate each question to the researcher agent
3. Collect and review the research findings
4. Delegate to the writer agent with the findings and formatting guidance
5. Review the final output and return it to the user
Always delegate — do not research or write long-form content yourself.
model:
provider: openai
name: gpt-4o-mini
temperature: 0.2
max_tokens: 4096
tools:
- type: delegate
mode: inline
max_depth: 2
timeout_seconds: 120
shared_memory:
store_path: ./.initrunner/shared-research.db
max_memories: 500
agents:
- name: researcher
role_file: ./agents/researcher.yaml
description: Gathers information from the web on a given topic
- name: writer
role_file: ./agents/writer.yaml
description: Turns research notes into polished, structured writing
guardrails:
max_tokens_per_run: 100000
max_tool_calls: 30
timeout_seconds: 600
max_request_limit: 50

agents/researcher.yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
name: web-researcher
description: Research sub-agent that fetches web pages and extracts key facts
spec:
role: |
You are a focused research assistant. Your job is to find and extract
key facts on a given topic.
Guidelines:
- Use fetch_page to retrieve web content when given URLs or when you
need to look up specific information
- Extract only the most relevant facts — skip boilerplate and ads
- Return your findings as a structured bullet-point list
- Include the source URL for each fact
- If a page is irrelevant, say so and move on
- Do not editorialize or write prose — just report the facts
model:
provider: openai
name: gpt-4o-mini
temperature: 0.1
max_tokens: 2048
tools:
- type: web_reader
timeout_seconds: 15
guardrails:
max_tokens_per_run: 20000
max_tool_calls: 10
timeout_seconds: 120

agents/writer.yaml
apiVersion: initrunner/v1
kind: Agent
metadata:
name: content-writer
description: Writing sub-agent that produces polished prose from research notes
spec:
role: |
You are a skilled technical writer. You receive research notes and
produce clear, well-structured content.
Guidelines:
- Organize information with headings, subheadings, and logical flow
- Write in a clear, professional tone unless told otherwise
- Cite sources inline where appropriate
- Keep paragraphs short and scannable
- Use bullet points for lists of items or steps
- End with a brief summary or conclusion when appropriate
- Do not invent facts — only use information provided in the research notes
model:
provider: openai
name: gpt-4o-mini
temperature: 0.7
max_tokens: 4096
guardrails:
max_tokens_per_run: 10000
max_tool_calls: 0
timeout_seconds: 60

`initrunner run coordinator.yaml -p "Write a report on WebAssembly adoption in 2025"`

What to notice: The coordinator never researches or writes directly — it delegates via the `delegate_to_researcher` and `delegate_to_writer` tools. `shared_memory` gives all agents access to the same memory database. `max_depth: 2` prevents infinite delegation chains. The writer has `max_tool_calls: 0` — it's a pure generation agent with no tools.
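Adding another specialist is just another entry in the coordinator's `agents` list. A sketch with a hypothetical fact-checker delegate; the `./agents/fact-checker.yaml` file and its description are assumptions, not part of the shipped example:

```yaml
# Hypothetical third delegate appended to the coordinator's agents list.
agents:
  - name: researcher
    role_file: ./agents/researcher.yaml
    description: Gathers information from the web on a given topic
  - name: writer
    role_file: ./agents/writer.yaml
    description: Turns research notes into polished, structured writing
  - name: fact-checker                       # hypothetical addition
    role_file: ./agents/fact-checker.yaml    # a new role file you would write
    description: Verifies claims in the draft against the research notes
```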
Full Example Catalog
Every example below can be inspected with `initrunner examples info <name>` and cloned with `initrunner examples clone <name>`. Source files are also available in the GitHub examples directory.
Role Examples
Single-agent configurations — one YAML file, one purpose.
| Name | Description |
|---|---|
| `code-reviewer` | Read-only code review with git + filesystem tools |
| `data-analyst` | SQL queries, Python analysis, and report writing |
| `rag-agent` | Knowledge base Q&A with document ingestion and citation |
| `github-tracker` | Manage GitHub issues via declarative API endpoints |
| `uptime-monitor` | Cron-scheduled HTTP checks with Slack alerts |
| `deployment-checker` | Autonomous deployment verification with plan-execute loops |
| `memory-assistant` | Personal assistant that learns across sessions |
| `custom-tools-demo` | Custom Python tool functions with config injection |
| `security-scanner` | Static analysis and dependency audit agent |
| `log-analyzer` | Parse and summarize application logs |
| `db-migrator` | Generate and validate database migration scripts |
| `api-tester` | Automated REST API endpoint testing |
| `doc-generator` | Generate documentation from source code |
| `slack-responder` | Auto-respond to Slack messages with context-aware answers |
| `incident-responder` | On-call triage and runbook execution |
| `changelog-writer` | Generate changelogs from git history |
| `pr-summarizer` | Summarize pull request changes for reviewers |
Compose Examples
Multi-agent pipelines defined with `kind: Compose`.
| Name | Description |
|---|---|
| `content-pipeline` | Watcher → researcher → writer → reviewer |
| `email-pipeline` | Inbox watcher → triager → researcher → responder |
| `onboarding-pipeline` | Repo scanner → doc generator → quiz builder |
Skills
Reusable tool bundles you can import into any agent with `skills:` (see the sketch below the table).
| Name | Description |
|---|---|
| `web-research` | Web search, page fetching, and summarization |
| `git-ops` | Branch management, cherry-pick, and release tagging |
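This page doesn't show the exact import syntax. Assuming `skills:` takes a list of skill names, pulling one into an agent spec might look like the sketch below; treat it as a guess and check `initrunner examples info web-research` for the real usage:

```yaml
# Hypothetical sketch: the skills list below is an assumed syntax, not confirmed by this page.
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: research-helper
spec:
  role: |
    You answer questions using web research.
  model:
    provider: openai
    name: gpt-4o-mini
  skills:
    - web-research   # reusable web search, fetching, and summarization bundle
```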
Run `initrunner examples list` for the latest catalog — new examples are added with every release.