InitRunner

Troubleshooting & FAQ

Provider & API Key Issues

API key not found

Error: API key not found for provider 'openai'

InitRunner looks for API keys in this order:

  1. spec.model.api_key in the role file (not recommended for production)
  2. Environment variable: OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY, etc.
  3. .env file in the role file's directory
  4. ~/.initrunner/.env global config

Fix: Export the key or add it to your .env file:

export OPENAI_API_KEY=sk-...
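For local development, a .env file keeps keys out of your shell history. A minimal sketch, following the lookup order above:

```shell
# .env — place next to role.yaml, or in ~/.initrunner/.env for global config
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...
```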

Model not found

Error: Model 'gpt-5-turbo' not found for provider 'openai'

Fix: Check the model name matches your provider's available models. Run:

initrunner models --provider openai

See Providers for supported models per provider.

Rate limiting / 429 errors

Error: Rate limit exceeded (429)

Fix:

  • Reduce max_tokens_per_run to lower per-request token usage (temperature does not affect usage)
  • Add iteration_delay_seconds in autonomous mode to space out requests
  • Switch to a higher-tier API plan
  • Use a different model (e.g., gpt-4o-mini instead of gpt-4o)
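For example, you might combine a smaller model with a delay between autonomous iterations. A sketch (the placement of autonomy alongside model is an assumption — match it to your role file layout):

```yaml
spec:
  model:
    provider: openai
    name: gpt-4o-mini            # smaller model, typically higher rate limits
  autonomy:
    iteration_delay_seconds: 5   # pause between iterations to stay under 429s
```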

Tool Execution Failures

Tool not found

Error: Tool 'search_documents' is not registered

Fix: This usually means the tool wasn't configured in spec.tools, or, for search_documents, that you haven't added a spec.ingest section. Run initrunner ingest role.yaml after adding ingestion config.
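A sketch of the two sections involved (the exact sources entry format is an assumption — check your ingestion docs):

```yaml
spec:
  tools:
    - type: shell       # register each tool the role needs
  ingest:               # required before search_documents will work
    sources:
      - docs/**/*.md    # glob pattern, consumed by `initrunner ingest`
```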

Permission denied (filesystem)

Error: Access denied: path '/etc/passwd' is outside allowed root

Filesystem tools are sandboxed to root_path. You cannot access files outside the configured directory.

Fix: Update root_path in your filesystem tool config, or use an absolute path that falls within the allowed root.
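A minimal sketch of the sandbox config (the path is illustrative):

```yaml
tools:
  - type: filesystem
    root_path: /home/user/project   # the agent can only touch files under here
```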

Shell command blocked

Error: Command 'rm' is not in the allowed commands list

Shell tools restrict which commands can run via allowed_commands.

Fix: Add the command to the allowlist in your role file:

tools:
  - type: shell
    allowed_commands:
      - curl
      - rm  # add the command you need

MCP connection failed

Error: Failed to connect to MCP server at localhost:3001

Fix:

  • Verify the MCP server is running and listening on the expected port
  • Check that the url in your MCP tool config matches the server address
  • Test connectivity: curl http://localhost:3001/health

Memory & Ingestion Problems

No documents ingested

search_documents returned: "No documents have been ingested yet"

Fix: Run the ingestion pipeline before querying:

initrunner ingest role.yaml

Make sure your spec.ingest.sources glob patterns match actual files:

# Test the glob pattern (bash needs `shopt -s globstar` for ** to recurse)
ls docs/**/*.md

Memory not persisting between sessions

Short-term memory only lasts for the duration of a single session or daemon run. If you need cross-session recall, configure long-term memory:

memory:
  long_term:
    enabled: true
    store_path: ~/.initrunner/memory/my-agent.db

See Memory for details.

Embedding errors

Error: Failed to generate embeddings

Fix:

  • Check that the embedding provider API key is set
  • Verify the embedding model exists (e.g., text-embedding-3-small for OpenAI)
  • If using a different provider for embeddings than for the main model, set ingest.embeddings.provider explicitly
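A sketch of an explicit embeddings section (the model key name is an assumption; the provider key comes from the bullet above):

```yaml
ingest:
  embeddings:
    provider: openai                 # set explicitly when it differs from spec.model
    model: text-embedding-3-small    # key name assumed — check the Ingestion docs
```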

YAML Configuration Mistakes

Missing required fields

Error: 'spec.role' is required

Every role file needs at minimum:

apiVersion: initrunner/v1
kind: Agent
metadata:
  name: my-agent
spec:
  role: Your system prompt here.
  model:
    provider: openai
    name: gpt-4o-mini

Indentation errors

YAML is indentation-sensitive. Use 2 spaces (not tabs). Common mistakes:

# Wrong — tools is not under spec
spec:
  role: ...
tools:          # should be indented under spec
  - type: shell

# Correct
spec:
  role: ...
  tools:
    - type: shell

Environment variable substitution

Variables like ${SLACK_WEBHOOK_URL} are resolved at runtime from the environment. An unset variable resolves to an empty string.

Fix:

  • Export the variable: export SLACK_WEBHOOK_URL=https://hooks.slack.com/...
  • Add it to .env in the role file's directory
  • For systemd/compose deployments, use the environment file (see Compose)
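You can catch an empty substitution before starting the daemon — a small sketch using standard shell parameter checking:

```shell
# Warn if SLACK_WEBHOOK_URL is unset or empty — ${SLACK_WEBHOOK_URL} in the
# role file would otherwise silently resolve to "".
if [ -z "${SLACK_WEBHOOK_URL:-}" ]; then
  echo "SLACK_WEBHOOK_URL is empty; export it or add it to .env" >&2
fi
```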

Autonomous Mode Issues

Infinite loops / agent won't stop

Cause: The agent keeps creating new plan steps or never calls finish_task.

Fix: Set guardrails to enforce limits:

guardrails:
  max_iterations: 5
  autonomous_token_budget: 30000
  max_tool_calls: 15
autonomy:
  max_plan_steps: 6
  iteration_delay_seconds: 2

The agent will stop when any limit is reached.

Empty or vague plans

Cause: The system prompt doesn't give the agent clear enough instructions on what to do.

Fix: Be specific in spec.role about the expected workflow:

role: |
  You are a deployment checker. Follow these steps exactly:
  1. Use update_plan to create a verification checklist
  2. Run curl for each endpoint
  3. Mark each step passed or failed
  4. Call finish_task with the overall result

See Autonomy for best practices.


Daemon & Trigger Issues

Cron not firing

Fix:

  • Verify the cron expression is valid (5-field format: min hour day month weekday)
  • Check timezone — defaults to UTC
  • Make sure the daemon is running: initrunner daemon role.yaml
  • Check audit logs for errors: sqlite3 ~/.initrunner/audit.db "SELECT * FROM events ORDER BY created_at DESC LIMIT 10"
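A sketch of a cron trigger with the 5-field format (the trigger field names here are assumptions — see Triggers for the actual schema):

```yaml
triggers:
  - type: cron
    schedule: "30 2 * * *"   # min hour day month weekday — 02:30 daily
    timezone: UTC            # explicit, since the default is UTC anyway
```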

File watcher not detecting changes

Fix:

  • Ensure the paths directory exists before starting the daemon
  • Check extensions filter — an empty list watches all files, a populated list only watches those extensions
  • Increase debounce_seconds so rapid consecutive changes are coalesced into a single event instead of being dropped
  • Verify process_existing: true if you want existing files to be processed on startup
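Putting the options from the checklist together (the trigger wrapper and type name are assumptions; the option names come from the bullets above):

```yaml
triggers:
  - type: file_watch            # type name assumed — see Triggers
    paths: [./inbox]            # directory must exist before the daemon starts
    extensions: [.csv, .json]   # empty list = watch all files
    debounce_seconds: 5         # coalesce bursts of changes
    process_existing: true      # also process files already present at startup
```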

Webhook not receiving events

Fix:

  • Confirm the port is not already in use: ss -tlnp | grep 8080
  • Test locally: curl -X POST http://127.0.0.1:8080/webhook -d '{"test": true}'
  • If using HMAC verification (secret), ensure the sender includes a valid X-Hub-Signature-256 header
  • Check firewall rules if the sender is on a different host
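When HMAC verification is enabled, a plain curl will be rejected. A sketch of computing a valid X-Hub-Signature-256 header (HMAC-SHA256 of the raw body, keyed with the shared secret):

```shell
# Compute the signature the webhook endpoint expects for a test payload.
SECRET=mysecret                 # must match the `secret` in your trigger config
BODY='{"test": true}'
SIG="sha256=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')"
echo "$SIG"
# Then post it once the daemon is listening:
# curl -X POST http://127.0.0.1:8080/webhook \
#   -H "Content-Type: application/json" \
#   -H "X-Hub-Signature-256: $SIG" \
#   -d "$BODY"
```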

See Triggers for full configuration.


Compose Issues

Circular dependency detected

Error: Circular dependency: a -> b -> c -> a

Fix: Redesign the service graph so that data flows in one direction. Use a shared sink or an intermediary service to break the cycle.

Delegate sink not connecting

Error: Delegate target 'consumer' not found in services

Fix: The target name in a delegate sink must exactly match a service name defined in spec.services. Check for typos.

Services not starting in order

Fix: Add depends_on to enforce startup ordering:

services:
  producer:
    role: roles/producer.yaml
    sink: { type: delegate, target: consumer }
  consumer:
    role: roles/consumer.yaml
    depends_on: [producer]

See Compose for the full orchestration guide.


Performance Tips

  • Choose the right model — Use gpt-4o-mini or equivalent for simple tasks. Reserve larger models for complex reasoning.
  • Limit guardrails to what you need — Overly aggressive max_tool_calls or max_tokens_per_run can cause agents to stop before finishing useful work.
  • Use read_only: true on filesystem tools when agents only need to read files. This skips confirmation prompts and reduces overhead.
  • Tune chunking for RAG — Smaller chunks (256-512) give more precise search results. Larger chunks (1024+) provide more context but may dilute relevance.
  • Use paragraph chunking for prose — It preserves document structure better than fixed chunking for documentation and articles.
  • Add iteration_delay_seconds in autonomous mode to avoid hitting rate limits.

FAQ

Can I use multiple providers in one agent?

Not within a single agent — each agent is bound to one spec.model provider. However, you can use Compose to orchestrate multiple agents, each with a different provider.

Can I run agents offline?

Yes, if you use a local provider like Ollama. Tools, memory, and ingestion all work without an internet connection; only calls to hosted LLM APIs require connectivity.

Where is my data stored?

Data                 Default Location
Audit logs           ~/.initrunner/audit.db
Memory               ~/.initrunner/memory/<agent-name>.db
Ingestion vectors    ~/.initrunner/stores/<agent-name>.db
Session state        In-memory (lost on exit)

How do I reset memory?

Delete the memory database file:

rm ~/.initrunner/memory/my-agent.db

Or re-ingest documents to rebuild the vector store:

initrunner ingest role.yaml

Can I use InitRunner in CI/CD?

Yes. Use single-shot mode with -p to pass a prompt and capture the output:

initrunner run role.yaml -p "Analyze the latest test results" --output json

Set API keys as CI environment variables. See Testing for test automation patterns.
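An illustrative CI step (GitHub Actions syntax shown as one example — adapt to your CI system):

```yaml
- name: Agent analysis
  env:
    OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
  run: initrunner run role.yaml -p "Analyze the latest test results" --output json > result.json
```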

How do I update InitRunner?

pip install --upgrade initrunner

Or with extras:

pip install --upgrade "initrunner[ingest]"
