Memory in 5 Minutes
Give any agent persistent memory in three commands — facts it remembers across sessions, episodes it can look back on, and procedures it applies automatically.
Before you start: Memory needs an embedding model. The default is OpenAI text-embedding-3-small: set OPENAI_API_KEY to use it, or set embeddings.provider to switch providers (Google, Ollama, and more). No API keys? Jump to fully local setup.
The 3-Command Flow
initrunner init --name assistant --template memory   # scaffold a memory-ready role file
initrunner run role.yaml -i                          # chat: the agent can now remember things
initrunner run role.yaml -i --resume                 # pick up exactly where you left off
What each command does
initrunner init --name assistant --template memory
Scaffolds a role YAML pre-configured with spec.memory defaults and a system prompt that instructs the agent to use remember(), recall(), and learn_procedure(). The generated file looks like this:
apiVersion: initrunner/v1
kind: Agent
metadata:
  name: assistant
spec:
  role: |
    You are a helpful assistant with long-term memory.
    Use remember() to save important facts.
    Use recall() to search your memories before answering.
    Use learn_procedure() to record useful patterns.
  model:
    provider: openai
    name: gpt-4o-mini
  memory:
    max_sessions: 10
    max_resume_messages: 20
    embeddings:
      provider: openai
      model: text-embedding-3-small
      # api_key_env: OPENAI_API_KEY  # optional: override which env var holds the key
    semantic:
      max_memories: 1000
    episodic:
      max_episodes: 500
    procedural:
      max_procedures: 100
    consolidation:
      enabled: true
      interval: after_session
Change provider and model under spec.model to switch LLM backends. See Providers for all options.
Change provider and model under memory.embeddings to switch embedding backends.
initrunner run role.yaml -i
Starts the agent in interactive mode. The agent has three memory tools available automatically:
- Semantic (remember / recall): store and search arbitrary facts by meaning
- Episodic (record_episode): log experiences; auto-captured in autonomous and daemon modes
- Procedural (learn_procedure): save reusable rules that are auto-injected into the system prompt on future sessions
Every session is saved to ~/.initrunner/memory/<agent-name>/. Re-running without --resume starts a fresh context window but long-term memories persist.
initrunner run role.yaml -i --resume
Reloads the previous session's messages (up to max_resume_messages: 20 by default) so the conversation continues exactly where it left off. Semantic, episodic, and procedural memories are always available regardless of whether you resume.
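If the default replay window is too short, the max_resume_messages field shown in the generated role file above can be raised; a minimal sketch:

```yaml
spec:
  memory:
    max_resume_messages: 50  # replay up to the last 50 messages on --resume
```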
Inspect and Manage Memory
initrunner memory list role.yaml # show all stored memories
initrunner memory list role.yaml --type semantic # filter by memory type
initrunner memory consolidate role.yaml # extract facts from episodes
initrunner memory export role.yaml -o memories.json # export to JSON
initrunner memory clear role.yaml # wipe all memory for this agent
Embedding API Key
The embedding key is read from an environment variable. The default depends on your provider:
| Provider | Default env var | Notes |
|---|---|---|
| openai | OPENAI_API_KEY | |
| anthropic | OPENAI_API_KEY | Anthropic has no embeddings API; falls back to OpenAI by default. Set embeddings.provider to switch. |
| google | GOOGLE_API_KEY | |
| ollama | (none) | Runs locally |
Anthropic users: Anthropic has no embeddings API. The default fallback is OpenAI, so set OPENAI_API_KEY (in your environment or ~/.initrunner/.env) if you keep that default. To avoid needing an OpenAI key, set embeddings.provider: google or embeddings.provider: ollama instead.
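As a sketch, an Anthropic-backed agent can route embeddings to Google so no OpenAI key is needed (the Anthropic model name below is illustrative, not prescribed by this guide):

```yaml
spec:
  model:
    provider: anthropic
    name: claude-sonnet      # illustrative: use any supported Anthropic model name
  memory:
    embeddings:
      provider: google       # reads GOOGLE_API_KEY instead of an OpenAI key
```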
Override the key name — if your key is stored under a different env var name, set api_key_env in the embedding config:
spec:
  memory:
    embeddings:
      provider: openai
      # api_key_env: OPENAI_API_KEY  # optional override
Diagnose key issues with the doctor command:
initrunner doctor
The Embedding Providers section shows which keys are set and which are missing.
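For instance, if your key is stored under a team-specific variable, api_key_env redirects the lookup (MY_TEAM_OPENAI_KEY is a hypothetical name):

```yaml
spec:
  memory:
    embeddings:
      provider: openai
      api_key_env: MY_TEAM_OPENAI_KEY  # hypothetical env var holding the OpenAI key
```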
Fully Local — No API Keys
Swap both the LLM and the embedding model to Ollama for a completely local setup:
spec:
  model:
    provider: ollama
    name: llama3.2
  memory:
    embeddings:
      provider: ollama
      model: nomic-embed-text
Then run the same three commands; no API keys required.
Next Steps
- Memory reference — full configuration options, memory types, consolidation, and storage details
- Providers — all supported LLM and embedding backends
- Compose — share a memory store across multiple agents