Memory & Learning
OrcBot’s memory system provides multi-layered storage for conversations, user preferences, learned knowledge, and semantic search. The architecture supports both short-term operational memory and long-term persistent storage.
Memory Architecture
Memory Types
- Short Memory: Recent action steps and observations (last ~50 items)
- Episodic Memory: LLM-generated summaries of completed actions
- Long Memory: Persistent markdown files (USER.md, LEARNING.md, JOURNAL.md)
- Vector Memory: Semantic search index for embedding-based recall
- Daily Memory: Append-only logs organized by date
- Contact Profiles: Per-contact relationship context (WhatsApp)
Memory Limits
- Context limit: 50 short memories (configurable via `memoryContextLimit`)
- Episodic limit: 200 summaries (configurable via `memoryEpisodicLimit`)
- Consolidation threshold: 30 short memories before summarization
- Memory content max: 500 characters per entry
- Flush interval: Auto-flush when soft threshold reached
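These limits map naturally onto a settings object. Below is a minimal sketch; `memoryContextLimit` and `memoryEpisodicLimit` are the documented option names, while the other key names are assumptions for illustration:

```python
# Hypothetical settings mirroring the documented defaults.
# Only memoryContextLimit/memoryEpisodicLimit are documented names;
# the remaining keys are illustrative assumptions.
memory_config = {
    "memoryContextLimit": 50,      # short memories kept in context
    "memoryEpisodicLimit": 200,    # episodic summaries retained
    "consolidationThreshold": 30,  # short memories before summarization runs
    "maxEntryChars": 500,          # per-entry content cap
}
```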
recall_memory
Semantic search across ALL memory types. This is the primary skill for recalling past conversations and context.
Parameters
- Natural language description of what to recall. The system finds semantically similar memories, not just keyword matches.
- Maximum number of results to return
Return Value
Formatted list of relevant memories with timestamps, types, sources, and relevance scores.
Features
- Semantic search: Uses vector embeddings for meaning-based recall (when configured)
- Cross-channel: Searches across Telegram, WhatsApp, Discord, Slack, email
- Multi-type: Searches short, episodic, and long-term memory
- Keyword fallback: Falls back to keyword search if vector memory unavailable
- Ranked results: Sorted by relevance score (semantic) or recency (keyword)
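The search-then-fallback behavior above can be sketched as follows. This is an illustrative outline, not OrcBot’s actual implementation; the `vector_index` interface and memory record shape are assumptions:

```python
def recall(query, limit=5, vector_index=None, memories=()):
    """Illustrative recall: semantic search when a vector index is
    configured, keyword matching ranked by recency otherwise."""
    if vector_index is not None:
        # Assumed interface: returns hits with a relevance "score" field.
        hits = vector_index.search(query, limit)
        return sorted(hits, key=lambda h: h["score"], reverse=True)[:limit]
    # Keyword fallback: match any query term, rank by recency.
    terms = query.lower().split()
    matches = [m for m in memories
               if any(t in m["content"].lower() for t in terms)]
    return sorted(matches, key=lambda m: m["timestamp"], reverse=True)[:limit]
```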
Example Usage
Recall recent deployment discussion:
Response Example
Metadata
- isDeep: false (memory operations are fast)
- isParallelSafe: true
update_user_profile
Permanently persist information learned about the user.
Parameters
Information to add to USER.md. Should be factual, concise, and actionable.
Return Value
Confirmation that the profile was updated
What to Store
Store:
- User preferences (“prefers concise answers”, “works in PST timezone”)
- Core identity (name, role, company, location)
- Communication style (“direct and technical”)
- Important relationships (“team lead for Project X”)
- Constraints and requirements (“Python 3.11 only”, “no external dependencies”)
Don’t store:
- Ephemeral information (“currently working on feature Y”)
- Passwords or secrets
- Detailed conversation logs (use memory for that)
- Information that changes frequently
Example Usage
USER.md Format
The skill appends to USER.md in a structured format:
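As an illustration of what such a structured append might look like, here is a minimal sketch; the actual entry format OrcBot writes to USER.md may differ:

```python
from datetime import date

def append_user_fact(fact, path="USER.md"):
    """Minimal sketch of a dated, append-only profile entry.
    The real format used by update_user_profile may differ."""
    line = f"- {fact} (added {date.today().isoformat()})\n"
    with open(path, "a", encoding="utf-8") as f:
        f.write(line)
    return line
```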
Metadata
- isDeep: false
- isDangerous: false
update_learning
Research a topic and persist knowledge to LEARNING.md.
Parameters
Topic to research and learn about
Optional pre-researched content to save. If omitted, the skill automatically researches the topic.
Return Value
Confirmation with a summary of what was learned
Features
- Auto-research: If no content is provided, uses `web_search` or the LLM to gather information
- Structured storage: Organizes knowledge by topic headings
- LLM extraction: Uses fast model to distill key facts (capped at 3000 chars input)
- Size limits: Per-entry cap of 3000 chars to prevent bloat
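The research/distill/cap flow described above can be sketched like this. The `research` and `distill` callables stand in for `web_search` and the fast extraction model; their signatures are assumptions, as is the `##` heading format:

```python
def update_learning(topic, content=None, research=None, distill=lambda s: s):
    """Sketch of the documented flow: auto-research when no content is
    given, distill with a fast model, cap entry size. Callables injected
    here are stand-ins, not OrcBot's real interfaces."""
    if content is None:
        content = research(topic)        # auto-research via web_search or LLM
    facts = distill(content[:3000])      # extraction input capped at 3000 chars
    return f"## {topic}\n\n{facts[:3000]}\n"  # per-entry cap to prevent bloat
```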
Example Usage
Auto-research:
Response Example
LEARNING.md Format
Metadata
- isDeep: true (research operations are substantive)
- isResearch: false
LEARNING.md is for technical knowledge. Use it to build a knowledge base of programming concepts, frameworks, APIs, and technical patterns you encounter.
update_journal
Write a self-reflection entry to JOURNAL.md.
Parameters
Journal entry content. Should be introspective and reflective.
Return Value
Confirmation that the journal was updated
What to Journal
- Reflections on complex tasks
- Lessons learned from failures
- Insights about user behavior or preferences
- Self-improvement observations
- Strategic thinking about long-term goals
Example Usage
JOURNAL.md Format
Metadata
- isDeep: false
deep_reason
Perform intensive multi-step chain-of-thought analysis.
Parameters
Topic or question to analyze deeply
Return Value
Multi-step reasoning output with conclusions and insights
Use Cases
- Ethical dilemmas (“Should we prioritize speed or security?”)
- Complex technical decisions (“Which database architecture?”)
- Strategic planning (“How to scale this system to 1M users?”)
- Root cause analysis (“Why did this deployment fail?”)
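A multi-step reasoning loop of this kind might look like the following sketch. The prompts, step count, and return shape are illustrative assumptions, not OrcBot’s actual implementation:

```python
def deep_reason(question, llm, steps=3):
    """Illustrative chain-of-thought loop: accumulate intermediate
    reasoning steps, then ask for a conclusion grounded in them."""
    thoughts = []
    for _ in range(steps):
        prompt = (f"Question: {question}\n"
                  f"Steps so far: {thoughts}\n"
                  f"Next reasoning step:")
        thoughts.append(llm(prompt))
    conclusion = llm(f"Given steps {thoughts}, conclude for: {question}")
    return {"steps": thoughts, "conclusion": conclusion}
```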
Example Usage
Response Example
Metadata
- isDeep: true
- isResearch: false
RAG Knowledge Store
The RAG (Retrieval-Augmented Generation) knowledge store provides persistent, searchable document storage with semantic search.
rag_ingest
Ingest content into the knowledge store.
Parameters
- Content to ingest (text, markdown, JSON, CSV, code)
- Source identifier (e.g., “report.md”, “api-docs”)
- Collection name for organization
- Document title
- Tags for filtering
- Content format: text, markdown, csv, json, jsonl, code
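A chunk-and-store ingestion step along these lines might be sketched as follows; the in-memory store layout, document ID scheme, and chunk size are all assumptions for illustration:

```python
def rag_ingest(store, content, source, collection="default",
               title=None, tags=None, fmt="text", chunk_size=800):
    """Sketch: split content into fixed-size chunks and record them
    with metadata. The real store's layout may differ."""
    chunks = [content[i:i + chunk_size]
              for i in range(0, len(content), chunk_size)]
    doc_id = f"{collection}/{source}"  # hypothetical ID scheme
    store[doc_id] = {"title": title or source, "tags": tags or [],
                     "format": fmt, "chunks": chunks}
    return doc_id
```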
rag_ingest_file
Ingest a local file.
Parameters
- Path to the file to ingest
- Collection name
- Tags
- Document title
rag_ingest_url
Download and ingest from a URL.
Parameters
- URL to download and ingest
- Collection name
- Tags
- Document title
- Auto-detects HTML and applies Readability extraction
- Handles plain text, markdown, JSON
- Preserves metadata (URL, fetch date)
rag_search
Semantic search across ingested documents.
Parameters
- Search query
- Max results
- Search within a specific collection
- Filter by tags
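The filter-then-rank behavior can be sketched with a toy cosine-similarity search. The document shape and the `embed` callable are assumptions standing in for the real embedding backend:

```python
import math

def rag_search(docs, query, embed, limit=5, collection=None, tags=None):
    """Sketch: filter documents by collection/tags, then rank by
    cosine similarity between query and document embeddings."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    qv = embed(query)
    pool = [d for d in docs
            if (collection is None or d.get("collection") == collection)
            and (not tags or set(tags) & set(d.get("tags", [])))]
    return sorted(pool, key=lambda d: cosine(qv, d["vector"]),
                  reverse=True)[:limit]
```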
rag_list
List documents and collections.
Parameters
- List a specific collection (omit for all)
rag_delete
Delete documents or entire collections.
Parameters
- Specific document ID to delete
- Delete an entire collection
Best Practices
Vector memory requires API keys: semantic search needs `openaiApiKey` or `googleApiKey`. Without one, the system falls back to keyword search.
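This fallback reduces to a simple key check. A sketch, assuming the config keys match the option names above:

```python
def search_mode(config):
    """Mirrors the documented fallback: semantic search only when an
    embedding API key is configured, keyword search otherwise."""
    if config.get("openaiApiKey") or config.get("googleApiKey"):
        return "semantic"
    return "keyword"
```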