## Documentation Index
Fetch the complete documentation index at: https://docs.mem0.ai/llms.txt
Use this file to discover all available pages before exploring further.
## New Memory Algorithm — State-of-the-Art Accuracy at ~3-4x Lower Cost

Ground-up rewrite of the memory pipeline with 20+ point benchmark improvements:
- LoCoMo: 71.4 → 91.6 (+20) — multi-turn conversation recall
- LongMemEval: 67.8 → 93.4 (+26) — long-term memory across sessions
- BEAM (1M tokens): 64.1 — production-scale memory evaluation
- Agent memories are first-class — Previous algorithm: 46% on assistant recall. New: 100%
- Temporal reasoning works — “Where did I live before SF?” Previous: 51%. New: 93%
- ~3-4x fewer tokens — Under 7K tokens per retrieval vs 25K+ for full-context approaches
- ADD-only extraction — Memories accumulate; nothing is overwritten or deleted
- Hybrid retrieval — Semantic + BM25 keyword + entity boost, scored in parallel
- Entity linking — Entities extracted, embedded, and linked across memories
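The hybrid retrieval idea above can be illustrated with a toy scorer. This is not Mem0's implementation — just a minimal, self-contained sketch of blending a semantic similarity score, a BM25 keyword score, and an entity-overlap boost into one ranking; the weights, field names, and data shapes are all assumptions for illustration.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Plain BM25 over pre-tokenized docs (each doc is a list of tokens)."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query, memories, w_sem=0.6, w_kw=0.3, w_ent=0.1):
    """Blend semantic, keyword, and entity signals into one score per memory."""
    kw = bm25_scores(query["terms"], [m["terms"] for m in memories])
    max_kw = max(kw) or 1.0  # normalize BM25 into [0, 1]
    ranked = []
    for m, k in zip(memories, kw):
        sem = cosine(query["embedding"], m["embedding"])
        ent = 1.0 if query["entities"] & m["entities"] else 0.0  # entity boost
        ranked.append((w_sem * sem + w_kw * (k / max_kw) + w_ent * ent, m["id"]))
    return sorted(ranked, reverse=True)
```

In a real system the three signals would be computed in parallel against a vector index, a keyword index, and an entity graph rather than in one loop.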
Breaking change: `search()` defaults have changed and deprecated parameters were removed. See the migration guide.

## Mem0 Skill Graph — In-Context Documentation for AI Agents

AI coding agents in Claude Code, Cursor, and Codex can now access Mem0 knowledge directly in their workflow — no doc searching required. Three interconnected skills launched:
- mem0 Core Skill — Complete Python and TypeScript SDK reference, REST API patterns, and integration guides for LangChain, CrewAI, Autogen, and more
- mem0-cli Skill — Terminal command reference, configuration walkthroughs, and CI/CD recipes
- mem0-vercel-ai-sdk Skill — Vercel AI SDK provider API, memory-augmented generation patterns, and multi-provider setup
## Official Mem0 CLI — Now on PyPI and npm

A full-featured command-line interface for Mem0, available in both Python and Node.js:
- Install: `pip install mem0-cli` or `npm install -g @mem0/cli`
- Full command suite — `add`, `search`, `list`, `get`, `update`, `delete`, `import`, `config`, `init`, `status`, `entity`, `event`
- Interactive setup — `mem0 init` with email verification or direct API key entry
- Works everywhere — Platform (Mem0 Cloud) and self-hosted OSS modes
- Scriptable — `--json` flag for CI/CD pipelines and automation
- Dual SDK — Same commands, same experience across Python and Node.js
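For automation, `--json` output can be piped into a small script. The payload shape below is an illustrative assumption, not the documented schema — inspect the CLI's actual output (or its docs) before relying on specific keys.

```python
import json

def top_memories(payload: str, min_score: float = 0.5) -> list:
    """Filter CLI JSON output down to memory texts above a score threshold."""
    data = json.loads(payload)
    return [r["memory"] for r in data["results"] if r["score"] >= min_score]

# Hypothetical output of `mem0 search "preferences" --json`:
raw = '{"results": [{"id": "m-1", "memory": "Prefers dark mode", "score": 0.92}]}'
```

In a CI pipeline the same filtering is often done with `jq`; a Python step is handy when the result feeds further logic.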
## OpenClaw Plugin — Production-Ready

The OpenClaw Mem0 plugin went from initial release to production-ready in one week (v1.0.0 → v1.0.4):
- Skills-based memory architecture — New extraction pipeline with skill-loader, batched extraction, and domain-aware memory triage
- Dream gate — Automatic memory consolidation during idle periods for higher-quality long-term recall
- Interactive CLI —
openclaw mem0 init,status,config,import, andeventcommands - Unified tool naming —
memory_addandmemory_deletereplace 4 legacy tools, matching the platform API - Security hardened — Path traversal protection, pinned dependencies, 329 tests across 10 files
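The "dream gate" internals are not documented here, but the idle-triggered pattern it describes can be sketched: consolidation runs only once no activity has been seen for some window. Everything below (class name, threshold, API) is hypothetical illustration, not the plugin's actual code.

```python
import time

class DreamGate:
    """Toy sketch of an idle gate: allow memory consolidation only after
    a quiet period, so it never competes with active conversation."""

    def __init__(self, idle_seconds):
        self.idle_seconds = idle_seconds
        self.last_activity = time.monotonic()

    def touch(self):
        """Record activity (e.g. a new message), resetting the idle clock."""
        self.last_activity = time.monotonic()

    def should_consolidate(self, now=None):
        """True once the session has been idle for the configured window."""
        now = time.monotonic() if now is None else now
        return now - self.last_activity >= self.idle_seconds
```

A real implementation would also track whether there is anything pending to consolidate, and would debounce so consolidation fires once per idle period.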
## Mem0 Plugin for Claude Code, Cursor, and Codex

Launched a unified Mem0 plugin across three major AI development environments — Claude Code and Cursor first (March 25), then Codex (April 2):
- 9 MCP memory tools — add, search, get, update, delete, bulk delete, and entity management via `mcp.mem0.ai`
- Lifecycle hooks — Automatic memory capture at session start, context compaction, task completion, and session end
- Cloud MCP server — Managed endpoint replaces local MCP and Smithery setup
- Streamable HTTP transport — New MCP transport protocol for real-time streaming
- Codex-specific skill — Dedicated skill in `mem0-plugin/skills/mem0-codex` for Codex workflows
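MCP tools are invoked over JSON-RPC 2.0 with the `tools/call` method, so a client's request to any of the memory tools has a predictable frame. The framing below follows the MCP specification; the specific tool name and argument keys are assumptions — list what the server actually exposes with a `tools/list` request first.

```python
def mcp_tool_call(request_id, tool, arguments):
    """Build an MCP tools/call request (MCP uses JSON-RPC 2.0 framing)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Hypothetical call to a search tool; real tool/argument names may differ.
req = mcp_tool_call(1, "search", {"query": "dietary preferences", "user_id": "alex"})
```

With the streamable HTTP transport, this JSON body is POSTed to the server endpoint and results may arrive as a stream rather than a single response.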
## Apache AGE, Turbopuffer, MiniMax, and pgvector for Node.js

Major expansion of the provider ecosystem:
- Apache AGE — New graph store support, bringing the total to 4 graph store backends (Neo4j, Memgraph, Kuzu, Apache AGE)
- Turbopuffer — New vector database provider for Python SDK
- MiniMax — New LLM provider with dedicated AWS Bedrock support
- pgvector for Node.js — PostgreSQL vector support added to the TypeScript OSS SDK
- Reasoning models — `reasoning_effort` parameter for OpenAI o1/o3-style models
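As a rough sketch, the new parameter would slot into an OSS-style LLM config. The provider/config nesting follows Mem0's usual OSS config shape, but the exact model name and the accepted values (OpenAI's parameter takes `"low"`, `"medium"`, or `"high"`) are assumptions — confirm against the SDK reference.

```python
# Hypothetical Mem0 OSS config enabling reasoning effort on an OpenAI
# reasoning model; key placement may differ in the released SDK.
config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "o3-mini",           # assumed model name
            "reasoning_effort": "medium", # the new parameter
        },
    }
}
# Typically passed to Memory.from_config(config) in the Python SDK.
```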