Model Context Protocol (MCP) provides a standardized way for AI agents to manage their own memory through Mem0, without manual API calls.
Why use MCP
When building AI applications, memory management often requires manual integration. MCP eliminates this complexity by offering:
- Universal compatibility: Works with any MCP-compatible client (Claude Desktop, Cursor, custom agents)
- Agent autonomy: AI agents decide when to save, search, or update memories
- Zero infrastructure: No servers to maintain - Mem0 handles everything
- Standardized protocol: One integration works across all your AI tools
Available tools
The MCP server exposes 9 memory tools to your AI client:

| Tool | Purpose |
|---|---|
| add_memory | Store conversations or facts |
| search_memories | Find relevant memories with filters |
| get_memories | List memories with pagination |
| update_memory | Modify existing memory content |
| delete_memory | Remove specific memories |
| delete_all_memories | Bulk delete memories |
| delete_entities | Remove user/agent/app entities |
| get_memory | Retrieve single memory by ID |
| list_entities | View stored entities |
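For illustration, here is a minimal sketch of calling these tools from Python with the official `mcp` client SDK. The package name passed to `uvx` and the tool argument names (`messages`, `user_id`, `query`) are assumptions based on typical Mem0 usage; discover the exact input schemas with `list_tools()`.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Assumed launch command and package name -- substitute your actual server command.
server = StdioServerParameters(
    command="uvx",
    args=["mem0-mcp"],                      # placeholder package name
    env={"MEM0_API_KEY": "your-api-key"},   # see Configuration below
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the exact tool names and input schemas the server exposes.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # Argument names below are illustrative; consult the schemas printed above.
            await session.call_tool(
                "add_memory",
                arguments={"messages": "I love tiramisu", "user_id": "alex"},
            )
            result = await session.call_tool(
                "search_memories",
                arguments={"query": "food preferences", "user_id": "alex"},
            )
            print(result.content)

asyncio.run(main())
```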
Deployment options
Choose the deployment method that fits your workflow.

Python package (recommended)
Install and run the server locally with uvx, then configure your MCP client to launch it.
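A typical client entry looks like the sketch below, for example in Claude Desktop's claude_desktop_config.json or Cursor's MCP settings. The package name passed to uvx (`mem0-mcp`) is an assumption; use the name given in the Mem0 README.

```json
{
  "mcpServers": {
    "mem0": {
      "command": "uvx",
      "args": ["mem0-mcp"],
      "env": {
        "MEM0_API_KEY": "your-mem0-api-key"
      }
    }
  }
}
```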
Docker container
Run the server as a container that exposes an HTTP endpoint, then configure your MCP client to connect over HTTP.
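A sketch of the container workflow, assuming a placeholder image name, port, and endpoint path; substitute the values published in the Mem0 docs:

```bash
# Placeholder image name and port -- replace with the published Mem0 MCP image.
docker run -d \
  -p 8080:8080 \
  -e MEM0_API_KEY=your-mem0-api-key \
  mem0/mem0-mcp-server
```

Clients that support HTTP transports can then point at the container, for example:

```json
{
  "mcpServers": {
    "mem0": {
      "url": "http://localhost:8080/mcp"
    }
  }
}
```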
Smithery
One-click setup with a managed service. Visit smithery.ai/server/@mem0ai/mem0-memory-mcp and:
- Select your AI client (Cursor, Claude Desktop, etc.)
- Configure your Mem0 API key
- Set your default user ID
- Enable graph memory (optional)
- Copy the generated configuration
Configuration
Required environment variables
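At minimum the server needs your Mem0 platform API key. `MEM0_API_KEY` is the variable name used across Mem0's SDKs; confirm it against the server's README.

```bash
export MEM0_API_KEY=your-mem0-api-key
```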
Optional variables
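The optional settings mirror the choices exposed in the Smithery setup (default user ID, graph memory). The variable names below are hypothetical placeholders for illustration only; check the server's README for the real names.

```bash
# Hypothetical variable names -- replace with the ones documented for the server.
export MEM0_DEFAULT_USER_ID=alex   # default user_id applied when the agent omits one
export MEM0_ENABLE_GRAPH=true      # opt in to graph memory for relationship-aware recall
```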
Test your setup with the Python agent
The included Pydantic AI agent provides an interactive REPL to test memory operations.

Testing different server configurations:
- Local server (default): run `python example/pydantic_ai_repl.py`
- Docker container: update the agent configuration (`example/config.json`) to connect to the container's HTTP endpoint
- Smithery remote: update the agent configuration to point at your Smithery-hosted server
How the testing works
- Configuration loads - Reads from `example/config.json` by default
- Server starts - Launches or connects to the Mem0 MCP server
- Agent connects - Pydantic AI agent (Mem0Guide) attaches to the server (sketched below)
- Interactive REPL - You get a chat interface to test all memory operations
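For reference, a minimal sketch of the "agent connects" step using Pydantic AI's MCP client support. The model name, server command, and package name are placeholders, and the exact API (`MCPServerStdio`, `mcp_servers`, `run_mcp_servers`) varies between pydantic-ai versions; treat this as illustrative rather than a copy of the bundled `example/pydantic_ai_repl.py`.

```python
import asyncio
from pydantic_ai import Agent
from pydantic_ai.mcp import MCPServerStdio

# Placeholder command/package -- mirror whatever example/config.json specifies.
mem0_server = MCPServerStdio("uvx", args=["mem0-mcp"])

# Any Pydantic AI-supported model works; "openai:gpt-4o" is just an example.
agent = Agent("openai:gpt-4o", mcp_servers=[mem0_server])

async def main() -> None:
    # Start the MCP server for the duration of the conversation.
    async with agent.run_mcp_servers():
        result = await agent.run("Remember that I love tiramisu")
        print(result.output)  # older pydantic-ai versions expose this as result.data

asyncio.run(main())
```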
Example interactions
Once connected, your AI agent can translate natural-language requests into calls to the memory tools above. Try these prompts:
- “Remember that I love tiramisu”
- “Search for my food preferences”
- “Update my project: the mobile app is now 80% complete”
- “Show me all memories about project Phoenix”
- “Delete memories from 2023”
What you can do
The Mem0 MCP server enables powerful memory capabilities for your AI applications:
- Health tracking: “I’m allergic to peanuts and shellfish” - Add new health information
- Research data: “Store these trial parameters: 200 participants, double-blind, placebo-controlled” - Save structured data
- Preference queries: “What do you know about my dietary preferences?” - Search and retrieve relevant memories
- Project updates: “Update my project status: the mobile app is now 80% complete” - Modify existing memory
- Data cleanup: “Delete all memories from 2023” - Bulk remove outdated information
- Topic overview: “Show me everything about Project Phoenix” - List all memories for a subject
Performance tips
- Enable graph memories for relationship-aware recall
- Use specific filters when searching large memory sets (see the example below)
- Batch operations when adding multiple memories
- Monitor memory usage in the Mem0 dashboard
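As an illustration of the "specific filters" tip, a filtered search_memories call could look like the sketch below. The argument names and the filter structure are assumptions modeled on Mem0's v2 search filters; check the tool's input schema (via list_tools) for the exact shape.

```json
{
  "name": "search_memories",
  "arguments": {
    "query": "dietary preferences",
    "filters": {
      "AND": [
        { "user_id": "alex" },
        { "created_at": { "gte": "2024-01-01" } }
      ]
    }
  }
}
```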
Best practices
- Start simple: Use the Python package for development
- Use wildcards: `user_id: "*"` to search across all users
- Test locally: Use the bundled Python agent to verify setup
- Monitor usage: Track memory operations in the dashboard
- Document patterns: Share successful prompt patterns with your team