Overview
🎉 Exciting news! We have added support for DeepSeek.
Mem0 (pronounced “mem-zero”) enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. Mem0 remembers user preferences and traits and continuously updates them over time, making it ideal for applications like customer support chatbots and AI assistants.
Understanding Mem0
Mem0, described as “The Memory Layer for your AI Agents,” leverages advanced LLMs and algorithms to detect, store, and retrieve memories from conversations and interactions. It identifies key information such as facts, user preferences, and other contextual details, and smartly updates memories over time by resolving contradictions, supporting the development of an AI agent that evolves with user interactions. When needed, Mem0 employs a smart search system to find memories, ranking them by relevance, importance, and recency so that only the most useful information is presented.
Mem0 provides multiple endpoints through which users can interact with their memories. The two main endpoints are `add` and `search`. The `add` endpoint lets users ingest their conversations into Mem0, storing them as memories. The `search` endpoint handles retrieval, allowing users to query their set of stored memories.
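As a rough illustration of this contract, here is a self-contained toy stand-in (not the Mem0 SDK) that stores per-user memories and ranks search results by naive word overlap; the real service performs LLM-based extraction and semantic ranking instead:

```python
# Illustrative stand-in for the add/search contract (not the Mem0 SDK).
# Real Mem0 extracts memories with an LLM and ranks results semantically;
# this toy version stores raw text and ranks by word overlap.

class ToyMemory:
    def __init__(self):
        self.memories = []  # list of (user_id, text)

    def add(self, text, user_id):
        """Ingest a piece of conversation as a memory."""
        self.memories.append((user_id, text))

    def search(self, query, user_id):
        """Return the user's memories ranked by naive word overlap."""
        q = set(query.lower().split())
        scored = [
            (len(q & set(text.lower().split())), text)
            for uid, text in self.memories
            if uid == user_id
        ]
        return [text for score, text in sorted(scored, reverse=True) if score > 0]

m = ToyMemory()
m.add("Alice prefers vegetarian food", user_id="alice")
m.add("Alice lives in Berlin", user_id="alice")
print(m.search("what food does alice like", user_id="alice"))
# -> ['Alice prefers vegetarian food', 'Alice lives in Berlin']
```

The key point is the separation of concerns: `add` only ingests, `search` only retrieves and ranks, and every call is scoped to a `user_id`.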
ADD Memories
Architecture diagram illustrating the process of adding memories.
When a user has a conversation, Mem0 uses an LLM to understand and extract important information. This model is designed to capture detailed information while maintaining the full context of the conversation. Here’s how the process works:
- First, the LLM extracts two key elements:
- Relevant memories
- Important entities and their relationships
- The system then compares this new information with existing data to identify contradictions, if present.
- A second LLM evaluates the new information and decides whether to:
- Add it as new data
- Update existing information
- Delete outdated information
- These changes are automatically made to two databases:
- A vector database (for storing memories)
- A graph database (for storing relationships)
This entire process happens continuously with each user interaction, ensuring that the system always maintains an up-to-date understanding of the user’s information.
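The decision step above can be sketched as follows. In Mem0 an LLM makes the add/update/delete call; in this self-contained sketch a simple subject-matching heuristic stands in for it (the delete case is omitted for brevity), so only the control flow, not the actual decision logic, reflects the system:

```python
# Sketch of the memory-update step. In Mem0 an LLM makes this decision;
# here a keyword heuristic stands in so the control flow is runnable.

def decide_operation(new_fact, existing_facts):
    """Return ("ADD" | "UPDATE" | "NONE", index) for a candidate fact."""
    new_subject = new_fact.split(" is ")[0]
    for i, fact in enumerate(existing_facts):
        if fact.split(" is ")[0] == new_subject:
            if fact == new_fact:
                return ("NONE", i)    # already known, no change needed
            return ("UPDATE", i)      # same subject, contradicting value
    return ("ADD", None)              # genuinely new information

def apply_fact(new_fact, existing_facts):
    op, i = decide_operation(new_fact, existing_facts)
    if op == "ADD":
        existing_facts.append(new_fact)
    elif op == "UPDATE":
        existing_facts[i] = new_fact  # resolve the contradiction in place
    return existing_facts

facts = ["favorite color is blue"]
apply_fact("favorite color is green", facts)  # contradiction -> UPDATE
apply_fact("home city is Berlin", facts)      # new subject   -> ADD
print(facts)  # -> ['favorite color is green', 'home city is Berlin']
```

In the real pipeline the resulting operations are then written to both the vector store and the graph store.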
SEARCH Memories
Architecture diagram illustrating the memory search process.
When a user asks Mem0 a question, the system uses smart memory lookup to find relevant information. Here’s how it works:
- The user submits a question to Mem0
- The LLM processes this question in two ways:
- It rewrites the question to improve retrieval from the vector database
- It identifies important entities and their relationships from the question
- The system then performs two parallel searches:
- It searches the vector database using the rewritten question and semantic search
- It queries the graph database using the identified entities and relationships
- Finally, Mem0 combines the results from both databases to provide a complete answer to the user’s question
This approach ensures that Mem0 can find and return all relevant information, whether it’s stored as memories in the vector database or as relationships in the graph database.
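A minimal sketch of the two parallel lookups and the merge, using hand-written toy embeddings and a triple list in place of real vector and graph stores (none of these values or names come from Mem0 internals):

```python
# Sketch of the dual retrieval path: semantic search over a vector store
# plus entity lookup in a graph store, with the results merged.
# The embeddings and stores here are toy stand-ins, not Mem0 internals.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# toy "vector database": memory text -> precomputed embedding
vector_db = {
    "user likes hiking": [0.9, 0.1, 0.0],
    "user works at Acme": [0.1, 0.9, 0.2],
}

# toy "graph database": (subject, relation, object) triples
graph_db = [
    ("user", "works_at", "Acme"),
    ("user", "enjoys", "hiking"),
]

def dual_search(query_embedding, entities):
    # 1) semantic search against the vector store
    vector_hits = [
        text for text, emb in vector_db.items()
        if cosine(query_embedding, emb) > 0.7
    ]
    # 2) graph lookup for the entities found in the question
    graph_hits = [
        f"{s} {r} {o}" for s, r, o in graph_db
        if s in entities or o in entities
    ]
    # 3) merge both result sets into one answer context
    return vector_hits + graph_hits

print(dual_search([0.2, 0.95, 0.1], entities={"Acme"}))
```

Because the two searches are independent, results from either store can answer the question even when the other store has nothing relevant.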
Getting Started
Mem0 offers two powerful ways to leverage our technology: our managed platform and our open source solution.
- Quickstart: Integrate Mem0 in a few lines of code
- Playground: See Mem0 in action
- Examples: See what you can build with Mem0
Need help?
If you have any questions, please feel free to reach out to us using one of the following methods: