Mem0 Open Source Overview
Mem0 Open Source delivers the same adaptive memory engine as the platform, packaged for teams that need to run everything on their own infrastructure. You own the stack, the data, and the customizations.

What Mem0 OSS provides
- Full control: Tune every component, from LLMs to vector stores, inside your environment.
- Offline ready: Keep memory on your own network when compliance or privacy demands it.
- Extendable codebase: Fork the repo, add providers, and ship custom automations.
Two ways to run Mem0 OSS: as a library inside your app (Python or Node), or as a self-hosted server with a dashboard, per-user API keys, and a request audit log.
Choose your path
- Self-hosted setup: Run `make bootstrap` to launch the server + dashboard, create an admin, and issue your first API key.
- Python Quickstart: Bootstrap the CLI and verify the add/search loop.
- Node.js Quickstart: Install the TypeScript SDK and run the starter script.
- Configure Components: Set up the LLM, embedder, vector store, and reranker.
- Tune Retrieval & Rerankers: Hybrid retrieval and reranker controls.
- Memory Evaluation: Benchmarks and how Mem0 is tested.
What you get with Mem0 OSS
| Benefit | What you get |
|---|---|
| Full infrastructure control | Host on your own servers with complete access to configuration and deployment. |
| Complete customization | Modify the implementation, extend functionality, and tailor it to your stack. |
| Local development | Perfect for development, testing, and offline environments. |
| No vendor lock-in | Keep ownership of your data, providers, and pipelines. |
| Community driven | Contribute improvements and tap into a growing ecosystem. |
Default components
Library defaults (when you import Mem0 and call `Memory()` directly):
- LLM: OpenAI `gpt-5-mini` (via `OPENAI_API_KEY`)
- Embeddings: OpenAI `text-embedding-3-small`
- Vector store: Local Qdrant at `/tmp/qdrant`
- History store: SQLite at `~/.mem0/history.db`
- Reranker: Disabled until configured via `Memory.from_config`

Self-hosted server defaults (the `server/` Docker Compose stack):
- LLM: OpenAI `gpt-4.1-nano-2025-04-14` (override with `MEM0_DEFAULT_LLM_MODEL`)
- Embeddings: OpenAI `text-embedding-3-small` (override with `MEM0_DEFAULT_EMBEDDER_MODEL`)
- Vector store: Postgres + pgvector
- Bundled providers: `openai`, `anthropic`, `gemini` (switch from the Configuration page)
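As a hedged illustration, a `Memory.from_config` dict that pins the library defaults explicitly might look like this. The provider/config nesting follows mem0's documented configuration schema, but exact keys can differ between versions, so treat this as a sketch rather than a definitive reference.

```python
# Sketch of a Memory.from_config configuration that makes the library
# defaults explicit. Key names follow mem0's provider/config schema;
# verify against your installed version before relying on them.
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-5-mini"},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"path": "/tmp/qdrant"},  # local on-disk Qdrant
    },
}

# Then, with the SDK installed:
# from mem0 import Memory
# m = Memory.from_config(config)
```

Swapping a provider is a matter of changing the `provider` string and its nested `config`; a reranker, disabled by default, would be enabled through the same config route.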