Self-Host Mem0 with Full Control
Mem0 Open Source gives you a powerful, self-hosted memory layer for AI agents. Deploy on your infrastructure, customize every component, and maintain complete data ownership.

Get Started
Choose your preferred SDK and get Mem0 running locally in minutes:

Python Quickstart
Install and configure Mem0 OSS with Python in 10 minutes
Node.js Quickstart
Set up Mem0 OSS with Node.js and TypeScript support
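As a taste of the Python path, here is a minimal, hedged sketch of storing and searching memories with the OSS `Memory` client (assumes `pip install mem0ai` and an `OPENAI_API_KEY` in your environment; the exact result shape may vary by version):

```python
import os

# A short conversation to extract memories from.
messages = [
    {"role": "user", "content": "I'm vegetarian and allergic to nuts."},
    {"role": "assistant", "content": "Noted: no meat and no nuts."},
]

if os.environ.get("OPENAI_API_KEY"):
    from mem0 import Memory

    m = Memory()  # default components: OpenAI LLM + local Qdrant
    m.add(messages, user_id="alice")  # extract and store memories
    results = m.search("What can Alice eat?", user_id="alice")
    for hit in results["results"]:
        print(hit["memory"])
else:
    print("Set OPENAI_API_KEY to run this example.")
```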
Explore OSS Capabilities
Mem0 Open Source offers powerful features for building production-grade AI applications with memory. From graph-based knowledge structures to flexible component configuration, you have full control over how memory works in your system.

Graph Memory
Build relationship-aware memory with knowledge graph capabilities
Component Configuration
Choose your LLM, vector database, embedding model, and rerankers
REST API
Build high-throughput pipelines with async clients and REST endpoints
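For the high-throughput case, a hedged sketch of concurrent writes, assuming the SDK exposes an `AsyncMemory` client that mirrors `Memory`'s `add`/`search` methods (and that `OPENAI_API_KEY` is set):

```python
import asyncio
import os


async def ingest(user_ids):
    # Assumed async client; method names mirror the sync Memory API.
    from mem0 import AsyncMemory

    m = AsyncMemory()
    # Fan out concurrent writes instead of blocking on each one.
    await asyncio.gather(
        *(m.add(f"User {u} opened the app", user_id=u) for u in user_ids)
    )


if os.environ.get("OPENAI_API_KEY"):
    asyncio.run(ingest(["alice", "bob"]))
```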
Platform vs OSS? See our comparison guide to understand which deployment option fits your use case.
Why Choose Open Source?
| Benefit | What You Get |
|---|---|
| Full Infrastructure Control | Host on your own servers with complete access to configuration and deployment |
| Complete Customization | Modify implementation, extend functionality, and adapt to your specific needs |
| Local Development | Perfect for development, testing, and air-gapped environments |
| No Vendor Lock-in | Own your data, choose your stack, and maintain full independence |
| Community Driven | Contribute to and benefit from active community improvements and integrations |
Looking for production scale? Mem0 Platform offers managed infrastructure with advanced features like webhooks, multimodal support, and enterprise support.
Need help? Check out our GitHub repository for source code, issues, and community discussions.
Default Components
No configuration needed to get started. Mem0 works out of the box with sensible defaults:
- LLM: OpenAI `gpt-4.1-nano-2025-04-14` via your `OPENAI_API_KEY`
- Embeddings: OpenAI `text-embedding-3-small` (1536 dimensions)
- Vector store: local Qdrant instance storing data at `/tmp/qdrant`
- History storage: SQLite database at `~/.mem0/history.db`
- Reranker: disabled unless you configure one

Override any of these defaults with `Memory.from_config`.
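A hedged sketch of overriding the defaults with `Memory.from_config` (the nested `provider`/`config` layout follows Mem0's configuration pattern; model names and the Qdrant path are illustrative):

```python
import os

# Each component is selected by provider name plus a provider-specific config.
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4.1-nano-2025-04-14", "temperature": 0.1},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"path": "/tmp/qdrant"},  # local on-disk Qdrant
    },
}

if os.environ.get("OPENAI_API_KEY"):
    from mem0 import Memory

    m = Memory.from_config(config)
```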