Mem0 Open Source Overview

Mem0 Open Source delivers the same adaptive memory engine as the platform, but packaged for teams that need to run everything on their own infrastructure. You own the stack, the data, and the customizations.

What Mem0 OSS provides

  • Full control: Tune every component, from LLMs to vector stores, inside your environment.
  • Offline ready: Keep memory on your own network when compliance or privacy demands it.
  • Extendable codebase: Fork the repo, add providers, and ship custom automations.
Two ways to run Mem0 OSS: as a library inside your app (Python or Node), or as a self-hosted server with a dashboard, per-user API keys, and a request audit log.

Choose your path

Self-hosted setup

Run make bootstrap to launch the server + dashboard, create an admin, and issue your first API key.

Python Quickstart

Bootstrap CLI and verify add/search loop.

Node.js Quickstart

Install TypeScript SDK and run starter script.

Configure Components

LLM, embedder, vector store, reranker setup.

Tune Retrieval & Rerankers

Hybrid retrieval and reranker controls.

Memory Evaluation

Benchmarks and how Mem0 is tested.
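Both quickstarts above converge on the same add/search loop. A minimal sketch of that loop, assuming the mem0ai package is installed, OPENAI_API_KEY is exported, and using illustrative memory text (see the Python Quickstart for the verified walkthrough):

```python
from mem0 import Memory

# Library defaults: OpenAI LLM + embeddings, local Qdrant, SQLite history
m = Memory()

# Store a memory for a user, then retrieve it by semantic search
m.add("Prefers staging deploys on Fridays", user_id="alice")
results = m.search("When does Alice like to deploy?", user_id="alice")
for hit in results["results"]:
    print(hit["memory"])
```

The same two calls exist in the TypeScript SDK, so the loop carries over to the Node.js path unchanged in spirit.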
Need a managed alternative? Compare hosting models in the Platform vs OSS guide or switch tabs to the Platform documentation.
What you get with each benefit:
  • Full infrastructure control: Host on your own servers with complete access to configuration and deployment.
  • Complete customization: Modify the implementation, extend functionality, and tailor it to your stack.
  • Local development: Perfect for development, testing, and offline environments.
  • No vendor lock-in: Keep ownership of your data, providers, and pipelines.
  • Community driven: Contribute improvements and tap into a growing ecosystem.

Default components

Library defaults (when you import Mem0 and call Memory() directly):
  • LLM: OpenAI gpt-5-mini (via OPENAI_API_KEY)
  • Embeddings: OpenAI text-embedding-3-small
  • Vector store: Local Qdrant at /tmp/qdrant
  • History store: SQLite at ~/.mem0/history.db
  • Reranker: Disabled until configured
Override any component with Memory.from_config.
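Overriding a default comes down to passing a config dict to Memory.from_config. A sketch under assumptions (the provider names, model IDs, and Qdrant host/port here are illustrative; see Configure Components for the exact schema):

```python
from mem0 import Memory

# Illustrative overrides: a different OpenAI model, and a networked
# Qdrant server instead of the local on-disk default.
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini", "temperature": 0.1},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
}

m = Memory.from_config(config)
```

Any component left out of the dict keeps its library default, so you only need to spell out the pieces you are changing.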
Self-hosted server defaults (the server/ Docker Compose stack):
  • LLM: OpenAI gpt-4.1-nano-2025-04-14 (override with MEM0_DEFAULT_LLM_MODEL)
  • Embeddings: OpenAI text-embedding-3-small (override with MEM0_DEFAULT_EMBEDDER_MODEL)
  • Vector store: Postgres + pgvector
  • Bundled providers: openai, anthropic, gemini — switch from the Configuration page
See Self-Hosted Setup for the full provider list and how to extend it.
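For the server stack, the model overrides above are plain environment variables read by the Docker Compose services. A hedged sketch of a .env fragment (variable names are from this page; the key value is a placeholder):

```shell
# Override the server's default models before `docker compose up`
MEM0_DEFAULT_LLM_MODEL=gpt-4.1-nano-2025-04-14
MEM0_DEFAULT_EMBEDDER_MODEL=text-embedding-3-small

# Required for the bundled openai provider
OPENAI_API_KEY=your-openai-key
```

Switching to one of the other bundled providers (anthropic, gemini) happens on the Configuration page rather than through these variables.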

Keep going

Review Platform vs OSS

Run the Python Quickstart