Quick Start
The simplest setup uses OpenAI defaults:
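A sketch of that default setup as an explicit config. The model names below are illustrative choices, not guaranteed library defaults; with no config at all, Mem0 falls back to its own OpenAI defaults, and `OPENAI_API_KEY` must be set in the environment:

```python
# Quick-start sketch: an explicit config roughly equivalent to the
# OpenAI defaults (model names are illustrative, not guaranteed).
config = {
    "llm": {"provider": "openai", "config": {"model": "gpt-4o-mini"}},
    "embedder": {"provider": "openai", "config": {"model": "text-embedding-3-small"}},
}

# Usage (requires the mem0ai package and OPENAI_API_KEY in the environment):
#   from mem0 import Memory
#   m = Memory.from_config(config)
#   m.add("I prefer dark roast coffee", user_id="alice")
#   results = m.search("coffee preferences", user_id="alice")
```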
Configuration Components

Mem0 has four configurable components, each with its own set of supported providers and detailed configuration options:

- LLMs: configure the language model for memory extraction and processing (17 providers, including OpenAI, Anthropic, Ollama, and Groq)
- Vector Databases: choose where to store and retrieve memory embeddings (25+ databases, including Qdrant, Chroma, Pinecone, and Weaviate)
- Embedding Models: select the model that converts memories into vector embeddings (9 providers, including OpenAI, HuggingFace, and Ollama)
- Rerankers: improve search relevance by re-scoring retrieved memories (4 models, including Cohere, Zero Entropy, and LLM-based)
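Putting the four components together, a single config dict might look like the sketch below. Every provider and model name here is an example choice; each component accepts many alternatives:

```python
# One config touching all four components (all values are example
# choices; each component supports many other providers).
config = {
    "llm": {"provider": "openai", "config": {"model": "gpt-4o-mini", "temperature": 0.1}},
    "vector_store": {"provider": "qdrant", "config": {"host": "localhost", "port": 6333}},
    "embedder": {"provider": "openai", "config": {"model": "text-embedding-3-small"}},
    "reranker": {"provider": "cohere", "config": {"model": "rerank-english-v3.0", "top_k": 5}},
}
# Build the instance with Memory.from_config(config).
```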
Configuration Recipes
Production Setup with Qdrant
For production deployments, use a dedicated vector store:

1. Start Qdrant
2. Configure Mem0
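A sketch of the two steps. The Docker command and the port are typical Qdrant defaults, and the collection name is an example:

```python
# Step 1: start Qdrant, e.g. in a shell:
#   docker run -d -p 6333:6333 qdrant/qdrant
#
# Step 2: point Mem0 at it.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,                   # Qdrant's default HTTP port
            "collection_name": "memories",  # example collection name
        },
    },
}
# Build with Memory.from_config(config) (requires mem0ai and a running Qdrant).
```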
Fully Local Setup
Run Mem0 completely offline with Ollama (no external APIs):

Local Setup with Ollama
Step-by-step guide to run Mem0 with local LLM and embeddings
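A fully local sketch: Ollama serves both the LLM and the embeddings, so no external API key is needed. The model names are examples; any models pulled into your local Ollama instance will do:

```python
# Fully local sketch (model names are examples; pull them into Ollama
# first, e.g. `ollama pull llama3.1` and `ollama pull nomic-embed-text`).
config = {
    "llm": {
        "provider": "ollama",
        "config": {"model": "llama3.1", "ollama_base_url": "http://localhost:11434"},
    },
    "embedder": {
        "provider": "ollama",
        "config": {"model": "nomic-embed-text", "ollama_base_url": "http://localhost:11434"},
    },
}
```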
Multi-Cloud Setup
Mix providers from different clouds:
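A multi-cloud sketch: the LLM from one vendor, embeddings from another, and vectors in a third-party managed store. All provider, model, and key values below are illustrative placeholders:

```python
# Multi-cloud sketch (every value is an illustrative placeholder).
config = {
    "llm": {"provider": "anthropic", "config": {"model": "claude-3-5-sonnet-20241022"}},
    "embedder": {"provider": "openai", "config": {"model": "text-embedding-3-small"}},
    "vector_store": {
        "provider": "pinecone",
        "config": {"api_key": "your-key", "collection_name": "memories"},
    },
}
```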
Graph Memory Setup
Enable relationship tracking with Neo4j:

Graph Memory Guide
Learn how to use graph memory for relationship-based retrieval
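A graph-store config sketch. The keys mirror the Graph Store Configuration table below; the URL and credentials are placeholders for your own Neo4j instance:

```python
# Graph memory sketch (URL and credentials are placeholders).
config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://your-instance.databases.neo4j.io",
            "username": "neo4j",
            "password": "your-password",
        },
    },
}
```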
Advanced Configuration
Custom Prompts
Override default prompts for memory processing:

- Custom Fact Extraction: customize how memories are extracted from conversations
- Custom Memory Updates: control how existing memories are modified
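A sketch using the two prompt options listed in the General Configuration table below. The prompt text itself is just an example:

```python
# Custom-prompt sketch (prompt wording is an example; the two keys
# come from the General Configuration table).
config = {
    "custom_fact_extraction_prompt": (
        "Extract only durable user preferences and biographical facts "
        "from the conversation. Ignore small talk."
    ),
    "custom_update_memory_prompt": (
        "When a new fact contradicts an existing memory, replace the "
        "old memory rather than keeping both."
    ),
}
```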
Reranking for Better Search
Add reranking to improve search relevance:

Reranking Guide
Learn how reranking improves memory search accuracy
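A reranker config sketch; the values mirror the Reranker Configuration table below, and the API key is a placeholder:

```python
# Reranking sketch (values mirror the Reranker Configuration table).
config = {
    "reranker": {
        "provider": "cohere",
        "config": {
            "model": "rerank-english-v3.0",
            "top_k": 5,              # number of re-scored results to keep
            "api_key": "your-key",   # placeholder
        },
    },
}
```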
History Database
Configure where operation history is stored:
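A sketch using the history_db_path option from the General Configuration table; the path below is an example location:

```python
# History-database sketch (the path is an example location).
config = {
    "history_db_path": "/var/lib/mem0/history.db",
}
```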
All Configuration Options

LLM Configuration
| Parameter | Description | Provider |
|---|---|---|
| provider | LLM provider (e.g., "openai", "anthropic") | All |
| model | Model to use | All |
| temperature | Temperature of the model (0.0-2.0) | All |
| api_key | API key to use | Most |
| max_tokens | Maximum tokens to generate | All |
| top_p | Nucleus sampling threshold | All |
| top_k | Top-k sampling parameter | Some |
| ollama_base_url | Base URL for Ollama API | Ollama |
| openai_base_url | Base URL for OpenAI API | OpenAI |
| azure_kwargs | Azure-specific initialization args | Azure OpenAI |
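An LLM config exercising several parameters from the table above; the model name and key are placeholders:

```python
# LLM config sketch using several parameters from the table above.
config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",  # illustrative model name
            "temperature": 0.1,      # valid range 0.0-2.0
            "max_tokens": 2000,
            "top_p": 0.9,
            "api_key": "your-key",   # or rely on the environment variable
        },
    },
}
```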
Vector Store Configuration
Common parameters (provider-specific options vary):
See all 25+ vector stores: Vector Databases Overview
| Parameter | Description | Example |
|---|---|---|
| provider | Vector store provider | "qdrant" |
| host | Host address | "localhost" |
| port | Port number | 6333 |
| collection_name | Collection/index name | "memories" |
| api_key | API key (for cloud stores) | "your-key" |
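A vector-store config using the common parameters above. Provider-specific options vary, so check the page for your chosen store; the api_key line applies to managed/cloud deployments:

```python
# Vector-store sketch using the common parameters from the table.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
            "collection_name": "memories",
            # "api_key": "your-key",  # uncomment for managed/cloud stores
        },
    },
}
```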
Embedder Configuration
| Parameter | Description | Default |
|---|---|---|
| provider | Embedding provider | "openai" |
| model | Embedding model to use | "text-embedding-3-small" |
| api_key | API key for embedding service | None |
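An embedder sketch swapping the OpenAI default for a HuggingFace model (the model name is illustrative):

```python
# Embedder sketch: swap in a HuggingFace model (name is illustrative).
config = {
    "embedder": {
        "provider": "huggingface",
        "config": {"model": "sentence-transformers/all-MiniLM-L6-v2"},
    },
}
```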
Reranker Configuration
| Parameter | Description | Example |
|---|---|---|
| provider | Reranker provider | "cohere" |
| model | Reranker model to use | "rerank-english-v3.0" |
| top_k | Number of results to return | 5 |
| api_key | API key for reranker service | "your-key" |
Graph Store Configuration
| Parameter | Description | Example |
|---|---|---|
| provider | Graph store provider | "neo4j" |
| url | Connection URL | "neo4j+s://…" |
| username | Authentication username | "neo4j" |
| password | Authentication password | "your-password" |
General Configuration
| Parameter | Description | Default |
|---|---|---|
| history_db_path | Path to the history database | "/history.db" |
| custom_fact_extraction_prompt | Custom prompt for memory extraction | None |
| custom_update_memory_prompt | Custom prompt for memory updates | None |