Mem0 is highly configurable, allowing you to customize every component of your memory system. Choose from 51+ supported providers across LLMs, vector databases, embedders, and rerankers.

Quick Start

The simplest setup uses OpenAI defaults:

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"
m = Memory()
```

For production or custom setups, configure specific components:
```python
from mem0 import Memory

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333
        }
    },
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4.1-nano-2025-04-14",
            "temperature": 0.1
        }
    }
}

m = Memory.from_config(config)
```

Configuration Components

Mem0 has four configurable components: the LLM, vector store, embedder, and reranker. Each has its own set of supported providers and detailed configuration options, covered on the corresponding provider pages.
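As a sketch of how the four components fit together, the config below sets each one explicitly. The provider choices and model names here are illustrative, not recommendations; each provider block accepts its own options, documented on the corresponding provider page.

```python
from mem0 import Memory

# Illustrative only: swap any provider for one of the 51+ supported options.
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4.1-nano-2025-04-14", "temperature": 0.1},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "embedder": {
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "rerank": {
        "provider": "cohere",
        "config": {"model": "rerank-english-v3.0", "top_k": 5},
    },
}

m = Memory.from_config(config)
```

Components you omit fall back to the OpenAI defaults shown in the Quick Start.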

Configuration Recipes

Production Setup with Qdrant

For production deployments, use a dedicated vector store:
1. Start Qdrant:

```shell
docker pull qdrant/qdrant

docker run -p 6333:6333 -p 6334:6334 \
    -v $(pwd)/qdrant_storage:/qdrant/storage:z \
    qdrant/qdrant
```

2. Configure Mem0:

```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"

config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333
        }
    }
}

m = Memory.from_config(config)
```

Fully Local Setup

Run Mem0 completely offline with Ollama (no external APIs):

Local Setup with Ollama

Step-by-step guide to run Mem0 with local LLM and embeddings
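As a minimal sketch of such a setup, the config below points both the LLM and the embedder at a local Ollama server. It assumes Ollama is running on its default port and that the named models have already been pulled; the model names (`llama3.1`, `nomic-embed-text`) are illustrative, and any locally available models can be substituted.

```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1",  # any locally pulled chat model
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",  # any locally pulled embedding model
            "ollama_base_url": "http://localhost:11434",
        },
    },
}

m = Memory.from_config(config)
```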

Multi-Cloud Setup

Mix providers from different clouds:
```python
config = {
    "llm": {
        "provider": "azure_openai",
        "config": {
            "api_key": "azure-key",
            "deployment_name": "gpt-4.1-nano-2025-04-14"
        }
    },
    "vector_store": {
        "provider": "pinecone",
        "config": {
            "api_key": "pinecone-key",
            "index_name": "mem0"
        }
    },
    "embedder": {
        "provider": "vertexai",
        "config": {
            "model": "textembedding-gecko@003"
        }
    }
}
```

Graph Memory Setup

Enable relationship tracking with Neo4j:
```python
config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://your-instance.databases.neo4j.io",
            "username": "neo4j",
            "password": "your-password"
        }
    }
}

m = Memory.from_config(config)
```

Graph Memory Guide

Learn how to use graph memory for relationship-based retrieval

Advanced Configuration

Custom Prompts

Override default prompts for memory processing:
```python
config = {
    "custom_fact_extraction_prompt": """
    Extract key facts from the conversation.
    Focus on: preferences, decisions, and context.
    Output as a single sentence.
    """,
    "custom_update_memory_prompt": """
    Update the existing memory with new information.
    Preserve important context from the old memory.
    """
}
```
Reranking

Add reranking to improve search relevance:

```python
config = {
    "rerank": {
        "provider": "cohere",
        "config": {
            "model": "rerank-english-v3.0",
            "top_k": 5
        }
    }
}
```

Reranking Guide

Learn how reranking improves memory search accuracy

History Database

Configure where operation history is stored:
```python
config = {
    "history_db_path": "/custom/path/to/history.db"
}
```

All Configuration Options

LLM

| Parameter | Description | Provider |
| --- | --- | --- |
| provider | LLM provider (e.g., "openai", "anthropic") | All |
| model | Model to use | All |
| temperature | Temperature of the model (0.0-2.0) | All |
| api_key | API key to use | Most |
| max_tokens | Maximum tokens to generate | All |
| top_p | Nucleus sampling threshold | All |
| top_k | Top-k sampling parameter | Some |
| ollama_base_url | Base URL for the Ollama API | Ollama |
| openai_base_url | Base URL for the OpenAI API | OpenAI |
| azure_kwargs | Azure-specific initialization args | Azure OpenAI |

See all 17 LLM providers: LLMs Overview
Vector Store

Common parameters (provider-specific options vary):

| Parameter | Description | Example |
| --- | --- | --- |
| provider | Vector store provider | "qdrant" |
| host | Host address | "localhost" |
| port | Port number | 6333 |
| collection_name | Collection/index name | "memories" |
| api_key | API key (for cloud stores) | "your-key" |

See all 25+ vector stores: Vector Databases Overview
Embedder

| Parameter | Description | Default |
| --- | --- | --- |
| provider | Embedding provider | "openai" |
| model | Embedding model to use | "text-embedding-3-small" |
| api_key | API key for embedding service | None |

See all 9 embedder providers: Embedders Overview
Reranker

| Parameter | Description | Example |
| --- | --- | --- |
| provider | Reranker provider | "cohere" |
| model | Reranker model to use | "rerank-english-v3.0" |
| top_k | Number of results to return | 5 |
| api_key | API key for reranker service | "your-key" |

See all reranker options: Rerankers Overview
Graph Store

| Parameter | Description | Example |
| --- | --- | --- |
| provider | Graph store provider | "neo4j" |
| url | Connection URL | "neo4j+s://…" |
| username | Authentication username | "neo4j" |
| password | Authentication password | "your-password" |

Learn more: Graph Memory Overview
General

| Parameter | Description | Default |
| --- | --- | --- |
| history_db_path | Path to the history database | "~/.mem0/history.db" |
| custom_fact_extraction_prompt | Custom prompt for memory extraction | None |
| custom_update_memory_prompt | Custom prompt for memory updates | None |

Next Steps
