
You can use embedding models from Ollama to run Mem0 locally. Make sure Ollama is installed and running before using it as the embedder.

Usage

import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your_api_key"  # For the LLM; embeddings are handled by Ollama

# Route embeddings through a locally running Ollama instance
config = {
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about thriller movies? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="john")  # Extract facts from the conversation and store them as memories
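
Once memories are stored, the same Memory instance can retrieve them. A minimal follow-up sketch (the exact shape of the returned results can vary between Mem0 versions):

related = m.search("What movies does john like?", user_id="john")
print(related)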

Config

Here are the parameters available for configuring the Ollama embedder:
Parameter          Description                            Default Value
model              The name of the Ollama model to use    nomic-embed-text
embedding_dims     Dimensions of the embedding model      512
ollama_base_url    Base URL for the Ollama connection     None
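
As an illustration, a config that sets all three parameters explicitly might look like the sketch below. The embedding_dims value of 1024 matches what mxbai-embed-large produces, and the base URL is Ollama's default local endpoint; both are assumptions about your setup.

config = {
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large",
            "embedding_dims": 1024,  # mxbai-embed-large outputs 1024-dimensional vectors
            "ollama_base_url": "http://localhost:11434"  # Ollama's default local address (assumed setup)
        }
    }
}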