You can use embedding models from Ollama to run Mem0 locally.

Usage

import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your_api_key" # For LLM

config = {
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
}

m = Memory.from_config(config)
m.add("I'm visiting Paris", user_id="john")

Config

Here are the parameters available for configuring the Ollama embedder:

Parameter          Description                                 Default Value
model              Name of the Ollama embedding model to use   nomic-embed-text
embedding_dims     Dimensions of the embedding vectors         512
ollama_base_url    Base URL for the Ollama server              None
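
For reference, a config that sets all three parameters might look like this (the dimension matches mxbai-embed-large's 1024-dimensional output, and the URL assumes a local Ollama server on its default port):

config = {
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large",
            "embedding_dims": 1024,                        # mxbai-embed-large outputs 1024-dim vectors
            "ollama_base_url": "http://localhost:11434"    # default local Ollama endpoint
        }
    }
}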