šŸ” Mem0 is now SOC 2 and HIPAA compliant! We're committed to the highest standards of data security and privacy, enabling secure memory for enterprises, healthcare, and beyond. Learn more

You can use LLMs from Ollama to run Mem0 locally. These models support tool calling.

Usage

import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"  # required by the default OpenAI embedder

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mixtral:8x7b",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller movies? They can be quite engaging."},
    {"role": "user", "content": "I’m not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
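
Once memories are added, you can query them back. The sketch below assumes the same m instance and user_id as above; the exact shape of the returned results can vary between Mem0 versions.

# Retrieve memories relevant to a query for this user
related = m.search("What kind of movies does Alice like?", user_id="alice")
print(related)

# List everything stored for this user (useful for debugging)
all_memories = m.get_all(user_id="alice")
print(all_memories)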

Config

All available parameters for the ollama config are listed in the Master List of All Params in Config.
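
If your Ollama server is not on the default local address, or you want to run the embedder locally as well so no OpenAI key is needed, a fuller config looks roughly like the sketch below. The ollama_base_url field and the "ollama" embedder provider with the nomic-embed-text model are assumptions; confirm the exact field names against the master parameter list above.

# Sketch: fully local setup where Ollama serves both the LLM and the embedder.
# Field names marked as assumed should be checked against the parameter list.
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mixtral:8x7b",
            "temperature": 0.1,
            "max_tokens": 2000,
            "ollama_base_url": "http://localhost:11434",  # assumed field name
        }
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",  # assumed embedding model name
        }
    }
}

m = Memory.from_config(config)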