Supported LLMs
Ollama
You can use LLMs from Ollama to run Mem0 locally. Make sure to pick a model that supports tool calling, since Mem0 relies on tool calls to extract and update memories.
Usage
```python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"  # used by the default OpenAI embedder

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mixtral:8x7b",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    }
}

m = Memory.from_config(config)
m.add("Likes to play cricket on weekends", user_id="alice", metadata={"category": "hobbies"})
```
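Once memories are stored, you can query them from the same `Memory` instance. A minimal sketch using Mem0's standard `search` and `get_all` methods (the exact return shape varies across Mem0 versions, so the example just prints the raw result):

```python
# Retrieve memories relevant to a query for a given user.
# Recent Mem0 versions return a dict with a "results" list;
# older versions return a plain list.
related = m.search("What does alice do on weekends?", user_id="alice")
print(related)

# List everything stored for the user
all_memories = m.get_all(user_id="alice")
print(all_memories)
```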
Config
All available parameters for the `ollama` config are present in the Master List of All Params in Config.
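For example, if your Ollama server runs somewhere other than the default `http://localhost:11434`, you can point Mem0 at it, and you can also switch the embedder to Ollama to avoid needing an OpenAI key at all. A sketch assuming the `ollama_base_url` parameter and the `ollama` embedder provider from that list (verify the exact names against the master list for your Mem0 version):

```python
from mem0 import Memory

# Fully local setup: both the LLM and the embedder run on Ollama,
# so no OPENAI_API_KEY is required. `ollama_base_url` and the
# "ollama" embedder provider are assumed from the master param list.
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mixtral:8x7b",
            "temperature": 0.1,
            "max_tokens": 2000,
            "ollama_base_url": "http://localhost:11434",
        }
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",
        }
    }
}

m = Memory.from_config(config)
```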