You can use embedding models from Ollama to run Mem0 locally.
## Usage
```python
import os

from mem0 import Memory

# Embeddings run locally via Ollama; the LLM still defaults to OpenAI
os.environ["OPENAI_API_KEY"] = "your_api_key"

config = {
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large"
        }
    }
}

m = Memory.from_config(config)

messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about thriller movies? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]

m.add(messages, user_id="john")
```
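As a quick check that the memories were stored, you can query them back with the same instance. A minimal sketch, assuming Mem0's `search` API; the query string is illustrative, and the shape of the return value (plain list vs. a dict wrapper) has varied across mem0 releases, so the sketch handles both:

```python
# Search stored memories for the same user; the query is embedded locally via Ollama
results = m.search("What kind of movies does John like?", user_id="john")

# Depending on the mem0 version, search returns a list or a dict with a "results" key
hits = results["results"] if isinstance(results, dict) else results
for hit in hits:
    print(hit["memory"])
```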
## Config
Here are the parameters available for configuring the Ollama embedder in the Python and TypeScript SDKs:
Python:

| Parameter | Description | Default Value |
|---|---|---|
| `model` | The name of the Ollama model to use | `nomic-embed-text` |
| `embedding_dims` | Dimensions of the embedding model | `512` |
| `ollama_base_url` | Base URL for the Ollama connection | `None` |
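For reference, a config sketch that sets all three Python parameters explicitly. The values are assumptions for illustration: `http://localhost:11434` is Ollama's standard local address, and `1024` matches mxbai-embed-large's output size; adjust both to your setup.

```python
config = {
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "mxbai-embed-large",
            "embedding_dims": 1024,  # mxbai-embed-large produces 1024-dimensional vectors
            "ollama_base_url": "http://localhost:11434",  # Ollama's default local address
        }
    }
}
```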
TypeScript:

| Parameter | Description | Default Value |
|---|---|---|
| `model` | The name of the Ollama model to use | `nomic-embed-text:latest` |
| `url` | Base URL for the Ollama server | `http://localhost:11434` |
| `embeddingDims` | Dimensions of the embedding model | `768` |
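The TypeScript parameters map onto the OSS client's config in the same way, using camelCase names. A minimal sketch, assuming the `mem0ai` package's OSS `Memory` class; verify the import path against your installed version:

```typescript
import { Memory } from "mem0ai/oss";

// Mirror of the Python config above, using the TypeScript SDK's parameter names
const memory = new Memory({
  embedder: {
    provider: "ollama",
    config: {
      model: "nomic-embed-text:latest",
      url: "http://localhost:11434", // default Ollama server address
      embeddingDims: 768,            // must match the chosen model's output size
    },
  },
});
```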