Supported LLMs
Ollama
You can use LLMs from Ollama to run Mem0 entirely locally. The models listed below support tool calling, which Mem0 requires for memory operations.
Usage
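A minimal sketch of pointing Mem0 at a local Ollama server. The model name `llama3.1:8b` is an example; substitute any tool-capable model you have pulled, and adjust `ollama_base_url` if your server is not on the default port.

```python
# Example Mem0 config selecting Ollama as the LLM provider.
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:8b",  # any tool-capable Ollama model
            "temperature": 0.1,
            "max_tokens": 2000,
            "ollama_base_url": "http://localhost:11434",  # default Ollama endpoint
        },
    },
}

# With mem0 installed and an Ollama server running, the config is used as:
# from mem0 import Memory
# m = Memory.from_config(config)
# m.add("I prefer vegetarian food.", user_id="alice")
```

The commented lines show the intended usage; they are left inactive here because they require a running Ollama server.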
Config
All available parameters for the `ollama` config are listed in the Master List of All Params in Config.