LM Studio
To use LM Studio with Mem0, you’ll need to have LM Studio running locally with its server enabled. LM Studio provides a way to run local LLMs with an OpenAI-compatible API.
Usage
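A minimal usage sketch, assuming a chat model is already loaded in LM Studio and the server is running on the default port. The "lmstudio" provider key and the lmstudio_base_url parameter come from the Config section below; the model identifier is an illustrative placeholder, so substitute whatever model you have loaded.

```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "lmstudio",
        "config": {
            # Placeholder: use the identifier of the model loaded in LM Studio
            "model": "llama-3.1-8b-instruct",
            "lmstudio_base_url": "http://localhost:1234/v1",  # default LM Studio server URL
        },
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning a trip to Japan next spring."},
    {"role": "assistant", "content": "Noted! I'll keep that in mind for recommendations."},
]
m.add(messages, user_id="alice", metadata={"category": "travel"})
```

Note that with only the llm block configured, Mem0 falls back to its default embedder, which may require an external API key; the next section shows how to keep embeddings local as well.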
Running Completely Locally
You can also use LM Studio for both LLM and embedding to run Mem0 entirely locally:
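A sketch of such a configuration, assuming the embedder accepts the same lmstudio_base_url key as the LLM (the memory text is just example data):

```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "lmstudio",
        "config": {
            "lmstudio_base_url": "http://localhost:1234/v1",
        },
    },
    "embedder": {
        "provider": "lmstudio",
        "config": {
            # Assumed to mirror the LLM's base URL parameter
            "lmstudio_base_url": "http://localhost:1234/v1",
        },
    },
}

m = Memory.from_config(config)
m.add("I prefer window seats on long flights.", user_id="alice")
print(m.search("seating preference", user_id="alice"))
```

Depending on your vector store, you may also need to align its embedding dimension (embedding_model_dims in some vector store configs) with the local embedding model you load.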
When using LM Studio for both LLM and embedding, make sure you have:
- An LLM model loaded for generating responses
- An embedding model loaded for vector embeddings
- The server enabled with the correct endpoints accessible
To use LM Studio, you need to:
- Download and install LM Studio
- Start a local server from the “Server” tab
- Set the appropriate lmstudio_base_url in your configuration (default is usually http://localhost:1234/v1), as shown in the sketch below
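If your server listens somewhere other than the default, the override is a single key. The host and port below are hypothetical:

```python
config = {
    "llm": {
        "provider": "lmstudio",
        "config": {
            # Hypothetical address; point this at your actual LM Studio server
            "lmstudio_base_url": "http://192.168.1.42:5000/v1",
        },
    }
}
```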
Config
All available parameters for the lmstudio config are present in the Master List of All Params in Config.
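As an illustration, here is a sketch of a config using the generation parameters Mem0's LLM configs commonly accept; treat the names other than lmstudio_base_url as assumptions and defer to the master list:

```python
config = {
    "llm": {
        "provider": "lmstudio",
        "config": {
            # Commonly available Mem0 LLM options; the master list is authoritative
            "model": "llama-3.1-8b-instruct",  # placeholder identifier
            "temperature": 0.2,
            "max_tokens": 2000,
            "top_p": 0.9,
            "lmstudio_base_url": "http://localhost:1234/v1",
        },
    }
}
```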