To use LM Studio with Mem0, you’ll need LM Studio running locally with its server enabled. LM Studio provides a way to run local LLMs behind an OpenAI-compatible API.
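Because the server speaks the OpenAI-compatible API, a chat completion request can be sketched without any SDK. The helper below is purely illustrative (its name and the model identifier are placeholders, not part of Mem0); sending the request requires a running LM Studio server.

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str):
    """Build an OpenAI-compatible chat completion request for LM Studio.

    Only constructs the URL and JSON body; nothing is sent over the network.
    """
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body)

# "local-model" is a placeholder for whichever model you loaded in LM Studio.
url, payload = build_chat_request("http://localhost:1234/v1", "local-model", "Hello")
```

POSTing `payload` to `url` with a `Content-Type: application/json` header is what Mem0 effectively does under the hood when it calls the local server.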
Usage
Running Completely Locally
You can also use LM Studio for both the LLM and the embedder to run Mem0 entirely locally. When using LM Studio for both, make sure you have:
- An LLM model loaded for generating responses
- An embedding model loaded for vector embeddings
- The server enabled with the correct endpoints accessible
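As a sketch, a fully local config covering the requirements above might look like the following. The model names are placeholders for whatever you have loaded in LM Studio, and the exact provider keys follow Mem0's `lmstudio` provider; see the Master List of All Params in Config for the authoritative parameter names.

```python
# Hypothetical Mem0 config using LM Studio for both LLM and embeddings.
# Model names are placeholders -- use the identifiers shown in LM Studio.
local_config = {
    "llm": {
        "provider": "lmstudio",
        "config": {
            "model": "your-chat-model",        # LLM loaded for generating responses
            "lmstudio_base_url": "http://localhost:1234/v1",
        },
    },
    "embedder": {
        "provider": "lmstudio",
        "config": {
            "model": "your-embedding-model",   # embedding model for vector embeddings
            "lmstudio_base_url": "http://localhost:1234/v1",
        },
    },
}
```

With `mem0` installed, you would pass this dictionary to `Memory.from_config(local_config)`.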
To use LM Studio, you need to:
- Download and install LM Studio
- Start a local server from the “Server” tab
- Set the appropriate lmstudio_base_url in your configuration (the default is usually http://localhost:1234/v1)
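The last step above can be sketched as a minimal config: only the base URL is set here, and the nesting under an `lmstudio` provider key is an assumption drawn from Mem0's usual config layout.

```python
# Minimal sketch: pointing Mem0's LLM at a local LM Studio server.
# http://localhost:1234/v1 is LM Studio's usual default server address.
config = {
    "llm": {
        "provider": "lmstudio",
        "config": {
            "lmstudio_base_url": "http://localhost:1234/v1",
        },
    },
}
```

If your LM Studio server listens on a different port, change lmstudio_base_url to match the address shown in the Server tab.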
Config
All available parameters for the lmstudio config are listed in the Master List of All Params in Config.