LM Studio
To use LM Studio with Mem0, you’ll need to have LM Studio running locally with its server enabled. LM Studio provides a way to run local LLMs with an OpenAI-compatible API.
Usage
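The following is a minimal sketch of pointing Mem0's LLM at LM Studio. The model name is a placeholder for whichever model you have loaded, and the `temperature` and `max_tokens` settings are assumed optional generation parameters; `lmstudio_base_url` is covered below.

```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "lmstudio",
        "config": {
            # Placeholder — substitute the model you have loaded in LM Studio.
            "model": "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF",
            "temperature": 0.2,
            "max_tokens": 2000,
            # Default LM Studio server address; change it if you altered the port.
            "lmstudio_base_url": "http://localhost:1234/v1",
        },
    }
}

m = Memory.from_config(config)

messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller? They can be quite engaging."},
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
```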
Running Completely Locally
You can also use LM Studio for both the LLM and the embedder to run Mem0 entirely locally:
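Here is a sketch of such a configuration, assuming the embedder provider is likewise named `lmstudio` and that both providers default to the same local server URL:

```python
from mem0 import Memory

# Both the LLM and the embedder point at the local LM Studio server,
# so no data leaves your machine.
config = {
    "llm": {
        "provider": "lmstudio",
    },
    "embedder": {
        "provider": "lmstudio",
    },
}

m = Memory.from_config(config)
m.add("I'm visiting Paris this weekend.", user_id="john")
memories = m.get_all(user_id="john")
print(memories)
```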
When using LM Studio for both the LLM and the embedder, make sure you have:
- An LLM model loaded for generating responses
- An embedding model loaded for vector embeddings
- The server enabled with the correct endpoints accessible
To use LM Studio, you need to:
- Download and install LM Studio
- Start a local server from the “Server” tab
- Set the appropriate `lmstudio_base_url` in your configuration (the default is usually http://localhost:1234/v1); a quick way to verify the endpoint is shown below
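To confirm the server is reachable at that URL, you can query LM Studio's OpenAI-compatible models endpoint. This is a quick sketch using `requests`; the address assumes the default port:

```python
import requests

base_url = "http://localhost:1234/v1"

# LM Studio exposes an OpenAI-compatible API, so GET /models
# returns the models currently loaded on the server.
resp = requests.get(f"{base_url}/models", timeout=5)
resp.raise_for_status()

for model in resp.json().get("data", []):
    print(model["id"])
```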
Config
All available parameters for the `lmstudio` config are present in the Master List of All Params in Config.