Usage
To use an LLM, you must provide a configuration to customize its usage. If no configuration is supplied, a default configuration is applied and OpenAI
is used as the LLM.
For a comprehensive list of available parameters for LLM configuration, please refer to Config.
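As a minimal sketch, a configuration is a nested dictionary keyed by component; the model name and parameter values below are illustrative, not defaults (see Config for the actual parameter list):

```python
# A minimal LLM configuration sketch. The model name, temperature,
# and max_tokens values here are illustrative, not Mem0's defaults.
config = {
    "llm": {
        "provider": "openai",        # which LLM backend to use
        "config": {
            "model": "gpt-4o-mini",  # provider-specific model name
            "temperature": 0.1,
            "max_tokens": 2000,
        },
    }
}

# The dict would then be passed to Memory.from_config(config),
# which requires the mem0 package and the provider's API key.
```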
Supported LLMs
See the list of supported LLMs below. All LLMs are supported in Python. The following LLMs are also supported in TypeScript: OpenAI, Anthropic, and Groq.
OpenAI
Ollama
Azure OpenAI
Anthropic
Together
Groq
LiteLLM
Mistral AI
Google AI
AWS Bedrock
DeepSeek
xAI
Sarvam AI
LM Studio
Langchain
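Switching backends is a matter of changing the provider key while keeping the same configuration shape. As a hedged sketch, pointing the configuration at a locally running Ollama server (the model name, `ollama_base_url` parameter, and endpoint URL are assumptions for illustration):

```python
# Sketch: same configuration shape, different provider.
# The model name, ollama_base_url key, and endpoint are illustrative
# assumptions; check Config for the exact parameter names.
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:8b",                       # a local model tag
            "ollama_base_url": "http://localhost:11434",  # assumed default Ollama endpoint
        },
    }
}
```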
Structured vs Unstructured Outputs
Mem0 supports two OpenAI LLM output formats, each with its own strengths and use cases:
Structured Outputs
Structured outputs align with OpenAI’s structured outputs model:
- Optimized for: Returning structured responses (e.g., JSON objects)
- Benefits: Precise, easily parseable data
- Ideal for: Data extraction, form filling, API responses
- Learn more: OpenAI Structured Outputs Guide
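A sketch of selecting the structured variant; note that the `openai_structured` provider name is an assumption based on Mem0's provider naming pattern, and the model name is illustrative:

```python
# Sketch: selecting the structured-outputs variant.
# "openai_structured" is an assumed provider key; the model name
# stands in for any structured-outputs-capable OpenAI model.
config = {
    "llm": {
        "provider": "openai_structured",
        "config": {
            "model": "gpt-4o-2024-08-06",  # illustrative model
        },
    }
}
```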
Unstructured Outputs
Unstructured outputs correspond to OpenAI’s standard, free-form text model:
- Flexibility: Returns open-ended, natural language responses
- Customization: Use the response_format parameter to guide output
- Trade-off: Less efficient than structured outputs for specific data needs
- Best for: Creative writing, explanations, general conversation
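For instance, a sketch of nudging an unstructured model toward parseable JSON via response_format; the value shown mirrors OpenAI's `json_object` mode, and whether a given provider honors it is provider-specific:

```python
# Sketch: guiding free-form output toward JSON.
# The response_format value mirrors OpenAI's json_object mode;
# the model name is illustrative.
config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",
            "response_format": {"type": "json_object"},
        },
    }
}
```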