How to define configurations?

The `config` is defined as a Python dictionary with two main keys:

- `llm`: Specifies the LLM provider and its configuration
  - `provider`: The name of the LLM provider (e.g., "openai", "groq")
  - `config`: A nested dictionary containing provider-specific settings
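As a minimal sketch of that shape (the `"openai"` provider and the settings shown are placeholder values, not required defaults):

```python
config = {
    "llm": {
        "provider": "openai",        # name of the LLM provider
        "config": {                  # provider-specific settings
            "model": "gpt-4o-mini",  # placeholder model name
            "temperature": 0.1,
        },
    }
}
```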
Config Values Precedence

Config values are applied in the following order of precedence (from highest to lowest):

- Values explicitly set in the `config` object/dictionary
- Environment variables (e.g., `OPENAI_API_KEY`, `OPENAI_BASE_URL`)
- Default values defined in the LLM implementation

This means that values explicitly set in the `config` will override corresponding environment variables, which in turn override default values.
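The precedence rule can be illustrated with a small resolver function (a sketch of the rule only, not Mem0's internal code; the variable names are placeholders):

```python
import os

def resolve(name, config, env_var, default=None):
    """Illustrative precedence: an explicit config value wins over the
    environment variable, which wins over the implementation default."""
    if name in config:
        return config[name]
    if env_var in os.environ:
        return os.environ[env_var]
    return default

os.environ["OPENAI_API_KEY"] = "sk-from-env"
resolve("api_key", {"api_key": "sk-explicit"}, "OPENAI_API_KEY")   # -> "sk-explicit"
resolve("api_key", {}, "OPENAI_API_KEY")                           # -> "sk-from-env"
resolve("api_key", {}, "SOME_UNSET_VAR", default="sk-default")     # -> "sk-default"
```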
How to Use Config

Here's a general example of how to use the config with Mem0:

Why is Config Needed?

Config is essential for:

- Specifying which LLM to use.
- Providing necessary connection details (e.g., model, api_key, temperature).
- Ensuring proper initialization and connection to your chosen LLM.
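A minimal usage sketch, assuming the OpenAI provider and mem0's `Memory.from_config` entry point (the model name is a placeholder; an API key must be supplied via `OPENAI_API_KEY` or the `api_key` config value):

```python
config = {
    "llm": {
        "provider": "openai",
        "config": {
            "model": "gpt-4o-mini",  # placeholder model name
            "temperature": 0.1,
            "max_tokens": 2000,
        },
    }
}

# With the mem0ai package installed and OPENAI_API_KEY set:
# from mem0 import Memory
# m = Memory.from_config(config)
# m.add("I prefer vegetarian recipes.", user_id="alice")
```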
Master List of All Params in Config

Here's a comprehensive list of all parameters that can be used across different LLMs:

| Parameter | Description | Provider |
|---|---|---|
| model | LLM model to use | All |
| temperature | Temperature of the model | All |
| api_key | API key to use | All |
| max_tokens | Maximum number of tokens to generate | All |
| top_p | Probability threshold for nucleus sampling | All |
| top_k | Number of highest-probability tokens to keep | All |
| http_client_proxies | Allow proxy server settings | AzureOpenAI |
| models | List of models | Openrouter |
| route | Routing strategy | Openrouter |
| openrouter_base_url | Base URL for Openrouter API | Openrouter |
| site_url | Site URL | Openrouter |
| app_name | Application name | Openrouter |
| ollama_base_url | Base URL for Ollama API | Ollama |
| openai_base_url | Base URL for OpenAI API | OpenAI |
| azure_kwargs | Azure LLM args for initialization | AzureOpenAI |
| deepseek_base_url | Base URL for DeepSeek API | DeepSeek |
| xai_base_url | Base URL for XAI API | XAI |
| sarvam_base_url | Base URL for Sarvam API | Sarvam |
| reasoning_effort | Reasoning level (low, medium, high) | Sarvam |
| frequency_penalty | Penalize frequent tokens (-2.0 to 2.0) | Sarvam |
| presence_penalty | Penalize existing tokens (-2.0 to 2.0) | Sarvam |
| seed | Seed for deterministic sampling | Sarvam |
| stop | Stop sequences (max 4) | Sarvam |
| lmstudio_base_url | Base URL for LM Studio API | LM Studio |
| response_callback | LLM response callback function | OpenAI |
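To illustrate how the common sampling parameters combine with a provider-specific one, here is a sketch of a local Ollama config (the model name and URL are placeholder values; `http://localhost:11434` is Ollama's conventional local endpoint):

```python
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1",                          # placeholder model name
            "ollama_base_url": "http://localhost:11434",  # local Ollama endpoint
            "temperature": 0.2,
            "top_p": 0.9,
            "max_tokens": 1000,
        },
    }
}
```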