Supported LLMs
Mistral AI
To use Mistral AI's models, obtain an API key from the Mistral AI console. Set the MISTRAL_API_KEY environment variable, then configure the model as shown in the example below.
Usage
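The original usage snippet is not present here, so the following is a minimal sketch of how a Mem0 config routed through the litellm provider might look. The model name `mistral/mistral-small-latest` follows litellm's `provider/model` naming convention and is an assumption, as are the specific parameter values; the Mem0 calls that contact the API are shown commented out since they require a live key.

```python
import os

# Assumed placeholder: replace with the API key from the Mistral AI console.
os.environ["MISTRAL_API_KEY"] = "your-api-key"

# Example config using the litellm provider with a Mistral model.
# "mistral/mistral-small-latest" is an assumed model identifier following
# litellm's provider/model convention.
config = {
    "llm": {
        "provider": "litellm",
        "config": {
            "model": "mistral/mistral-small-latest",
            "temperature": 0.1,
            "max_tokens": 2000,
        },
    }
}

# The calls below require the mem0 package and a valid API key:
# from mem0 import Memory
# m = Memory.from_config(config)
# m.add("Likes to hike on weekends", user_id="alice")
```

Any parameter supported by the litellm config (see the next section) can be placed inside the nested `config` dictionary.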
Config
All available parameters for the litellm config are listed in the Master List of All Params in Config.