Mem0 supports LangChain as a provider, giving you access to a wide range of LLMs. LangChain is a framework for developing applications powered by language models, making it easy to integrate various LLM providers through a consistent interface.

For a complete list of available chat models supported by LangChain, refer to the LangChain Chat Models documentation.

Usage

import os
from mem0 import Memory
from langchain_openai import ChatOpenAI

# Set necessary environment variables for your chosen LangChain provider
os.environ["OPENAI_API_KEY"] = "your-api-key"

# Initialize a LangChain model directly
openai_model = ChatOpenAI(
    model="gpt-4o",
    temperature=0.2,
    max_tokens=2000
)

# Pass the initialized model to the config
config = {
    "llm": {
        "provider": "langchain",
        "config": {
            "model": openai_model
        }
    }
}

m = Memory.from_config(config)
messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about a thriller movies? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
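
Once memories are added, you can retrieve them through the same Memory instance. A minimal sketch of a follow-up search (the query string is illustrative, and the exact return shape may vary between Mem0 versions):

# Search memories stored for this user
related = m.search("What kind of movies does alice like?", user_id="alice")
print(related)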

Supported LangChain Providers

LangChain supports a wide range of LLM providers, including:

  • OpenAI (ChatOpenAI)
  • Anthropic (ChatAnthropic)
  • Google (ChatGoogleGenerativeAI, ChatVertexAI)
  • Mistral (ChatMistralAI)
  • Ollama (ChatOllama)
  • Azure OpenAI (AzureChatOpenAI)
  • HuggingFace (ChatHuggingFace)
  • And many more

You can use any of these model instances directly in your configuration. For a complete and up-to-date list of available providers, refer to the LangChain Chat Models documentation.
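
Switching providers only changes which model instance you construct; the Mem0 config stays the same. As a sketch, here is the equivalent setup with Anthropic (this assumes the langchain-anthropic package is installed, and the model name is an example):

import os
from mem0 import Memory
from langchain_anthropic import ChatAnthropic

os.environ["ANTHROPIC_API_KEY"] = "your-api-key"

# Initialize an Anthropic chat model through LangChain
anthropic_model = ChatAnthropic(
    model="claude-3-5-sonnet-20240620",
    temperature=0.2,
    max_tokens=2000
)

# The config structure is identical to the OpenAI example above
config = {
    "llm": {
        "provider": "langchain",
        "config": {
            "model": anthropic_model
        }
    }
}

m = Memory.from_config(config)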

Provider-Specific Configuration

When using LangChain as a provider, you'll need to:

  1. Set the appropriate environment variables for your chosen LLM provider
  2. Import and initialize the specific model class you want to use
  3. Pass the initialized model instance to the config

Make sure to install the necessary LangChain packages and any provider-specific dependencies.
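
For example, the three steps above look like this for a locally hosted Ollama model (a sketch assuming the langchain-ollama package is installed, the Ollama server is running, and the model has already been pulled; a local server needs no API key):

from mem0 import Memory
from langchain_ollama import ChatOllama

# 1. No environment variable is needed for a local Ollama server
# 2. Import and initialize the specific model class
local_model = ChatOllama(model="llama3", temperature=0.2)

# 3. Pass the initialized model instance to the config
config = {
    "llm": {
        "provider": "langchain",
        "config": {
            "model": local_model
        }
    }
}

m = Memory.from_config(config)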

Config

All available parameters for the langchain config are listed in the Master List of All Params in Config.