NodeSDK now supports Graph Memory. 🎉
Installation
To use Mem0 with Graph Memory support, install it using pip:
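The install command typically looks like the following; the `mem0ai[graph]` extras name is an assumption and may differ by version:

```bash
pip install "mem0ai[graph]"
```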
Initialize Graph Memory
To initialize Graph Memory, you'll need to set up your configuration with a graph store provider. Currently, we support Neo4j, Memgraph, Neptune Analytics, Neptune DB Cluster, and Kuzu as graph store providers.
Initialize Neo4j
You can set up Neo4j locally or use the hosted Neo4j AuraDB. If you are using Neo4j locally, you need to install the APOC plugins.
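A minimal configuration sketch in Python, assuming the `graph_store` provider name `neo4j` and placeholder connection details:

```python
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://<your-instance>",  # placeholder AuraDB/Bolt URL
            "username": "neo4j",
            "password": "<your-password>",
        },
    },
}

m = Memory.from_config(config_dict=config)
```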
Users can also customize the LLM for Graph Memory from the Supported LLM list with three levels of configuration:
- Main Configuration: If `llm` is set in the main config, it will be used for all graph operations.
- Graph Store Configuration: If `llm` is set in the `graph_store` config, it will override the main config `llm` and be used specifically for graph operations.
- Default Configuration: If no custom LLM is set, the default LLM (`gpt-4o-2024-08-06`) will be used for all graph operations.
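A sketch of the override behavior, assuming a Neo4j graph store and OpenAI models; the nesting of `llm` under `graph_store` follows the description above:

```python
from mem0 import Memory

config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o-mini"},  # main LLM, used unless overridden
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "neo4j+s://<your-instance>",
            "username": "neo4j",
            "password": "<your-password>",
        },
        "llm": {
            "provider": "openai",
            "config": {"model": "gpt-4o-2024-08-06"},  # used only for graph operations
        },
    },
}

m = Memory.from_config(config_dict=config)
```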
If you are using the NodeSDK, you need to pass `enableGraph` as `true` in the `config` object.
Initialize Memgraph
Run Memgraph with Docker:
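A typical invocation might look like the following; the image tag and port mapping are assumptions based on Memgraph's MAGE distribution:

```bash
docker run -p 7687:7687 memgraph/memgraph-mage:latest --schema-info-enabled=True
```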
The `--schema-info-enabled` flag is set to `True` for more performant schema generation.
Additional information can be found in the Memgraph documentation.
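A configuration sketch, assuming the provider name `memgraph` and placeholder Bolt credentials:

```python
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "memgraph",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "memgraph",          # placeholder credentials
            "password": "<your-password>",
        },
    },
}

m = Memory.from_config(config_dict=config)
```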
Users can also customize the LLM for Graph Memory from the Supported LLM list with three levels of configuration:
- Main Configuration: If `llm` is set in the main config, it will be used for all graph operations.
- Graph Store Configuration: If `llm` is set in the `graph_store` config, it will override the main config `llm` and be used specifically for graph operations.
- Default Configuration: If no custom LLM is set, the default LLM (`gpt-4o-2024-08-06`) will be used for all graph operations.
Initialize Neptune Analytics
Note: You can use Neptune Analytics as part of an Amazon tech stack.
Setup AWS Bedrock, AOSS, and Neptune
Create an instance of Amazon Neptune Analytics in your AWS account following the AWS documentation.
- Public connectivity is not enabled by default; if accessing from outside a VPC, it needs to be enabled.
- Once the Amazon Neptune Analytics instance is available, you will need the graph-identifier to connect.
- The Neptune Analytics instance must be created using the same vector dimensions as the embedding model creates. See: Vector indexing in Neptune Analytics.
- The credentials used must allow the following IAM actions on the graph:
  - neptune-graph:ReadDataViaQuery
  - neptune-graph:WriteDataViaQuery
  - neptune-graph:DeleteDataViaQuery
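A configuration sketch, assuming the provider name `neptune` and an `endpoint` of the form `neptune-graph://<graph-identifier>`; the Bedrock embedder shown is only illustrative:

```python
from mem0 import Memory

config = {
    "embedder": {
        "provider": "aws_bedrock",  # illustrative; use an embedder whose vector dimensions match the graph
        "config": {"model": "amazon.titan-embed-text-v2:0"},
    },
    "graph_store": {
        "provider": "neptune",  # assumed provider key for Neptune Analytics
        "config": {
            "endpoint": "neptune-graph://<graph-identifier>",  # assumed endpoint format
        },
    },
}

m = Memory.from_config(config_dict=config)
```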
Users can also customize the LLM for Graph Memory from the Supported LLM list with three levels of configuration:
- Main Configuration: If `llm` is set in the main config, it will be used for all graph operations.
- Graph Store Configuration: If `llm` is set in the `graph_store` config, it will override the main config `llm` and be used specifically for graph operations.
- Default Configuration: If no custom LLM is set, the default LLM (`gpt-4o-2024-08-06`) will be used for all graph operations.
- For issues connecting to Amazon Neptune Analytics, please refer to the Connecting to a graph guide.
- For issues related to authentication, refer to the boto3 client configuration options.
- For more details on how to connect, configure, and use the graph_memory graph store, see the Neptune Analytics example in our AWS example guide.
- The Neptune memory store uses the AWS LangChain Python API to connect to Neptune instances. For additional configuration options for connecting to your Amazon Neptune Analytics instance, see the AWS LangChain API documentation.
Initialize Neptune DB
Note that Neptune DB does not support vectors, so this graph store provider requires a collection in the vector store to save entity vectors.
Create a cluster of Amazon Neptune DB instances in your AWS account following the AWS documentation.
- Public connectivity is not enabled by default. To access the instance from outside a VPC, public connectivity needs to be enabled on the Neptune DB instance by following Neptune Public Endpoints.
- Once the Amazon Neptune Cluster instance is available, you will need the graph host endpoint to connect.
- Neptune DB doesn't support vectors. The `collection_name` config field can be used to specify the vector store collection used to store vectors for the Neptune entities.
- The credentials used must allow the following IAM actions on the cluster:
  - neptune-db:ReadDataViaQuery
  - neptune-db:WriteDataViaQuery
  - neptune-db:DeleteDataViaQuery
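A configuration sketch for Neptune DB; the provider key, the endpoint format, and the collection name below are all assumptions and should be checked against your installed version:

```python
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "neptunedb",  # assumed provider key
        "config": {
            "endpoint": "neptune-db://<cluster-endpoint>",       # assumed endpoint format
            "collection_name": "mem0_neptune_entities",          # vector store collection for entity vectors
        },
    },
}

m = Memory.from_config(config_dict=config)
```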
Users can also customize the LLM for Graph Memory from the Supported LLM list with three levels of configuration:
- Main Configuration: If `llm` is set in the main config, it will be used for all graph operations.
- Graph Store Configuration: If `llm` is set in the `graph_store` config, it will override the main config `llm` and be used specifically for graph operations.
- Default Configuration: If no custom LLM is set, the default LLM (`gpt-4o-2024-08-06`) will be used for all graph operations.
- For issues connecting to Amazon Neptune DB, please refer to the Accessing graph data in Amazon Neptune guide.
- For issues related to authentication, refer to the boto3 client configuration options.
- For more details on how to connect, configure, and use the graph_memory graph store, see the Neptune DB example notebook.
- The Neptune memory store uses the AWS LangChain Python API to connect to Neptune instances. For additional configuration options for connecting to your Amazon Neptune instance, see the AWS LangChain API documentation.
Initialize Kuzu
Kuzu is a fully local, in-process graph database system that runs openCypher queries. Kuzu comes embedded in the Python package, so no additional setup is required. Kuzu needs a path to a file where it will store the graph database. For example:
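A configuration sketch, assuming the provider name `kuzu` and a `db` field pointing at the database file; both the field name and the path are assumptions:

```python
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "kuzu",
        "config": {
            "db": "/tmp/mem0-kuzu.db",  # hypothetical path; field name is an assumption
        },
    },
}

m = Memory.from_config(config_dict=config)
```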
Graph Operations
Mem0's graph memory supports the following operations:
Add Memories
Mem0 with Graph Memory supports the "user_id", "agent_id", and "run_id" parameters. You can use any combination of these to organize your memories. Use "userId", "agentId", and "runId" in the NodeSDK.
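A minimal sketch of adding a memory scoped to a user, reusing the `m = Memory.from_config(...)` instance from the configuration above (the message text and ID are placeholders):

```python
m.add("I like going on hikes", user_id="alice")
```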
Get all memories
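For example, retrieving everything stored for a user with the same `m` instance:

```python
all_memories = m.get_all(user_id="alice")
```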
Search Memories
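For example, a semantic search scoped to the same user:

```python
results = m.search("What does Alice like to do?", user_id="alice")
```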
Delete all Memories
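For example, removing every memory stored for the user:

```python
m.delete_all(user_id="alice")
```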
Example Usage
Here's an example of how to use Mem0's graph operations:
- First, we'll add some memories for a user named Alice.
- Then, we'll visualize how the graph evolves as we add more memories.
- You'll see how entities and relationships are automatically extracted and connected in the graph.
Add Memories
Below are the steps to add memories and visualize the graph:
1. Add memory 'I like going to hikes'
2. Add memory 'I love to play badminton'
3. Add memory 'I hate playing badminton'
4. Add memory 'My friend name is john and john has a dog named tommy'
5. Add memory 'My name is Alice'
6. Add memory 'John loves to hike and Harry loves to hike as well'
7. Add memory 'My friend peter is the spiderman'
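A sketch of these steps as code, assuming the `m` instance configured earlier and `"alice"` as the user ID:

```python
messages = [
    "I like going to hikes",
    "I love to play badminton",
    "I hate playing badminton",
    "My friend name is john and john has a dog named tommy",
    "My name is Alice",
    "John loves to hike and Harry loves to hike as well",
    "My friend peter is the spiderman",
]

# Each call extracts entities and relationships and adds them to the graph
for message in messages:
    m.add(message, user_id="alice")
```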

Search Memories
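For example, querying the graph built above (the query text is illustrative):

```python
results = m.search("Who are Alice's friends?", user_id="alice")
```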


Note: The Graph Memory implementation is not standalone. You will be adding memories to, and retrieving them from, the vector store and the graph store simultaneously.
Using Multiple Agents with Graph Memory
When working with multiple agents and sessions, you can use the "agent_id" and "run_id" parameters to organize memories by user, agent, and run context. This allows you to:
- Create agent-specific knowledge graphs
- Share common knowledge between agents
- Isolate sensitive or specialized information to specific agents
- Track conversation sessions and runs separately
- Maintain context across different execution contexts
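A sketch of scoping a memory to a specific user, agent, and run; the ID values are placeholders:

```python
m.add(
    "Alice prefers vegetarian restaurants",
    user_id="alice",
    agent_id="travel-assistant",  # hypothetical agent name
    run_id="session-001",         # hypothetical run/session identifier
)

# Later, retrieve only what this agent learned in this session
results = m.search(
    "food preferences",
    user_id="alice",
    agent_id="travel-assistant",
    run_id="session-001",
)
```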