Graph Memory extends Mem0 by persisting nodes and edges alongside embeddings, so recalls stitch together people, places, and events instead of just keywords.
You’ll use this when…
  • Conversation history mixes multiple actors and objects that vectors alone blur together
  • Compliance or auditing demands a graph of who said what and when
  • Agent teams need shared context without duplicating every memory in each run

How Graph Memory Maps Context

Mem0 extracts entities and relationships from every memory write, stores embeddings in your vector database, and mirrors relationships in a graph backend. On retrieval, vector search narrows candidates, then the graph supplies context and re-ranks results.
Graph Memory complements your vector store. Keep both healthy to avoid blind spots.
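A minimal sketch of a combined configuration, assuming Qdrant running locally as the vector store (the collection name, host, and port are placeholder assumptions; any supported vector database works) next to the Neo4j credentials used in the quickstart below:

import os
from mem0 import Memory

# Sketch only: configure both halves of retrieval explicitly. The Qdrant
# values (local host, default port) are assumptions; swap in your provider.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "collection_name": "mem0",
            "host": "localhost",
            "port": 6333,
        },
    },
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": os.environ["NEO4J_URL"],
            "username": os.environ["NEO4J_USERNAME"],
            "password": os.environ["NEO4J_PASSWORD"],
        },
    },
}

memory = Memory.from_config(config_dict=config)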

How It Works

1. Extract people, places, and facts

Mem0’s extraction LLM identifies entities, relationships, and timestamps from the conversation payload you send to memory.add.

2. Store vectors and edges together

Embeddings land in your configured vector database while nodes and edges flow into a supported graph backend (Neo4j, Memgraph, Neptune, or Kuzu).

3. Blend graph context at search time

memory.search first performs vector similarity, then follows connected nodes to boost (or filter) answers before optionally handing results to a reranker.

Quickstart (Neo4j Aura)

Time to implement: ~10 minutes · Prerequisites: Python 3.10+, Node.js 18+, Neo4j Aura DB (free tier)
Provision a free Neo4j Aura instance, copy the Bolt URI, username, and password, then work through the steps below. They use Python; the TypeScript client follows the same flow (see the Neo4j panel under Provider setup for a TypeScript config).
1. Install Mem0 with graph extras

pip install "mem0ai[graph]"

2. Export Neo4j credentials

export NEO4J_URL="neo4j+s://<your-instance>.databases.neo4j.io"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD="your-password"

3. Add and recall a relationship

import os
from mem0 import Memory

# Point Mem0's graph store at your Aura instance.
config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": os.environ["NEO4J_URL"],
            "username": os.environ["NEO4J_USERNAME"],
            "password": os.environ["NEO4J_PASSWORD"],
            "database": "neo4j",
        }
    }
}

memory = Memory.from_config(config)

conversation = [
    {"role": "user", "content": "Alice met Bob at GraphConf 2025 in San Francisco."},
    {"role": "assistant", "content": "Great! Logging that connection."},
]

# Entities (Alice, Bob, GraphConf 2025) and their relationships are
# extracted and written to the graph alongside the embeddings.
memory.add(conversation, user_id="demo-user")

# Vector similarity narrows candidates; graph context helps re-rank them.
results = memory.search(
    "Who did Alice meet at GraphConf?",
    user_id="demo-user",
    limit=3,
    rerank=True,
)

for hit in results["results"]:
    print(hit["memory"])
Expect to see "Alice met Bob at GraphConf 2025" in the output. To confirm the edge exists, run this in Neo4j Browser:

MATCH (p:Person)-[r]->(q:Person) RETURN p, r, q LIMIT 5;
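When the graph store is enabled, search responses can also carry the matched relationships next to the vector hits. A quick way to inspect them (treat the field names as version-dependent; guard with .get()):

# Graph-enabled responses may include a "relations" list alongside "results".
# Key names (source / relationship / target) can vary across Mem0 versions.
for relation in results.get("relations", []):
    print(relation)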

Operate Graph Memory Day-to-Day

Guide which relationships become nodes and edges with a custom extraction prompt.
import os
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": os.environ["NEO4J_URL"],
            "username": os.environ["NEO4J_USERNAME"],
            "password": os.environ["NEO4J_PASSWORD"],
        },
        "custom_prompt": "Please only capture people, organisations, and project links.",
    }
}

memory = Memory.from_config(config_dict=config)
Keep noisy edges out of the graph by demanding higher extraction confidence.
config["graph_store"]["config"]["threshold"] = 0.75
Disable graph writes or reads when you only want vector behaviour.
memory.add(messages, user_id="demo-user", enable_graph=False)
results = memory.search("marketing partners", user_id="demo-user", enable_graph=False)
Separate or share context across agents and sessions with user_id, agent_id, and run_id (userId, agentId, and runId in the TypeScript client).
await memory.add("I prefer Italian cuisine", { userId: "bob", agentId: "food-assistant" });
await memory.add("I'm allergic to peanuts", { userId: "bob", agentId: "health-assistant" });
await memory.add("I live in Seattle", { userId: "bob" });

const food = await memory.search("What food do I like?", { userId: "bob", agentId: "food-assistant" });
const allergies = await memory.search("What are my allergies?", { userId: "bob", agentId: "health-assistant" });
const location = await memory.search("Where do I live?", { userId: "bob" });
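The same scoping in Python with snake_case parameters, reusing the memory instance from the quickstart:

# Same scoping from Python: snake_case parameters mirror the TypeScript
# options object above.
memory.add("I prefer Italian cuisine", user_id="bob", agent_id="food-assistant")
memory.add("I'm allergic to peanuts", user_id="bob", agent_id="health-assistant")
memory.add("I live in Seattle", user_id="bob")

food = memory.search("What food do I like?", user_id="bob", agent_id="food-assistant")
allergies = memory.search("What are my allergies?", user_id="bob", agent_id="health-assistant")
location = memory.search("Where do I live?", user_id="bob")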
Monitor graph growth, especially on free tiers, by periodically cleaning dormant nodes:

MATCH (n) WHERE n.lastSeen < date() - duration('P90D') DETACH DELETE n
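To run that cleanup on a schedule (cron, Airflow, and so on), the official neo4j Python driver works; a sketch that assumes your nodes carry the lastSeen property used above:

import os
from neo4j import GraphDatabase  # pip install neo4j

# Sketch: run the pruning query above on a schedule. Assumes nodes carry
# the `lastSeen` date property referenced in the Cypher.
driver = GraphDatabase.driver(
    os.environ["NEO4J_URL"],
    auth=(os.environ["NEO4J_USERNAME"], os.environ["NEO4J_PASSWORD"]),
)

with driver.session() as session:
    session.run(
        "MATCH (n) WHERE n.lastSeen < date() - duration('P90D') DETACH DELETE n"
    )

driver.close()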

Troubleshooting

  • Cannot connect to Neo4j Aura: confirm Bolt connectivity is enabled, credentials match your Aura instance, and your IP is allow-listed; retry after confirming the URI follows the neo4j+s://... format.
  • Neptune errors on write or search: ensure the graph's vector index dimension matches your embedder's output dimension and that the IAM role allows neptune-graph:*DataViaQuery actions.
  • Graph backend outage: catch the provider error and retry with enable_graph=False so vector-only search keeps serving responses while the graph backend recovers.
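A minimal sketch of that fallback; the broad except is a placeholder, so narrow it to the exception your graph driver actually raises:

# Sketch: fall back to vector-only search while the graph backend recovers.
# The broad except is a placeholder; catch your driver's specific error.
try:
    results = memory.search("marketing partners", user_id="demo-user")
except Exception:
    results = memory.search(
        "marketing partners", user_id="demo-user", enable_graph=False
    )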

Decision Points

  • Select the graph store that fits your deployment (managed Aura vs. self-hosted Neo4j vs. AWS Neptune vs. local Kuzu).
  • Decide when to enable graph writes per request; routine conversations may stay vector-only to save latency (a sketch follows this list).
  • Set a policy for pruning stale relationships so your graph stays fast and affordable.
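For the per-request decision, a hedged sketch; entity_rich is a hypothetical helper you would replace with your own check:

# Sketch: gate graph writes per request. `entity_rich` is hypothetical;
# substitute a real heuristic (regex for proper nouns, a cheap classifier).
def entity_rich(messages) -> bool:
    text = " ".join(m["content"] for m in messages)
    return sum(w[:1].isupper() for w in text.split()) >= 2  # crude proper-noun count

memory.add(messages, user_id="demo-user", enable_graph=entity_rich(messages))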

Provider setup

Choose your backend below; each entry covers configuration details and links.

Neo4j

Install the APOC plugin for self-hosted deployments, then configure Mem0 (shown here with the TypeScript client):
import { Memory } from "mem0ai/oss";

const config = {
    enableGraph: true,
    graphStore: {
        provider: "neo4j",
        config: {
            url: "neo4j+s://<HOST>",
            username: "neo4j",
            password: "<PASSWORD>",
        }
    }
};

const memory = new Memory(config);
Additional docs: Neo4j Aura Quickstart, APOC installation.
Memgraph

Run Memgraph Mage locally with schema introspection enabled:
docker run -p 7687:7687 memgraph/memgraph-mage:latest --schema-info-enabled=True
Then point Mem0 at the instance:
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "memgraph",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "memgraph",
            "password": "your-password",
        },
    },
}

m = Memory.from_config(config_dict=config)
Learn more: Memgraph Docs.
Neptune Analytics

Match vector dimensions between Neptune Analytics and your embedder, enable public connectivity (if needed), and grant IAM permissions:
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "neptune",
        "config": {
            "endpoint": "neptune-graph://<GRAPH_ID>",
        },
    },
}

m = Memory.from_config(config_dict=config)
Reference: Neptune Analytics Guide.
Neptune Database

Create a Neptune cluster, enable the public endpoint if you operate outside the VPC, and point Mem0 at the host:
from mem0 import Memory

config = {
    "graph_store": {
        "provider": "neptunedb",
        "config": {
            "collection_name": "<VECTOR_COLLECTION_NAME>",
            "endpoint": "neptune-graph://<HOST_ENDPOINT>",
        },
    },
}

m = Memory.from_config(config_dict=config)
Reference: Accessing Data in Neptune DB.
Kuzu

Kuzu runs in-process, so supply a path (or :memory:) for the database file:
config = {
    "graph_store": {
        "provider": "kuzu",
        "config": {
            "db": "/tmp/mem0-example.kuzu"
        }
    }
}
When using :memory:, Kuzu clears its state once the process exits. See the Kuzu documentation for advanced settings.
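That makes :memory: convenient for tests; a sketch:

from mem0 import Memory

# Sketch: an ephemeral graph for tests; all state vanishes at process exit.
test_config = {
    "graph_store": {
        "provider": "kuzu",
        "config": {"db": ":memory:"},
    }
}

memory = Memory.from_config(config_dict=test_config)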