Blend vector search with graph relationships to answer multi-hop questions.
Most AI agents use vector stores for RAG - they work great for semantic search and retrieving relevant context. But there’s a gap when queries require understanding connections between entities. Mem0 brings graph memory into the picture to fill this gap. In this cookbook, we’ll build a company knowledge base with Mem0 using both vector and graph stores, and you’ll learn when each one helps along the way.
When you add a memory to Mem0, it goes into a vector store by default. Vector stores excel at semantic search - finding memories that match the meaning of your query. Graph stores work differently: they extract entities (people, projects, teams) and the relationships between them (works_with, reports_to, member_of). This lets you answer questions that require connecting information across multiple memories. We’ll work through examples of both while building out the knowledge base.
Since we’re building a company knowledge base, let’s add some employee information:
```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-api-key")

# Add employee info
client.add("Emma is a software engineer in Seattle", user_id="company_kb")
client.add("David is a product manager in Austin", user_id="company_kb")
```
Now let’s search for Emma’s role:
```python
results = client.search("What does Emma do?", filters={"user_id": "company_kb"})
print(results['results'][0]['memory'])
```
Output:
```
Emma is a software engineer in Seattle
```
Vector search found the memory that semantically matches “What does Emma do?” and returned Emma’s role instantly. When a query asks for a fact stored directly in one memory, vector semantic search is perfect: fast and accurate.
Let’s add some information about how the team works together:
```python
client.add("Emma works with David on the mobile app redesign", user_id="company_kb")
client.add("David reports to Rachel, who manages the design team", user_id="company_kb")
```
Now we have two pieces of information stored:
Emma works with David
David reports to Rachel
Let’s try asking something that needs both pieces:
```python
results = client.search(
    "Who is Emma's teammate's manager?",
    filters={"user_id": "company_kb"}
)
for r in results['results']:
    print(r['memory'])
```
Output:
```
Emma works with David on the mobile app redesign
David reports to Rachel, who manages the design team
```
Vector search returned both memories, but it didn’t connect them. You’d need to manually figure out:
Emma’s teammate is David (from memory 1)
David’s manager is Rachel (from memory 2)
So the answer is Rachel
Vector search can’t traverse relationships. It returns relevant memories, but you must connect the dots manually. For “Who is Emma’s teammate’s manager?”, vector search gives you the pieces—not the answer. This breaks down as queries get more complex (3+ hops).
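To make that concrete, here is what “connecting the dots manually” can look like: brittle string parsing over the two returned memories. This is a toy sketch, not a recommended pattern:

```python
import re

# The two memories vector search returned in the example above
memories = [
    "Emma works with David on the mobile app redesign",
    "David reports to Rachel, who manages the design team",
]

# Hop 1: extract Emma's teammate from the first memory
teammate = re.search(r"Emma works with (\w+)", memories[0]).group(1)

# Hop 2: extract that teammate's manager from the second memory
manager = re.search(rf"{teammate} reports to (\w+)", memories[1]).group(1)

print(manager)  # Rachel
```

In practice you’d use an LLM rather than regexes, but each hop still costs you a parsing step and a follow-up query, and the chain gets more fragile with every hop.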
Let’s add the same information with graph memory enabled:
```python
client.add(
    "Emma works with David on the mobile app redesign",
    user_id="company_kb",
    enable_graph=True
)
client.add(
    "David reports to Rachel, who manages the design team",
    user_id="company_kb",
    enable_graph=True
)
```
When you set enable_graph=True, Mem0 extracts entities and relationships:
```
emma --[works_with]--> david
david --[reports_to]--> rachel
rachel --[manages]--> design_team
```
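Conceptually, answering the multi-hop question is just a walk over these edges. Here’s a toy Python sketch of the idea (an illustration, not Mem0’s internals):

```python
# Toy adjacency map mirroring the extracted relationships above.
edges = {
    ("emma", "works_with"): "david",
    ("david", "reports_to"): "rachel",
    ("rachel", "manages"): "design_team",
}

def traverse(start, *relations):
    """Follow a chain of relationships from a starting entity."""
    node = start
    for rel in relations:
        node = edges[(node, rel)]
    return node

# "Who is Emma's teammate's manager?" = one works_with hop, then one reports_to hop
print(traverse("emma", "works_with", "reports_to"))  # rachel
```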
Now the same query works differently:
```python
results = client.search(
    "Who is Emma's teammate's manager?",
    filters={"user_id": "company_kb"},
    enable_graph=True
)
print(results['results'][0]['memory'])

print("\nRelationships found:")
for rel in results.get('relations', []):
    print(f"  {rel['source']}, {rel['target']} ({rel['relationship']})")
```
Output:
```
David reports to Rachel, who manages the design team

Relationships found:
  emma, david (works_with)
  david, rachel (reports_to)
```
Graph memory returns the direct answer, “David reports to Rachel”, plus the relationship chain that produced it: Emma → works_with → David → reports_to → Rachel. No manual connecting needed.
Behind the scenes, Mem0 stores these entities and edges as a graph. Graph memory lets you discover relationships and memories that are hard to surface with a vector store alone: vector search needs your query to semantically match the stored text, while graph memory follows the connections.
Let’s build a small company knowledge base with both approaches:
```python
# Facts about individuals - vector store is fine
client.add("Emma specializes in React and TypeScript", user_id="company_kb")
client.add("David has 5 years of product management experience", user_id="company_kb")

# Relationships - use graph memory
client.add(
    "Emma and David work together on the mobile app",
    user_id="company_kb",
    enable_graph=True
)
client.add(
    "David reports to Rachel",
    user_id="company_kb",
    enable_graph=True
)
client.add(
    "Rachel runs weekly team syncs every Tuesday",
    user_id="company_kb",
    enable_graph=True
)
```
Now we can ask different types of questions:
```python
# Direct fact - vector search
results = client.search("What are Emma's skills?", filters={"user_id": "company_kb"})
print(results['results'][0]['memory'])
```
Output:
```
Emma specializes in React and TypeScript
```
```python
# Multi-hop relationship - graph search
results = client.search(
    "What meetings does Emma's project manager's boss run?",
    filters={"user_id": "company_kb"},
    enable_graph=True
)
print(results['results'][0]['memory'])
```
Output:
```
Rachel runs weekly team syncs every Tuesday
```
Graph memory connected: Emma works with David, David reports to Rachel, Rachel runs team syncs.
Enable graph memory when your queries need multi-hop traversal: org charts (who reports to whom), project teams (who collaborates), CRMs (which contacts connect to companies). For single-fact lookups, stick with vector search—it’s faster and cheaper.
Graph memory adds processing time and cost. When you call client.add() with enable_graph=True, Mem0 makes extra LLM calls to extract entities and relationships.
Cost consideration: Graph memory extraction adds ~2-3 extra LLM calls per add() operation to identify entities and relationships. Use it selectively—enable graph for organizational structure and long-term relationships, skip it for temporary notes and simple facts.
Use graph memory when the relationship traversal adds real value. For most use cases, vector search is sufficient and faster.
```python
# Long-term organizational structure - worth using graph
client.add(
    "Emma mentors two junior engineers on the frontend team",
    user_id="company_kb",
    enable_graph=True
)

# Temporary notes - skip graph, not worth the cost
client.add(
    "Emma is out sick today",
    user_id="company_kb",
    run_id="daily_notes"
)
```
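If you want to apply this rule consistently, one option is a small routing helper that decides per memory whether graph extraction is worth the extra LLM calls. The `add_memory` name and keyword heuristic below are hypothetical, just to sketch the pattern:

```python
# Hypothetical helper: enable graph extraction only for memories that
# likely encode a relationship. The keyword list is a stand-in for
# whatever heuristic (or lightweight classifier) fits your data.
RELATIONAL_KEYWORDS = ("works with", "reports to", "manages", "mentors")

def add_memory(client, text, user_id):
    use_graph = any(kw in text.lower() for kw in RELATIONAL_KEYWORDS)
    return client.add(text, user_id=user_id, enable_graph=use_graph)
```

With this, `add_memory(client, "Emma mentors Jordan", "company_kb")` enables graph extraction, while a temporary note like “Emma is out sick today” skips it.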
You can enable graph memory in two ways.

Per-call (recommended to start):
```python
client.add("Emma works with David", user_id="company_kb", enable_graph=True)
client.search("team structure", filters={"user_id": "company_kb"}, enable_graph=True)
```
Project-wide (if most of your data has relationships):
```python
client.project.update(enable_graph=True)

# Now every add uses graph automatically
client.add("Emma mentors Jordan", user_id="company_kb")
```
Vector stores handle most memory operations efficiently - semantic search works great for finding relevant information. Add graph memory when your queries need to understand how entities connect across multiple hops. The key is knowing which tool fits your query pattern: direct questions work with vectors; multi-hop relationship queries need graphs.