Search Memory
Retrieve relevant memories from Mem0 using powerful semantic and filtered search capabilities.
Overview
The `search` operation allows you to retrieve relevant memories based on a natural language query and optional filters like user ID, agent ID, categories, and more. This is the foundation of giving your agents memory-aware behavior.
Mem0 supports:
- Semantic similarity search
- Metadata filtering (with advanced logic)
- Reranking and thresholds
- Cross-agent, multi-session context resolution
This applies to both:
- Mem0 Platform (hosted API with full-scale features)
- Mem0 Open Source (local-first with LLM inference and local vector DB)
Architecture
Architecture diagram illustrating the memory search process.
The search flow follows these steps:
1. **Query Processing**: An LLM refines and optimizes your natural language query.
2. **Vector Search**: Semantic embeddings are used to find the most relevant memories using cosine similarity.
3. **Filtering & Ranking**: Logical and comparison-based filters are applied; memories are scored, filtered, and optionally reranked.
4. **Results Delivery**: Relevant memories are returned with associated metadata and timestamps.
Example: Mem0 Platform
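A minimal Python sketch of a hosted-platform search, assuming the `mem0` SDK is installed and a `MEM0_API_KEY` environment variable is set. The IDs (`alex`, `travel-assistant`) and filter values are illustrative placeholders, and the exact response shape can vary between SDK versions, so consult the Search Memory API Reference for the authoritative signature.

```python
import os

from mem0 import MemoryClient

# Assumes a Mem0 Platform API key in the MEM0_API_KEY environment variable.
client = MemoryClient(api_key=os.environ["MEM0_API_KEY"])

# Natural-language query plus v2 filters scoped to a specific user and agent.
# "alex" and "travel-assistant" are placeholder IDs for illustration only.
results = client.search(
    query="Where does the user want to travel next?",
    version="v2",
    filters={
        "AND": [
            {"user_id": "alex"},
            {"agent_id": "travel-assistant"},
        ]
    },
)

# Each result typically carries the memory text, a relevance score, and metadata.
for memory in results:
    print(memory.get("memory"), memory.get("score"))
```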
Example: Mem0 Open Source
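A corresponding open-source sketch, assuming the default local-first configuration (a local vector store plus an LLM/embedder key such as `OPENAI_API_KEY` in the environment). The stored text and user ID are illustrative; depending on your installed version, `search` may return a plain list or a dict with a `"results"` key.

```python
from mem0 import Memory

# Default config: local vector store, LLM and embedder resolved from the
# environment (e.g. OPENAI_API_KEY). Use Memory.from_config(...) to customize.
memory = Memory()

# Store something first so the search has material to match against.
memory.add("I'm vegetarian and allergic to peanuts.", user_id="alex")

# Semantic search scoped to a single user.
results = memory.search(
    query="What food should I avoid ordering for this user?",
    user_id="alex",
    limit=5,
)

# Handle both return shapes: a plain list or {"results": [...]}.
items = results.get("results", results) if isinstance(results, dict) else results
for item in items:
    print(item["memory"], item.get("score"))
```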
Tips for Better Search
- Use descriptive natural queries (Mem0 can interpret intent)
- Apply filters for scoped, faster lookups
- Use `version: "v2"` for enhanced results
- Consider wildcard filters (e.g., `run_id: "*"`) for broader matches
- Tune with `top_k`, `threshold`, or `rerank` if needed (see the sketch after this list)
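As a rough sketch of how these knobs combine on the Platform client; the parameter names mirror the tips above, and the query, user ID, and threshold value are illustrative, so confirm exact names and availability in the Search Memory API Reference.

```python
from mem0 import MemoryClient

client = MemoryClient()  # assumes MEM0_API_KEY is set in the environment

# v2 search with a wildcard run_id filter plus result tuning.
results = client.search(
    query="recent conversations about travel plans",
    version="v2",
    filters={
        "AND": [
            {"user_id": "alex"},
            {"run_id": "*"},   # wildcard: match memories from any session
        ]
    },
    top_k=10,        # return at most 10 memories
    threshold=0.3,   # drop low-similarity matches
    rerank=True,     # apply the reranking stage described above
)
```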
More Details
For the full list of filter logic, comparison operators, and optional search parameters, see the Search Memory API Reference.
Need help?
If you have any questions, please feel free to reach out to us.