Overview
The `search` operation allows you to retrieve relevant memories based on a natural language query and optional filters such as user ID, agent ID, categories, and more. This is the foundation of memory-aware agent behavior.
Mem0 supports:
- Semantic similarity search
- Metadata filtering (with advanced logic)
- Reranking and score thresholds
- Cross-agent, multi-session context resolution

Search is available in both deployments:
- Mem0 Platform (hosted API with full-scale features)
- Mem0 Open Source (local-first, with your own LLM inference and a local vector DB)
Architecture

*Architecture diagram illustrating the memory search process.*
- **Query Processing**: An LLM refines and optimizes your natural language query.
- **Vector Search**: Semantic embeddings are used to find the most relevant memories via cosine similarity.
- **Filtering & Ranking**: Logical and comparison-based filters are applied; memories are scored, filtered, and optionally reranked.
- **Results Delivery**: Relevant memories are returned with associated metadata and timestamps.
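The pipeline above can be sketched end to end as a self-contained toy. This is illustrative only: the embedding vectors, memory store, and scoring are placeholders, not Mem0 internals.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy memory store: (embedding, metadata) pairs. Real embeddings come
# from an embedding model; these 3-d vectors are made up.
memories = [
    ([0.9, 0.1, 0.0], {"text": "Alice is vegetarian", "user_id": "alice"}),
    ([0.1, 0.9, 0.0], {"text": "Bob likes hiking", "user_id": "bob"}),
    ([0.8, 0.2, 0.1], {"text": "Alice avoids dairy", "user_id": "alice"}),
]

def search(query_vec, user_id=None, top_k=5, threshold=0.0):
    # Filtering: keep only memories matching the metadata filter.
    hits = [(cosine(query_vec, v), m) for v, m in memories
            if user_id is None or m["user_id"] == user_id]
    # Ranking: drop low scores, sort by similarity, truncate to top_k.
    hits = [(s, m) for s, m in hits if s >= threshold]
    hits.sort(key=lambda sm: sm[0], reverse=True)
    return hits[:top_k]

results = search([1.0, 0.0, 0.0], user_id="alice", top_k=2, threshold=0.5)
print([m["text"] for _, m in results])
```

Running this prints the two Alice memories, most similar first, with Bob's memory excluded by the `user_id` filter.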
Example: Mem0 Platform
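A minimal hosted-platform sketch: the client call is commented out because it needs an API key, and the argument names follow the Platform SDK but should be treated as an assumption rather than a verified signature.

```python
# Arguments for a basic scoped search against the hosted Mem0 Platform.
search_args = {
    "query": "What are Alice's dietary preferences?",
    "user_id": "alice",   # scope the lookup to a single user
    "top_k": 5,           # return at most five memories
}

# With the hosted SDK, the call would look roughly like:
# from mem0 import MemoryClient
# client = MemoryClient(api_key="your-api-key")
# results = client.search(**search_args)
# for r in results:
#     print(r)  # each result carries memory text, score, and metadata
```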
Example: Mem0 Open Source
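A local-first sketch for the open-source package: the config keys mirror the OSS documentation but are illustrative, and the calls are commented out since they require a running vector store and model credentials.

```python
# Illustrative configuration for a local Mem0 instance (provider names
# and config keys are assumptions; check the OSS docs for your setup).
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"collection_name": "memories"},
    },
    "llm": {"provider": "openai", "config": {"model": "gpt-4o-mini"}},
}

# With the package installed and credentials set, roughly:
# from mem0 import Memory
# m = Memory.from_config(config)
# results = m.search("What does Alice like to eat?", user_id="alice")
```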
Tips for Better Search
- Use descriptive natural queries (Mem0 can interpret intent)
- Apply filters for scoped, faster lookup
- Use `version: "v2"` for enhanced results
- Consider wildcard filters (e.g., `run_id: "*"`) for broader matches
- Tune with `top_k`, `threshold`, or `rerank` if needed