Overview
The `search` operation allows you to retrieve relevant memories based on a natural language query and optional filters like user ID, agent ID, categories, and more. This is the foundation of giving your agents memory-aware behavior.
Mem0 supports:
- Semantic similarity search
- Metadata filtering (with advanced logic)
- Reranking and thresholds
- Cross-agent, multi-session context resolution

Search is available in both deployment modes:
- Mem0 Platform (hosted API with full-scale features)
- Mem0 Open Source (local-first with LLM inference and a local vector DB)
Architecture

Architecture diagram illustrating the memory search process.
When you call `search`, Mem0 performs the following steps:
1. **Query Processing**: An LLM refines and optimizes your natural language query.
2. **Vector Search**: Semantic embeddings are used to find the most relevant memories using cosine similarity.
3. **Filtering & Ranking**: Logical and comparison-based filters are applied. Memories are scored, filtered, and optionally reranked.
4. **Results Delivery**: Relevant memories are returned with associated metadata and timestamps.
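The steps above can be sketched in miniature with plain Python: a cosine-similarity search over stored embeddings, followed by session-scope filtering and a score threshold. The 2-D "embeddings" and the `search` helper here are illustrative stand-ins, not Mem0's internals.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_vec, memories, user_id=None, threshold=0.0, top_k=5):
    """Score memories by cosine similarity, filter by user_id and a
    relevance threshold, then return the top_k hits, best first."""
    hits = []
    for mem in memories:
        if user_id is not None and mem["user_id"] != user_id:
            continue  # session-scope filtering
        score = cosine(query_vec, mem["embedding"])
        if score >= threshold:
            hits.append({**mem, "score": score})
    hits.sort(key=lambda h: h["score"], reverse=True)
    return hits[:top_k]

# Toy store: 2-D vectors standing in for real model embeddings.
memories = [
    {"memory": "likes sushi", "user_id": "alice", "embedding": [1.0, 0.0]},
    {"memory": "plays chess", "user_id": "alice", "embedding": [0.0, 1.0]},
    {"memory": "likes ramen", "user_id": "bob", "embedding": [0.9, 0.1]},
]
results = search([1.0, 0.2], memories, user_id="alice", threshold=0.5)
```

Only the sushi memory survives: bob's memories are filtered out by `user_id`, and the chess memory falls below the threshold.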
Example: Mem0 Platform
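A Platform-style call might look like the following. `MemoryClient` and the `search` keyword names follow Mem0's hosted client, but treat the exact signature as an assumption for your installed version; the `build_search_params` helper is hypothetical and exists only to give the example a network-free core.

```python
def build_search_params(query, user_id, top_k=5, threshold=None, rerank=True):
    """Assemble keyword arguments for a hosted-API search call.
    (Hypothetical helper; only this dict-building runs locally.)"""
    params = {"query": query, "user_id": user_id, "top_k": top_k, "rerank": rerank}
    if threshold is not None:
        params["threshold"] = threshold
    return params

params = build_search_params("What does Alice like to eat?", user_id="alice", threshold=0.3)

# Against the hosted API (requires MEM0_API_KEY in the environment):
#   from mem0 import MemoryClient
#   client = MemoryClient()
#   results = client.search(**params)
```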
Example: Mem0 Open Source
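For the open-source library, a local-first setup might be configured like this. The provider name and config keys follow mem0's documented config pattern but can vary by version, so treat them as assumptions; only the config dict executes here.

```python
# Hypothetical local-first configuration: a local vector DB; an LLM for
# query processing would be configured alongside it.
config = {
    "vector_store": {
        "provider": "qdrant",
        "config": {"path": "/tmp/qdrant_mem0"},
    },
}

# With mem0 installed and an LLM configured:
#   from mem0 import Memory
#   m = Memory.from_config(config)
#   results = m.search("What does Alice like to eat?", user_id="alice")
```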
Using Filters
Filters help narrow down search results. Common use cases include scoping a search to a session context (`user_id`, `agent_id`, `run_id`) or to specific categories and metadata.

Tips for Better Search
- Use natural language: Mem0 understands intent, so describe what you're looking for naturally
- Scope with session IDs: Always provide at least `user_id` to scope search to relevant memories
- Combine filters: Use AND/OR logic to create precise queries (Platform)
- Consider wildcard filters: Use wildcard filters (e.g., `run_id: "*"`) for broader matches
- Tune parameters: Adjust `top_k` for result count, `threshold` for relevance cutoff
- Enable reranking: Use `rerank=True` (default) when you have a reranker configured
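To make the filter and tuning tips concrete, here is a hedged sketch: the AND/OR filter shape and the wildcard follow the Platform's documented filter style (exact operator spellings may vary by version), and the `threshold`/`top_k` cutoff is reproduced locally on toy scores so its effect is visible without an API key.

```python
# A combined session-context filter, built as plain data; it would be
# passed as the `filters` argument of a Platform search call (assumption).
filters = {
    "AND": [
        {"user_id": "alice"},
        {"run_id": "*"},  # wildcard: match any run
    ]
}

def apply_cutoffs(scored_hits, threshold=0.3, top_k=2):
    """What threshold and top_k do to a scored result list:
    drop low-relevance hits, then keep only the best top_k."""
    kept = [h for h in scored_hits if h["score"] >= threshold]
    kept.sort(key=lambda h: h["score"], reverse=True)
    return kept[:top_k]

hits = [
    {"memory": "likes sushi", "score": 0.92},
    {"memory": "plays chess", "score": 0.41},
    {"memory": "hates cilantro", "score": 0.12},  # below threshold, dropped
]
results = apply_cutoffs(hits)
```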