Add long-term memory to Hermes Agent — a self-improving AI agent CLI by Nous Research. Hermes has a pluggable memory system, and Mem0 is one of the supported providers. Once enabled, Mem0 automatically learns facts from your conversations and surfaces relevant ones before each turn — all without slowing down the chat.

Overview

Hermes runs a built-in memory system (file-based MEMORY.md and USER.md) alongside one external provider. When Mem0 is active, it works additively with the built-in system at three key moments in every conversation turn:

1. Before the Agent Responds (Prefetch)

When you send a message, Hermes checks if it already has cached Mem0 search results from the previous turn. If so, those memories are injected into the system prompt so the LLM can see them. This is zero-latency — no waiting for an API call.
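The cache-then-inject step can be sketched in a few lines of Python. This is an illustrative sketch, not Hermes internals: `MEMORY_CACHE` and `build_system_prompt` are hypothetical names standing in for the agent's cached search results and prompt assembly.

```python
# Illustrative sketch: inject previously cached Mem0 search results into
# the system prompt without making any API call on this turn.
# MEMORY_CACHE stands in for the cache filled by the prior turn's
# background prefetch (see step 3 below).
MEMORY_CACHE: list[str] = []

def build_system_prompt(base_prompt: str) -> str:
    """Prepend cached memories, if any; zero latency because the cache
    was populated in the background during the previous turn."""
    if not MEMORY_CACHE:
        return base_prompt
    memories = "\n".join(f"- {m}" for m in MEMORY_CACHE)
    return f"{base_prompt}\n\nRelevant memories:\n{memories}"

# Simulate a cache left behind by the previous turn's prefetch.
MEMORY_CACHE.extend(["user prefers Python", "user works at Acme Corp"])
prompt = build_system_prompt("You are Hermes.")
```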

2. After the Agent Responds (Sync)

Once the LLM finishes responding, Hermes sends the (user message, assistant response) pair to Mem0’s API in a background thread. Mem0’s server-side LLM automatically extracts facts (e.g., “user prefers Python”, “user works at Acme Corp”) — you don’t have to tell it what to remember.
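A minimal sketch of this sync step, assuming the message shape Mem0's add API expects (a list of role/content dicts). The helper names are illustrative; the real Mem0 call is shown in a comment rather than executed, since it needs the mem0ai package and an API key.

```python
import threading

def build_mem0_messages(user_msg: str, assistant_msg: str) -> list[dict]:
    """Shape the (user message, assistant response) pair for Mem0's add API."""
    return [
        {"role": "user", "content": user_msg},
        {"role": "assistant", "content": assistant_msg},
    ]

def sync_turn(user_msg: str, assistant_msg: str) -> list[dict]:
    messages = build_mem0_messages(user_msg, assistant_msg)
    # With mem0ai installed and MEM0_API_KEY set, the real call would be:
    #   from mem0 import MemoryClient
    #   MemoryClient().add(messages, user_id="hermes-user", agent_id="hermes")
    # Mem0's server-side LLM then extracts facts from this exchange.
    return messages

# Run the sync in a daemon thread so the chat loop never blocks on it.
t = threading.Thread(
    target=sync_turn,
    args=("I mostly write Python these days.", "Noted!"),
    daemon=True,
)
t.start()
t.join()
```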

3. Background Prefetch for Next Turn

At the same time as sync, Hermes kicks off a background search on Mem0 to pre-load relevant memories for the next turn. By the time you type your next message, the memories are already cached.
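Steps 1 and 3 together form a simple prefetch-and-cache loop, which might look like the following sketch. The lock-guarded module-level cache and the `_fake_mem0_search` stub are illustrative; in the real flow the stub would be a Mem0 search call.

```python
import threading

_cache_lock = threading.Lock()
_prefetched: list[str] = []

def _fake_mem0_search(query: str) -> list[str]:
    # Stand-in for a real Mem0 search such as
    # client.search(query, user_id="hermes-user"); returns memory texts.
    return [f"memory relevant to: {query}"]

def prefetch_for_next_turn(query: str) -> threading.Thread:
    """Search in a background daemon thread and cache results for the
    next turn, so the next prompt build pays no API latency."""
    def worker():
        results = _fake_mem0_search(query)
        with _cache_lock:
            _prefetched.clear()
            _prefetched.extend(results)
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t

def pop_cached_memories() -> list[str]:
    """Zero-latency read used at the start of the next turn."""
    with _cache_lock:
        results = list(_prefetched)
        _prefetched.clear()
        return results
```

By the time the user sends the next message, `pop_cached_memories()` returns the results without any network round trip.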

Agent Tools

When Mem0 is active, the LLM gets three extra tools it can call during conversations:
| Tool | Description |
| --- | --- |
| `mem0_profile` | Fetch all stored memories about the user |
| `mem0_search` | Semantic search through memories (supports optional reranking via `rerank` and `top_k` parameters) |
| `mem0_conclude` | Store a specific fact verbatim — uses `infer=False` so no server-side LLM extraction happens |
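A hedged sketch of the Mem0 Platform calls these tools plausibly wrap; the exact Hermes internals may differ. `client` is a `mem0.MemoryClient` (or any object with the same methods), and the default IDs mirror the configuration table below.

```python
# Each function maps one agent tool onto the corresponding Mem0 call.
# Illustrative only; Hermes' real tool handlers may differ.

def mem0_profile(client, user_id="hermes-user"):
    """Fetch all stored memories about the user."""
    return client.get_all(user_id=user_id)

def mem0_search(client, query, user_id="hermes-user", top_k=5, rerank=True):
    """Semantic search; rerank and top_k mirror the tool's optional parameters."""
    return client.search(query, user_id=user_id, top_k=top_k, rerank=rerank)

def mem0_conclude(client, fact, user_id="hermes-user"):
    """Store a fact verbatim; infer=False skips server-side LLM extraction."""
    return client.add([{"role": "user", "content": fact}],
                      user_id=user_id, infer=False)
```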

Installation

Install Hermes Agent:
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
source ~/.bashrc
The mem0ai Python package is automatically installed when you enable the Mem0 provider — no manual pip install needed.

Setup

Option 1: Setup Wizard

hermes memory setup
Select mem0 as the provider and enter your Mem0 API key when prompted. The wizard writes your config to ~/.hermes/mem0.json.
Get your API key from app.mem0.ai.

Option 2: Manual Configuration

hermes config set memory.provider mem0
echo "MEM0_API_KEY=your-api-key" >> ~/.hermes/.env
Then in your config.yaml:
memory:
  provider: mem0
That’s it — Mem0 runs automatically from this point.

Configuration Options

Configuration is stored in ~/.hermes/mem0.json. Values can also be set via environment variables.
| Key | Env Variable | Default | Description |
| --- | --- | --- | --- |
| `api_key` | `MEM0_API_KEY` | (none) | Required. Mem0 Platform API key |
| `user_id` | `MEM0_USER_ID` | `hermes-user` | User identifier for scoping memories |
| `agent_id` | `MEM0_AGENT_ID` | `hermes` | Agent identifier |
| `rerank` | (none) | `true` | Enable reranking for memory recall |
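Assuming the file keys mirror the table above, a populated ~/.hermes/mem0.json might look like this (the setup wizard writes it for you, so hand-editing is optional):

```json
{
  "api_key": "your-api-key",
  "user_id": "hermes-user",
  "agent_id": "hermes",
  "rerank": true
}
```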

Reliability

  • Circuit Breaker — If Mem0’s API fails 5 times in a row, Hermes stops calling it for 2 minutes, then retries. The agent keeps working fine without memory during that time.
  • Non-blocking — All Mem0 API calls happen in background daemon threads. A slow or failed API call never blocks your conversation.
  • Thread-safe — The Mem0 client uses lazy initialization with locking, safe for concurrent access.

Key Features

  1. Zero-Latency Recall — Memories are prefetched in the background and cached, ready before you type
  2. Server-side Extraction — Mem0’s API automatically extracts and deduplicates facts from each exchange
  3. Non-blocking — All API calls run in background daemon threads
  4. Fault Tolerant — Circuit breaker ensures the agent works even if Mem0 is temporarily unreachable
  5. Additive Memory — Works alongside Hermes’ built-in file-based memory system (MEMORY.md, USER.md)

OpenClaw Integration

Add memory to OpenClaw agents with auto-recall and auto-capture

Mem0 Platform

Get your API key and explore the Mem0 dashboard