Skip to main content
Build a multi-agent swarm simulation with graph-powered memory using Mem0 OSS and MiroFish patterns. MiroFish is a graph-centric system — it extracts entities and relationships from documents, builds a knowledge graph, and queries it throughout its pipeline. Mem0’s Graph Memory is a natural replacement for its Zep Cloud integration.
This cookbook demonstrates the core memory patterns using a simplified simulation. MiroFish’s actual architecture uses a factory pattern (memory_factory.py) with abstract providers, batch buffering with retries in ZepGraphMemoryUpdater, and IPC-based agent interviews. This cookbook focuses on the Mem0 API integration points — wrap these calls in your own retry/batch logic for production use.

Overview

This cookbook implements a Housing Policy Prediction Simulation following MiroFish’s five-stage workflow:
  1. Graph Building — Ingest seed documents, extract entities and relationships
  2. Environment Setup — Query the knowledge graph to enrich agent profiles
  3. Simulation — Track agent interactions with per-agent memory isolation
  4. Report Generation — Semantic search + graph traversal for analysis
  5. Deep Interaction — Query post-simulation memory and relationships (MiroFish also supports live agent interviews via IPC — not covered here)
Three agents debate a housing policy reform:
  • Mayor Chen — Policy advocate pushing for zoning reform
  • Wang (Homeowner) — Opposition leader organizing resistance
  • Professor Li — Academic providing data-driven analysis

Prerequisites

pip install "mem0ai[graph]"
You need a graph backend. Choose one:
BackendSetupBest for
Neo4j Aura (free tier)Sign up, get Bolt URIProduction, closest to Zep
Neo4j Dockerdocker run -p 7687:7687 -e NEO4J_AUTH=neo4j/password neo4j:5Local development
Kuzu (embedded)No setup needed — runs in-processQuick testing, zero dependencies
export OPENAI_API_KEY="sk-..."

# Option A: Neo4j Docker (local development)
docker run -p 7687:7687 -e NEO4J_AUTH=neo4j/password neo4j:5
export NEO4J_URL="neo4j://localhost:7687"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD="password"

# Option B: Neo4j Aura (production — free tier available)
export NEO4J_URL="neo4j+s://<your-instance>.databases.neo4j.io"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD="your-aura-password"

# Option C: Kuzu (zero setup — auto-detected when NEO4J_URL is not set)
# No exports needed

Complete Implementation

"""
MiroFish Swarm Prediction Simulation with Mem0 Graph Memory

MiroFish uses Zep Cloud as its knowledge graph backend. This implementation
replaces Zep with Mem0 OSS Graph Memory, which provides:
- Automatic entity extraction from text
- Relationship mining (source → relationship → destination triples)
- Combined vector + graph search returning memories AND relations
- Per-agent isolation via run_id
- Self-hosted with no node caps

Follows MiroFish's 5-stage pipeline:
1. Graph Building    - Ingest seed documents, extract entities
2. Environment Setup - Query graph to enrich agent profiles
3. Simulation        - Track agent actions with per-agent isolation
4. Report Generation - Semantic + graph search for analysis
5. Deep Interaction  - Query post-simulation knowledge graph

Run:
    export OPENAI_API_KEY="sk-..."
    export NEO4J_URL="neo4j://localhost:7687"
    export NEO4J_USERNAME="neo4j"
    export NEO4J_PASSWORD="password"
    python mirofish_swarm_memory.py
"""

import os
import time
from mem0 import Memory


# ======================================================================
# MiroFish Agent Action Types (matches OASIS simulation output)
# ======================================================================

# Twitter actions
TWITTER_ACTIONS = [
    "CREATE_POST", "LIKE_POST", "REPOST", "FOLLOW",
    "DO_NOTHING", "QUOTE_POST",
]

# Reddit actions (superset — includes moderation + discovery)
REDDIT_ACTIONS = [
    "LIKE_POST", "DISLIKE_POST", "CREATE_POST", "CREATE_COMMENT",
    "LIKE_COMMENT", "DISLIKE_COMMENT", "SEARCH_POSTS", "SEARCH_USER",
    "TREND", "REFRESH", "DO_NOTHING", "FOLLOW", "MUTE",
]

# Combined (DO_NOTHING is skipped during memory storage)
MIROFISH_ACTIONS = list(set(TWITTER_ACTIONS + REDDIT_ACTIONS) - {"DO_NOTHING"})


# ======================================================================
# Graph Memory Configuration
# ======================================================================

def build_config():
    """Build Mem0 config with Graph Memory.

    Uses Neo4j if credentials are set, otherwise falls back to Kuzu (embedded).
    """
    neo4j_url = os.environ.get("NEO4J_URL")

    # Shared config for LLM, embedder, and vector store
    base = {
        "llm": {
            "provider": "openai",
            "config": {"model": "gpt-4o-mini", "temperature": 0.1}
        },
        "embedder": {
            "provider": "openai",
            "config": {"model": "text-embedding-3-small", "embedding_dims": 1536}
        },
        "vector_store": {
            "provider": "qdrant",
            "config": {
                "collection_name": "mirofish",
                "embedding_model_dims": 1536,
            }
        },
    }

    custom_prompt = (
        "Extract all people, organizations, policies, locations, "
        "and their relationships. Capture support/opposition stances, "
        "affiliations, and quantitative claims."
    )

    if neo4j_url:
        base["graph_store"] = {
            "provider": "neo4j",
            "config": {
                "url": neo4j_url,
                "username": os.environ.get("NEO4J_USERNAME", "neo4j"),
                "password": os.environ.get("NEO4J_PASSWORD", "password"),
            },
            "custom_prompt": custom_prompt,
        }
    else:
        # Fallback: Kuzu embedded (no external services needed)
        print("  NEO4J_URL not set — using Kuzu (embedded) graph store")
        base["graph_store"] = {
            "provider": "kuzu",
            "config": {"db": "/tmp/mirofish_graph.kuzu"},
            "custom_prompt": custom_prompt,
        }

    return base


# ======================================================================
# Simulation Engine
# ======================================================================

class MiroFishSimulation:
    """
    Multi-agent simulation with graph-powered memory.

    Uses Mem0 Graph Memory to replace MiroFish's Zep Cloud integration:
    - Entities and relationships are extracted automatically from text
    - search() returns both semantic memories AND graph relations
    - Per-agent isolation via run_id
    - Project isolation via user_id
    """

    def __init__(self, project_id: str, config: dict):
        self.project_id = project_id
        self.memory = Memory.from_config(config)
        self.stats = {
            "documents_ingested": 0,
            "activities_recorded": 0,
            "rounds_completed": 0,
        }

    # ------------------------------------------------------------------
    # Stage 1: Graph Building — Seed Document Ingestion
    # ------------------------------------------------------------------

    def ingest_documents(self, documents: list[str]):
        """Ingest seed documents and extract entities + relationships.

        MiroFish equivalent: GraphBuilderService.build_graph()
        Zep equivalent: graph.add_batch() with episode polling

        With Mem0 Graph Memory, each document is processed by the LLM
        to extract entities (people, orgs, policies) and relationships
        (supports, opposes, filed). These become nodes and edges in the
        graph store, alongside vector embeddings for semantic search.
        """
        print("  Ingesting documents and building knowledge graph...")
        for i, doc in enumerate(documents):
            result = self.memory.add(
                [{"role": "user", "content": doc}],
                user_id=self.project_id,
                metadata={"stage": "graph_building", "source": "seed_document", "chunk_index": i}
            )
            # Graph Memory returns extracted relations
            relations = result.get("relations", {})
            added = relations.get("added_entities", [])
            if added:
                print(f"    Doc {i}: extracted {len(added)} entities/relations")

        self.stats["documents_ingested"] = len(documents)
        print(f"  Ingested {len(documents)} documents")

    # ------------------------------------------------------------------
    # Stage 2: Environment Setup — Agent Profile Enrichment
    # ------------------------------------------------------------------

    def enrich_agent_profile(self, agent_name: str, persona_query: str) -> dict:
        """Search memory + graph for context relevant to an agent's persona.

        MiroFish equivalent: OasisProfileGenerator using graph.search()

        Returns both semantic memories and graph relations that can be
        injected into the agent's system prompt.
        """
        results = self.memory.search(
            persona_query,
            user_id=self.project_id,
            limit=10
        )
        facts = [r["memory"] for r in results.get("results", [])]
        relations = results.get("relations", [])

        print(f"  {agent_name}: {len(facts)} facts, {len(relations)} relations")
        return {"facts": facts, "relations": relations}

    # ------------------------------------------------------------------
    # Stage 3: Simulation — Agent Activity Tracking
    # ------------------------------------------------------------------

    def record_action(self, agent_id: str, agent_name: str,
                      action_type: str, content: str,
                      platform: str, round_num: int):
        """Record a single agent action as a memory with graph extraction.

        MiroFish equivalent: ZepGraphMemoryUpdater.add_activity()
        Zep equivalent: graph.add(type="text", data=episode_text)

        Agent memories use run_id to group by agent (no assistant
        memories involved). Graph Memory extracts entities/relationships
        from the action content automatically.
        """
        formatted = f"{agent_name} [{action_type}]: {content}"

        self.memory.add(
            [{"role": "user", "content": formatted}],
            run_id=agent_id,
            metadata={
                "action_type": action_type,
                "platform": platform,
                "round": round_num,
                "agent_name": agent_name,
            }
        )
        self.stats["activities_recorded"] += 1

    def run_round(self, round_num: int, activities: list[tuple]):
        """Execute one simulation round."""
        print(f"  Round {round_num}: {len(activities)} actions")
        for agent_id, agent_name, action_type, content, platform in activities:
            self.record_action(agent_id, agent_name, action_type, content, platform, round_num)
        self.stats["rounds_completed"] = max(self.stats["rounds_completed"], round_num)

    def recall_agent_memory(self, agent_id: str, query: str) -> dict:
        """Agent recalls its own memories mid-simulation.

        Searches by run_id to match the scope used during add().
        """
        results = self.memory.search(
            query,
            run_id=agent_id,
            limit=5
        )
        return {
            "memories": [r["memory"] for r in results.get("results", [])],
            "relations": results.get("relations", []),
        }

    # ------------------------------------------------------------------
    # Stage 4: Report Generation — Semantic + Graph Retrieval
    # ------------------------------------------------------------------

    def quick_search(self, query: str, limit: int = 10) -> dict:
        """Semantic search + graph relations across all agents.

        MiroFish equivalent: ZepToolsService.quick_search()
        Returns both vector-matched memories and related graph triples.
        """
        results = self.memory.search(
            query,
            user_id=self.project_id,
            limit=limit
        )
        return {
            "memories": [r["memory"] for r in results.get("results", [])],
            "relations": results.get("relations", []),
        }

    def panorama_search(self) -> dict:
        """Retrieve all memories + all graph relations.

        MiroFish equivalent: ZepToolsService.panorama_search()
        Returns the complete knowledge state for report generation.
        """
        results = self.memory.get_all(user_id=self.project_id)
        return {
            "memories": [r["memory"] for r in results.get("results", [])],
            "relations": results.get("relations", []),
        }

    def agent_search(self, agent_id: str, query: str, limit: int = 10) -> dict:
        """Search within a single agent's memory space."""
        results = self.memory.search(
            query,
            run_id=agent_id,
            limit=limit
        )
        return {
            "memories": [r["memory"] for r in results.get("results", [])],
            "relations": results.get("relations", []),
        }

    # ------------------------------------------------------------------
    # Cleanup
    # ------------------------------------------------------------------

    def cleanup(self):
        """Delete all memories and graph data for this simulation."""
        self.memory.delete_all(user_id=self.project_id)
        print(f"  Cleaned up all memories for {self.project_id}")


# ======================================================================
# Run the full 5-stage pipeline
# ======================================================================

def main():
    project_id = f"mirofish_housing_{int(time.time())}"
    config = build_config()
    sim = MiroFishSimulation(project_id=project_id, config=config)

    # ==================================================================
    # STAGE 1: Graph Building — Ingest seed documents
    # ==================================================================
    print("=" * 60)
    print("STAGE 1: Graph Building")
    print("=" * 60)

    sim.ingest_documents([
        "The city council proposed a new zoning reform allowing higher "
        "density housing in suburban areas. Mayor Chen expressed strong "
        "support, citing a 40% housing shortage affecting young professionals. "
        "The reform would allow buildings up to 8 stories in previously "
        "restricted 3-story zones.",

        "Local homeowners association president Wang opposes the reform, "
        "arguing it will decrease property values by 15-20%. The association "
        "represents 5,000 homeowners in the affected districts. Wang has "
        "organized three community meetings and collected 2,000 signatures.",

        "Professor Li from Beijing University published research showing "
        "similar reforms in Shenzhen led to 15% price drops in existing "
        "homes but created 30% more affordable housing units within 3 years. "
        "The study covered 12 districts and 50,000 housing units.",
    ])

    # ==================================================================
    # STAGE 2: Environment Setup — Enrich agent profiles
    # ==================================================================
    print("\n" + "=" * 60)
    print("STAGE 2: Environment Setup")
    print("=" * 60)

    mayor_context = sim.enrich_agent_profile(
        "Mayor Chen",
        "Mayor Chen housing reform zoning policy"
    )
    wang_context = sim.enrich_agent_profile(
        "Wang",
        "Wang homeowner opposition property values petition"
    )
    li_context = sim.enrich_agent_profile(
        "Professor Li",
        "Professor Li research housing data Shenzhen"
    )

    print("\n  Example profile context for Mayor Chen:")
    for fact in mayor_context["facts"][:3]:
        print(f"    Fact: {fact}")
    for rel in mayor_context["relations"][:3]:
        src = rel.get("source", "?")
        edge = rel.get("relationship", "?")
        dst = rel.get("destination", rel.get("target", "?"))
        print(f"    Relation: {src} --[{edge}]--> {dst}")

    # ==================================================================
    # STAGE 3: Simulation — Run agent interactions
    # ==================================================================
    print("\n" + "=" * 60)
    print("STAGE 3: Simulation")
    print("=" * 60)

    # Round 1: Opening statements
    sim.run_round(1, [
        ("mayor_chen", "Mayor Chen", "CREATE_POST",
         "This reform will create 10,000 new housing units by 2028. "
         "Young families deserve affordable homes. #HousingForAll",
         "twitter"),

        ("wang_homeowner", "Wang", "CREATE_POST",
         "Our property values will plummet! The council ignores the "
         "voices of 5,000 homeowners. #StopTheReform",
         "twitter"),

        ("prof_li", "Professor Li", "CREATE_POST",
         "New analysis: Shenzhen zoning data shows net positive outcomes "
         "after 3 years. Short-term pain, long-term gain for housing equity.",
         "twitter"),
    ])

    # Round 2: Debate and interaction
    sim.run_round(2, [
        ("wang_homeowner", "Wang", "CREATE_COMMENT",
         "Replied to Professor Li: 'Shenzhen is a tier-1 city with "
         "completely different dynamics. Your comparison is misleading.'",
         "twitter"),

        ("mayor_chen", "Mayor Chen", "LIKE_POST",
         "Liked Professor Li's post about Shenzhen housing data.",
         "twitter"),

        ("prof_li", "Professor Li", "CREATE_COMMENT",
         "Replied to Wang: 'The methodology controls for city tier "
         "and population density. I invite you to review the full dataset.'",
         "twitter"),

        ("mayor_chen", "Mayor Chen", "CREATE_POST",
         "Data from @ProfLi confirms what we've been saying: zoning "
         "reform works. Let's move forward with evidence, not fear.",
         "twitter"),
    ])

    # Round 3: Escalation and platform expansion
    sim.run_round(3, [
        ("wang_homeowner", "Wang", "CREATE_POST",
         "Filing formal petition with 3,000 signatures against the "
         "zoning reform. Council meeting next Tuesday. All homeowners "
         "must attend!",
         "reddit"),

        ("mayor_chen", "Mayor Chen", "CREATE_POST",
         "Announcing public town hall on zoning reform this Saturday. "
         "All voices welcome. Data-driven decisions benefit everyone.",
         "twitter"),

        ("prof_li", "Professor Li", "CREATE_POST",
         "Published full dataset and methodology on my university page. "
         "Transparency is essential for informed public debate.",
         "twitter"),

        ("wang_homeowner", "Wang", "FOLLOW",
         "Followed @MayorChen to monitor policy updates.",
         "twitter"),
    ])

    # Mid-simulation: agent recalls own memory + graph
    print("\n  Mid-simulation recall for Mayor Chen:")
    mayor_recall = sim.recall_agent_memory(
        "mayor_chen",
        "What positions have I taken on housing reform?"
    )
    for mem in mayor_recall["memories"]:
        print(f"    Memory: {mem}")
    for rel in mayor_recall["relations"][:3]:
        src = rel.get("source", "?")
        edge = rel.get("relationship", "?")
        dst = rel.get("destination", rel.get("target", "?"))
        print(f"    Relation: {src} --[{edge}]--> {dst}")

    # ==================================================================
    # STAGE 4: Report Generation — Retrieve memories + graph for analysis
    # ==================================================================
    print("\n" + "=" * 60)
    print("STAGE 4: Report Generation")
    print("=" * 60)

    # Quick search: targeted query
    print("\n  Quick Search: 'opposition to housing reform'")
    opposition = sim.quick_search("opposition to housing reform", limit=5)
    for mem in opposition["memories"]:
        print(f"    Memory: {mem}")
    for rel in opposition["relations"][:3]:
        src = rel.get("source", "?")
        edge = rel.get("relationship", "?")
        dst = rel.get("destination", rel.get("target", "?"))
        print(f"    Relation: {src} --[{edge}]--> {dst}")

    # Agent-specific search
    print("\n  Agent Search: Wang's activities")
    wang_activities = sim.agent_search("wang_homeowner", "all actions and statements")
    for mem in wang_activities["memories"]:
        print(f"    Memory: {mem}")

    # Panorama: full overview
    print("\n  Panorama Search: all memories + relations")
    panorama = sim.panorama_search()
    print(f"    Total memories: {len(panorama['memories'])}")
    print(f"    Total relations: {len(panorama['relations'])}")
    for mem in panorama["memories"][:5]:
        print(f"    Memory: {mem}")
    if len(panorama["memories"]) > 5:
        print(f"    ... and {len(panorama['memories']) - 5} more")
    for rel in panorama["relations"][:5]:
        src = rel.get("source", "?")
        edge = rel.get("relationship", "?")
        dst = rel.get("destination", rel.get("target", "?"))
        print(f"    Relation: {src} --[{edge}]--> {dst}")

    # ==================================================================
    # STAGE 5: Deep Interaction — Post-simulation queries
    # ==================================================================
    print("\n" + "=" * 60)
    print("STAGE 5: Deep Interaction")
    print("=" * 60)

    queries = [
        "How did the debate evolve across the three rounds?",
        "What evidence was cited by each side?",
        "Who supports and who opposes the reform?",
    ]

    for query in queries:
        print(f"\n  Query: '{query}'")
        results = sim.quick_search(query, limit=3)
        for mem in results["memories"][:2]:
            print(f"    Memory: {mem}")
        for rel in results["relations"][:2]:
            src = rel.get("source", rel.get("source_node", "?"))
            edge = rel.get("relationship", rel.get("relation", "?"))
            dst = rel.get("destination", rel.get("destination_node", "?"))
            print(f"    Relation: {src} --[{edge}]--> {dst}")

    # ==================================================================
    # Summary
    # ==================================================================
    print("\n" + "=" * 60)
    print("SIMULATION COMPLETE")
    print("=" * 60)
    print(f"  Project ID:        {project_id}")
    print(f"  Documents ingested: {sim.stats['documents_ingested']}")
    print(f"  Activities tracked: {sim.stats['activities_recorded']}")
    print(f"  Rounds completed:  {sim.stats['rounds_completed']}")
    print(f"  Total memories:    {len(panorama['memories'])}")
    print(f"  Total relations:   {len(panorama['relations'])}")

    # Cleanup (uncomment to delete all memories + graph data)
    # sim.cleanup()


if __name__ == "__main__":
    print("MiroFish Swarm Prediction Simulation powered by Mem0 Graph Memory\n")
    main()

How It Works

Graph Memory: The Right Fit for MiroFish

MiroFish’s entire pipeline revolves around a knowledge graph — it extracts entities from documents, builds relationships, and queries the graph throughout simulation and reporting. Mem0’s Graph Memory provides the same capabilities:
MiroFish needsZep CloudMem0 Graph Memory
Entity extractionBuilt-in via Zep APIAutomatic via LLM extraction
Relationship miningGraph edges(source) --[relationship]--> (destination) triples
Semantic + keyword searchSemantic + BM25Vector similarity + graph relation retrieval
Graph traversalNode/edge queriesrelations array in search results
Per-agent isolationSingle shared graph in MiroFishNative run_id scoping
Self-hostingNo (cloud only)Yes — Neo4j, Memgraph, Kuzu, Neptune
Node/memory limitsCapped on free tierUnlimited (self-hosted)

How search() Returns Both Memories and Relations

When Graph Memory is enabled, every search() call returns two arrays:
results = memory.search("housing reform", user_id="my_sim")

# Vector-matched memories (ordered by similarity)
results["results"]   # [{"memory": "...", "score": 0.85, ...}, ...]

# Graph relations connected to query entities
results["relations"] # [{"source": "mayor_chen", "relationship": "supports", "destination": "zoning_reform"}, ...]
This is what makes Mem0 Graph Memory a natural replacement for Zep — you get semantic search AND structured graph data in a single call.

Per-Agent Memory Isolation

user_id scopes the simulation project. run_id tags individual agent actions at storage time (we use run_id instead of agent_id since no assistant memories are involved). Searches use user_id for project-wide retrieval:
# Store project-level memories (seed documents)
memory.add(
    [{"role": "user", "content": "Mayor Chen supports the zoning reform."}],
    user_id="my_sim"
)

# Store agent-specific memories (simulation actions)
memory.add(
    [{"role": "user", "content": "Mayor Chen [CREATE_POST]: Reform works!"}],
    run_id="mayor_chen"
)

# Search project-level memories (seed docs)
memory.search("housing reform", user_id="my_sim")

# Search agent-specific memories (actions stored with run_id)
memory.search("housing reform", run_id="mayor_chen")

# Get all project-level memories + graph relations
memory.get_all(user_id="my_sim")
Use user_id for project-level data (seed documents) and run_id for agent actions — both for add() and search(). Always match the scope: if you add() with run_id, search() with run_id. Use the message list format [{"role": "user", "content": "..."}] for all add() calls — it works on both OSS and Cloud.

Stage Mapping

MiroFish StageWhat HappensMem0 Graph Memory Call
1. Graph BuildingIngest docs, extract entitiesmemory.add(doc, user_id=project) — entities/relations extracted automatically
2. Environment SetupEnrich agent personas from graphmemory.search(query, user_id=project) — returns facts + relations
3. SimulationTrack per-agent actionsmemory.add(messages, run_id=agent)
3. SimulationMid-round recallmemory.search(query, run_id=agent)
4. Report GenerationTargeted analysismemory.search(query, user_id=project) — memories + graph
4. Report GenerationFull overviewmemory.get_all(user_id=project) — all memories + all relations
5. Deep InteractionFollow-up queriesmemory.search(query, user_id=project)

Zep-to-Mem0 Migration Reference

For developers replacing MiroFish’s Zep integration. Note that Mem0 Graph Memory covers the core graph operations but some Zep features have no direct equivalent — see caveats below.
MiroFish ServiceZep CallMem0 Graph Memory EquivalentCaveat
GraphBuilderServiceclient.graph.create()Implicit on first memory.add()
GraphBuilderServiceclient.graph.set_ontology()custom_prompt in graph_store configFreeform text, not a typed schema like Zep’s EntityModel/EdgeModel
GraphBuilderServiceclient.graph.add_batch(episodes)memory.add() per chunkNo batch API — call per chunk
GraphBuilderServiceclient.graph.episode.get(uuid)Not needed (add is synchronous in OSS)
GraphBuilderServiceclient.graph.delete(id)memory.delete_all(user_id=...)
ZepEntityReaderclient.graph.node.get_by_graph_id()memory.get_all(user_id=...)relations
ZepEntityReaderclient.graph.node.get(uuid)memory.search(entity_name, user_id=...)Semantic search, not exact ID lookup
ZepEntityReaderclient.graph.node.get_entity_edges()memory.search(entity_name, user_id=...)relationsReturns all matching relations, not edges for a specific node
ZepGraphMemoryUpdaterclient.graph.add(type="text")memory.add(messages, run_id=...)No batch buffering or retry — implement in your wrapper
ZepToolsServicesearch_graph(query, scope)memory.search(query, user_id=...) → memories + relations
ZepToolsServiceget_entities()memory.get_all(user_id=...)relations
ZepToolsServicePanorama (all nodes + edges)memory.get_all(user_id=...)No temporal fact separation (active vs historical)
ZepToolsServiceInsightForge (multi-query decomposition)Not availableImplement LLM-driven sub-query decomposition in your own ReportAgent
OasisProfileGeneratorclient.graph.search()memory.search(query, user_id=...)
What Mem0 Graph Memory does not cover: Zep’s typed ontology schemas (EntityModel, EdgeModel), temporal fact lifecycle (valid_at/invalid_at/expired_at), single-node-by-ID lookup, and InsightForge’s multi-query decomposition. For InsightForge-like functionality, implement sub-query logic in your own ReportAgent using memory.search() as the retrieval primitive.

Custom Extraction Prompts

Guide what entities and relationships Mem0 extracts — analogous to (but less structured than) Zep’s set_ontology():
config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {"url": "...", "username": "...", "password": "..."},
        "custom_prompt": (
            "Extract all people, organizations, policies, locations, "
            "and their relationships. Capture support/opposition stances, "
            "affiliations, and quantitative claims."
        ),
    }
}

Action Types

MiroFish’s OASIS engine produces these agent action types. Format them as natural language when storing. Skip DO_NOTHING actions (no memory value). TREND and REFRESH are Reddit-only discovery actions — store if you want to track browsing behavior.
Action TypePlatformExample Memory Content
CREATE_POSTBoth"Mayor Chen [CREATE_POST]: This reform will create 10,000 units"
CREATE_COMMENTReddit"Wang [CREATE_COMMENT]: Replied to Prof Li: 'Your data is misleading'"
LIKE_POSTBoth"Mayor Chen [LIKE_POST]: Liked Prof Li's post about Shenzhen data"
REPOSTTwitter"Prof Li [REPOST]: Reposted Mayor Chen's town hall announcement"
FOLLOWBoth"Wang [FOLLOW]: Followed @MayorChen"
QUOTE_POSTTwitter"Mayor Chen [QUOTE_POST]: 'Data confirms reform works' quoting Prof Li"
DISLIKE_POSTReddit"Wang [DISLIKE_POST]: Downvoted Mayor Chen's reform post"
TRENDReddit"Prof Li [TREND]: Browsed trending topics"
DO_NOTHINGBothSkip — no memory value

Running the Example

# Option A: Neo4j (production)
export OPENAI_API_KEY="sk-..."
export NEO4J_URL="neo4j://localhost:7687"
export NEO4J_USERNAME="neo4j"
export NEO4J_PASSWORD="password"
python mirofish_swarm_memory.py

# Option B: Kuzu (zero dependencies, just need OpenAI key)
export OPENAI_API_KEY="sk-..."
python mirofish_swarm_memory.py  # auto-detects missing NEO4J_URL, uses Kuzu
Exact output varies as Mem0 automatically extracts and deduplicates entities. The specific relations and memory counts depend on LLM extraction quality.

Best Practices

  1. Unique user_id per simulation — Use timestamps or UUIDs (e.g., mirofish_housing_1742198400) to prevent memory collisions between runs
  2. Always set run_id for agent actions — Per-agent isolation prevents memory cross-contamination between agents
  3. Use custom_prompt — Guide entity extraction to capture domain-specific relationships (people, policies, stances)
  4. Format actions as natural language"Mayor Chen [CREATE_POST]: content" extracts better entities than raw JSON
  5. Query relations for reports — The relations array in search results gives structured (source, relationship, destination) triples for building analytical reports
  6. Cleanup old simulations — Call delete_all(user_id=...) when a simulation run is no longer needed

Resources

Graph Memory

Full Graph Memory documentation with provider setup.

MiroFish GitHub

MiroFish source code and setup guide.