Mem0 distills memory from your agent's conversations. HangarX Cortex grounds memory in the documents and notes you've already written, and exposes them as a queryable knowledge graph with cited provenance.
They're both excellent at what they do. They're solving different problems.
**Pick Cortex if** your agent's memory should come from documents you already have: knowledge bases, technical docs, Obsidian vaults, codebases, meeting notes, support tickets, research corpora — anywhere structured truth lives in writing.
**Pick Mem0 if** your agent's memory should come from conversations with users: customer-support bots, healthcare assistants, e-commerce concierges, CRM agents — anywhere the agent learns about the user through dialogue.
Cortex and Mem0 share more than people realize. Both are serious infrastructure, well-engineered, and architecturally similar in important ways.
These are the dimensions that matter when you're choosing between corpus-grounded memory and conversation-distilled memory.
| Dimension | HangarX Cortex | Mem0 |
|---|---|---|
| Memory source | Existing corpus (documents, notes, code, URLs) | Conversation history (chat → distilled memories) |
| Architecture | FalkorDB graph + Postgres pgvector + claim store | Graph + vector (proprietary compression engine) |
| Provenance / citations | Links every retrieved claim back to its source span | Memories are condensed from conversations |
| Multi-hop reasoning over a knowledge graph | ✓ | — |
| Contradiction detection across claims | ✓ | — |
| MCP server | ✓ | ✓ |
| Self-host (Docker) | ✓ | ✓ |
| 100% local mode (no cloud) | ✓ | — |
| LLM provider breadth | 8+ (Gemini, OpenAI, Anthropic, xAI, Kimi, HF, OpenRouter, Ollama) | 20+ providers |
| Embedding provider breadth | Gemini, Ollama (more via local) | 12+ providers |
| Document/note ingestion pipeline | Built around ingesting your existing corpus | Supports add-from-text; optimized for conversation memory |
| Native Obsidian plugin | ✓ | — |
| Hybrid retrieval (BM25 + vector + graph) | ✓ | — |
| Temporal queries (asOf) | ✓ | — |
Most AI memory tools start from the same assumption: the agent will learn things over time by talking to the user, and the memory layer's job is to remember what was said. Mem0 is world-class at this. Their compression engine is a real research contribution.
Cortex starts from a different assumption: you already wrote down what matters. Your specs, your decisions, your team's playbooks, your codebase comments, your meeting notes, your design docs. The job of a memory layer should be to make that knowledge — the stuff already encoded in your corpus — available to every agent you use, with full citations back to the source.
Conversation-distilled memory is best for building knowledge about the user. Corpus-grounded memory is best for building knowledge from sources you can audit. Cortex is built for the second.
**Do Cortex and Mem0 do the same thing?**

Not exactly — and that's the point. Mem0 is optimized for distilling memory from agent conversations. Cortex is optimized for extracting structured memory from your existing documents and exposing it as a queryable knowledge graph. If your agent's memory needs to come from chat history, Mem0 may be the better fit. If it needs to come from your existing corpus — notes, docs, code, logs — Cortex is the better fit.
**Can I use Cortex and Mem0 together?**

Yes. Some teams use Mem0 for short-term conversational memory and Cortex for long-term corpus-grounded memory. Both expose MCP servers, so an agent can call into both as needed.
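For instance, an MCP-capable client can register both servers side by side. The sketch below follows the standard `mcpServers` config shape; the command names are hypothetical placeholders — check each project's docs for the real entry points.

```json
{
  "mcpServers": {
    "cortex": { "command": "cortex-mcp-server" },
    "mem0": { "command": "mem0-mcp-server" }
  }
}
```

With both registered, the agent can route conversational recall to one server and corpus lookups to the other.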
**Why does provenance matter so much?**

Because hallucination is the #1 reason agent memory fails in production. Cortex stores claims as SPO triples (subject, predicate, object) with a pointer back to the exact source span. Every retrieved fact can be traced to where it came from. This makes Cortex auditable in a way that compressed, distilled memory is not.
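Conceptually, a claim with provenance is just a triple plus a source pointer. The sketch below is illustrative — a minimal model of the idea, not Cortex's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """An SPO triple with a pointer back to its source span (illustrative)."""
    subject: str
    predicate: str
    obj: str
    source_doc: str        # document the claim was extracted from
    span: tuple[int, int]  # character offsets of the supporting text

doc = "Cortex stores claims as SPO triples with a pointer back to the source."
claim = Claim(
    subject="Cortex",
    predicate="stores claims as",
    obj="SPO triples",
    source_doc="architecture.md",
    span=(0, 35),
)

# Auditing a retrieved fact means re-reading the exact span it came from.
evidence = doc[claim.span[0]:claim.span[1]]
print(evidence)  # "Cortex stores claims as SPO triples"
```

The span makes every fact falsifiable: if the quoted evidence doesn't support the triple, the extraction is wrong and you can see exactly where.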
**Is Cortex open source?**

Yes. The Obsidian plugin and the Cortex API stack are open source. Cloud mode runs the same core with managed infrastructure on top. Mem0 also has an open-source variant (mem0ai/mem0) alongside their managed Platform.
**Which is faster?**

It depends on the workload. Mem0 is optimized for tight conversation-loop latency. Cortex is optimized for hybrid retrieval over large corpora — BM25 + vector + multi-hop graph + reranking. For pure short-term recall, Mem0 has fewer steps. For deep retrieval over a large knowledge base, Cortex's pipeline produces higher-quality grounded answers.
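To illustrate how a hybrid pipeline can merge ranked lists from BM25, vector, and graph retrievers, here is a minimal reciprocal rank fusion (RRF) sketch. RRF is a generic fusion technique, not Cortex's actual implementation; the document IDs are made up.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked result lists into one, rewarding documents
    that rank highly in any retriever (k=60 is the common RRF default)."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from three retrievers over the same corpus.
bm25   = ["doc_a", "doc_b", "doc_c"]
vector = ["doc_b", "doc_a", "doc_d"]
graph  = ["doc_b", "doc_d", "doc_a"]

fused = reciprocal_rank_fusion([bm25, vector, graph])
print(fused)  # doc_b ranks first: it is near the top of all three lists
```

The design choice behind fusion of this kind is that no single retriever is reliable alone — lexical match, semantic similarity, and graph adjacency each surface different evidence, and agreement across them is a strong relevance signal.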
If you're evaluating Mem0 against Cortex, you're probably also weighing these alternatives.
**vs. Zep**
Both are temporal graphs. The source — corpus vs. conversation — is what differs.

**vs. Cognee**
The closest direct competitor, with real differences in retrieval depth, audience, and integrations.

**vs. LangChain (LangMem)**
LangMem is a memory primitive for LangChain. Cortex is end-to-end memory infrastructure.
The fastest way to see whether corpus-grounded memory is the right fit: point Cortex at your docs, notes, or vault, and watch your agents start citing them.