Honest, factual comparisons between HangarX Cortex and other memory layers for AI agents. We lead with shared ground, sharpen on the wedge, and tell you when to pick the other tool.
How Cortex stacks up against the most-evaluated memory layers across the dimensions buyers care about most. Click any column for a full comparison.
| Dimension | Cortex | Mem0 | Zep | Cognee | Letta | LangMem | LlamaIndex | Pinecone |
|---|---|---|---|---|---|---|---|---|
| Corpus-grounded memory (docs, notes, code) | ||||||||
| Knowledge graph | ||||||||
| Hybrid retrieval pipeline (BM25+vector+graph+rerank) | ||||||||
| Claims with provenance / citations | ||||||||
| MCP server (cross-tool memory) | ||||||||
| Self-host full stack | ||||||||
| 100% local (no cloud) | ||||||||
| Native Obsidian plugin | | | | | | | | |
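The matrix above lists a hybrid retrieval pipeline (BM25 + vector + graph + rerank). A common way to fuse ranked lists from heterogeneous retrievers is reciprocal rank fusion (RRF); the sketch below is purely illustrative and assumes nothing about Cortex's internals. The document IDs and the `k` constant are hypothetical.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse several best-first ranked lists into one.

    A document's fused score is the sum of 1 / (k + rank)
    over every list it appears in, so items ranked well by
    multiple retrievers rise to the top.
    """
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from three retrievers.
bm25 = ["doc_a", "doc_b", "doc_c"]
vector = ["doc_b", "doc_a", "doc_d"]
graph = ["doc_b", "doc_e"]

fused = reciprocal_rank_fusion([bm25, vector, graph])
# doc_b ranks first: it appears near the top of all three lists.
```

A reranker would then re-score the fused shortlist with a cross-encoder; RRF is just the cheap consensus step that decides which candidates are worth reranking.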
Tools most often shortlisted alongside Cortex when teams are choosing AI memory infrastructure.
Cortex vs. Mem0
Mem0 distills memory from conversations. Cortex grounds memory in your existing documents.
Cortex vs. LlamaIndex
LlamaIndex is a framework you wire up. Cortex is managed memory infrastructure.
Cortex vs. Pinecone
Pinecone stores embeddings. Cortex serves grounded answers.
Cortex vs. Zep
Both build temporal knowledge graphs. What differs is the source: corpus for Cortex, conversation for Zep.
Cortex vs. LangChain (LangMem)
LangMem is a memory primitive for LangChain. Cortex is end-to-end memory infrastructure.
Tools at different layers of the RAG / memory stack — graph databases, BaaS platforms, and specialized libraries that teams sometimes consider as DIY alternatives.
Cortex vs. Cognee
Closest direct competitor. Real differences in retrieval depth, audience, and integrations.
Cortex vs. Letta (MemGPT)
Letta gives each agent its own state. Cortex gives many agents shared knowledge.
Cortex vs. Graphiti
Graphiti is a temporal-graph library. Cortex is the memory platform around it.
Cortex vs. Neo4j
Neo4j is a graph database. Cortex is the memory platform built on a graph database.
Cortex vs. Supabase + pgvector
Supabase gives you primitives. Cortex is what you'd build on top.
Tools that come up in evaluations but solve different problems. Quick takes on each — without a full comparison page.
OpenAI is rolling out memory features inside the Assistants API. It's tightly coupled to OpenAI's models, opaque about storage, and offers no MCP server — meaning the memory only works with OpenAI agents. Cortex is provider-agnostic and exposes the same memory to every MCP-compatible agent on your machine.
ChatGPT's built-in memory is a consumer feature for personal use, not infrastructure for building agents. It can't be queried by other AI tools, can't ingest your documents at scale, and offers no provenance or auditability. Cortex is the developer-facing infrastructure equivalent.
Anthropic has shipped early memory primitives in the Claude API. They're useful for stateful conversations within Claude itself, but they don't extend to other AI tools and aren't built around a knowledge graph or corpus ingestion. Cortex serves the same Claude agent (via MCP) plus every other agent simultaneously.
Notion AI is great if your notes live in Notion and you only need answers inside Notion. It's not exposed to external agents and isn't designed as memory infrastructure. Cortex is the equivalent for Obsidian users — and unlike Notion AI, the memory is callable by Claude, Cursor, Cline, Windsurf, and any MCP client.
Tools in this group are Obsidian-specific RAG plugins that work inside Obsidian only. Cortex's Obsidian plugin works inside Obsidian too — but the same memory is also exposed over MCP to every other AI tool. The differentiator is cross-tool serving plus the deeper retrieval pipeline (knowledge graph, claims, contradictions).
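Cross-tool serving over MCP amounts to registering one memory server in each client's MCP configuration. A hypothetical entry in the `mcpServers` format used by MCP clients such as Claude Desktop; the `cortex-mcp` command and `--vault` flag are illustrative assumptions, not documented Cortex options:

```json
{
  "mcpServers": {
    "cortex": {
      "command": "cortex-mcp",
      "args": ["--vault", "~/Documents/notes"]
    }
  }
}
```

The same entry dropped into Cursor, Cline, or Windsurf points them all at one shared memory, which is the cross-tool claim in practice.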
Skip the build-it-yourself stack. Point Cortex at your corpus and your agents start citing sources — over MCP, across every AI tool you use.