Letta (formerly MemGPT) builds stateful agents whose self-managing memory subagents let each agent evolve with its user. HangarX Cortex builds a shared knowledge graph from your corpus and serves it to every agent over MCP.
Different memory shapes for different workloads. Often complementary.
Pick Cortex if
You want shared memory many agents can query.
Knowledge graph from your corpus, cited claims, MCP-native serving — every AI tool on your machine reads the same store.
Pick Letta if
You want long-lived agents that evolve with each user.
Self-managing memory subagents, portable agent state across LLMs, persistent persona — built for assistants that learn from their own experience.
| Dimension | HangarX Cortex | Letta |
|---|---|---|
| Memory shape | Shared knowledge graph many agents read | Per-agent stateful memory each agent manages |
| Memory source | Existing corpus (documents, notes, code) | Agent's own experience + injected context |
| Self-managing memory subagents | ✗ | ✓ |
| Knowledge graph storage | ✓ | ✗ |
| Claims with provenance (SPO triples) | ✓ | ✗ |
| Multi-hop graph reasoning | ✓ | ✗ |
| Hybrid retrieval (BM25 + vector + graph) | ✓ | ✗ |
| MCP server (cross-tool memory) | ✓ | ✗ |
| Native Obsidian plugin | ✓ | ✗ |
| Open source | ✓ | ✓ |
| Self-host | ✓ | ✓ |
| Portable memory across LLM providers | ✓ | ✓ |

Self-managing memory subagents are Letta's distinctive capability: background subagents that update prompts, context, and skills over time. Cortex doesn't compete on this; we focus on retrieval over a shared knowledge graph.

On the MCP row: Cortex is MCP-native, so Claude, Cursor, Cline, and Windsurf all read the same memory. Letta exposes agents via API/SDK rather than as MCP servers by default.
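The hybrid-retrieval row fuses three signals: lexical (BM25), vector similarity, and graph structure. The source doesn't say how Cortex combines them, but reciprocal rank fusion is one common, tuning-free way to merge ranked lists; a minimal sketch:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked result lists into one ordering.
    Each item's fused score is the sum of 1/(k + rank) across lists;
    k=60 is the conventional RRF constant."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical result lists from each retriever.
bm25   = ["doc_a", "doc_b", "doc_c"]
vector = ["doc_b", "doc_a", "doc_d"]
graph  = ["doc_b", "doc_c", "doc_a"]
fused = reciprocal_rank_fusion([bm25, vector, graph])
# doc_b ranks near the top of all three lists, so it comes first.
```

RRF needs no score normalization across retrievers, which is why it is a popular default for fusing heterogeneous signals like keyword, embedding, and graph scores.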
Letta's research lineage traces to MemGPT, from UC Berkeley's Sky Lab, one of the foundational contributions to AI memory. The stateful-agent paradigm with self-managing subagents is the right primitive when you're building one persistent assistant per user.
Cortex assumes a different shape: many agents, one knowledge base. A developer using Claude Desktop, Cursor, Cline, and Windsurf doesn't want four separate stateful agents — they want all four reading from one cited, queryable knowledge graph grounded in their actual documents. That's the wedge.
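The "cited, queryable" part comes down to a concrete data shape: each fact is stored as a subject-predicate-object triple that points back at where it came from. A minimal sketch (field names are assumptions for illustration, not Cortex's actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """A knowledge-graph claim as a subject-predicate-object triple,
    carrying the source it was extracted from so answers can cite it."""
    subject: str
    predicate: str
    obj: str
    source: str            # e.g. a document path in the corpus
    span: tuple            # (start, end) character offsets in the source

c = Claim("MemGPT", "originated_at", "UC Berkeley Sky Lab",
          source="notes/letta.md", span=(120, 168))
```

Storing the `source` and `span` alongside every triple is what makes answers checkable: any agent reading the graph can quote the exact passage a claim came from.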
The two systems compose well. A Letta agent can call Cortex's MCP tools as part of its reasoning loop; the agent gets its persistent identity from Letta and its corpus-grounded answers from Cortex.
**Is Letta a competitor to Cortex?** No; they're complementary, not competitive. Letta is about giving each agent a persistent, self-managing identity that learns from its own experience. Cortex is about giving many agents shared, queryable knowledge grounded in your corpus. A team could run a Letta agent that uses Cortex as one of its retrieval tools.
**Can a Letta agent use Cortex?** Yes. Cortex exposes standard MCP tools (cortex_ask, cortex_paths, cortex_recall, etc.) that any agent runtime can consume, Letta included. The Letta agent gets its persistent identity and self-managing memory from Letta's runtime; the corpus-grounded answers come from Cortex.
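Under MCP, a tool invocation like the ones above travels as a JSON-RPC 2.0 `tools/call` request. A minimal sketch of the wire payload; the tool name comes from the list above, but the argument schema for `cortex_ask` is an assumption:

```python
import json

def cortex_tool_call(tool: str, arguments: dict, call_id: int = 1) -> str:
    """Build the JSON-RPC 2.0 payload an MCP client sends to invoke a tool.
    The argument schema here is illustrative, not Cortex's documented one."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

payload = cortex_tool_call(
    "cortex_ask",
    {"question": "What does the corpus say about deployment?"},
)
```

Because this is the same request shape every MCP client emits, any runtime that can speak MCP, Letta's included, can call Cortex without a bespoke integration.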
**Why doesn't Cortex build self-managing memory subagents?** Different design priority. Letta's subagents continuously refine an agent's prompts and context; that's the right primitive when each agent is a long-lived assistant evolving with one user. Cortex prioritizes shared memory grounded in source material with cited provenance, which a different class of workload (research, coding, internal search) needs more than self-evolution.
**Is Cortex open source?** Yes. The Cortex API stack and the Obsidian plugin are open source. Cloud mode runs the same core with managed infrastructure on top. Letta is also open source (letta-ai/letta-code).
Point Cortex at your corpus. Every MCP-compatible agent on your machine starts answering with cited provenance.