LangMem is LangChain's memory SDK — semantic, episodic, and procedural abstractions you wire into your LangGraph agent. HangarX Cortex is managed memory infrastructure: a knowledge graph, vector store, claim extractor, retrieval pipeline, and MCP server, all bundled.
Different layers of the stack. Pick based on whether you want to compose memory inside an agent or consume memory across many agents.
Pick Cortex if
You want managed memory that any agent in any language can call.
MCP-native serving, knowledge graph with cited claims, hybrid retrieval, and a Docker stack you can self-host. Framework-agnostic.
Pick LangMem if
You're building a LangGraph agent and want memory primitives that fit the framework.
Semantic, episodic, and procedural memory with first-class LangGraph integration. Storage is yours to choose.
Cortex and LangMem are both built on the conviction that long-term memory is the missing piece in today's AI agents. We agree on the fundamentals.
The dimensions that matter when choosing between a memory SDK inside your agent and a managed memory platform behind it.
| Dimension | HangarX Cortex | LangMem |
|---|---|---|
| Layer of the stack | Managed memory platform | SDK / library inside your agent code |
| Storage included | ✅ FalkorDB graph + Postgres pgvector ship preconfigured | ❌ Storage-agnostic: you bring your own backend (Postgres, Redis, vector DB, etc.) |
| Knowledge graph | ✅ | ❌ |
| Hybrid retrieval pipeline | ✅ BM25 + vector + graph + PPR + reranking as one tuned pipeline | ❌ Memory primitives only; retrieval is yours to build |
| Semantic memory | ✅ | ✅ |
| Episodic memory | — | ⚠️ Documented as a category, but the launch post acknowledged it doesn't yet have opinionated utilities for it |
| Procedural memory (evolving prompts) | ❌ Cortex doesn't compete here | ✅ Agent behavior evolved through prompt updates; genuinely novel and LangMem-specific |
| Claims with provenance (SPO triples) | ✅ | ❌ |
| Contradiction detection | ✅ | ❌ |
| MCP server (cross-tool memory) | ✅ Any MCP-compatible agent (Claude, Cursor, Cline, Windsurf) reads the same memory | ❌ Consumed by code inside a LangGraph agent |
| Native Obsidian plugin | ✅ | ❌ |
| Self-host | ✅ Docker stack | ✅ Python library |
| LangChain / LangGraph framework lock-in | None: framework-agnostic, callable from any language over REST or MCP | Works best inside LangGraph |
| Languages | TypeScript / REST / MCP | Python |
LangMem is a thoughtful library. The semantic/episodic/procedural taxonomy is the right mental model, and procedural memory — agents that evolve their own prompts from feedback — is one of the more interesting research directions in this space.
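To make the LangMem side concrete, here is a minimal sketch of its memory tools attached to a LangGraph agent, following the pattern in LangMem's quickstart; the model string, embedding config, and namespace are illustrative placeholders, not prescriptions.

```python
# Sketch: LangMem's memory tools wired into a LangGraph agent.
# Model id, embedding config, and namespace are illustrative; check LangMem's docs.
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore
from langmem import create_manage_memory_tool, create_search_memory_tool

# LangGraph's in-memory store; swap in your own backend for production use.
store = InMemoryStore(
    index={"dims": 1536, "embed": "openai:text-embedding-3-small"}
)

agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",
    tools=[
        # Lets the agent write and search long-term memories in the store.
        create_manage_memory_tool(namespace=("memories",)),
        create_search_memory_tool(namespace=("memories",)),
    ],
    store=store,
)

agent.invoke(
    {"messages": [{"role": "user", "content": "Remember that I prefer dark mode."}]}
)
```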
But a memory SDK only works if the agent calling it is the only consumer. Most engineering teams today are not building one agent — they're using five. Claude Desktop for chat, Claude Code in the terminal, Cursor in the IDE, Windsurf for project-wide refactors, maybe Cline or Zed in parallel. A LangGraph agent with a LangMem-backed memory is invisible to all of those tools.
Cortex sits behind an MCP server. The same knowledge graph — extracted from your corpus, with claims and provenance — is read by every MCP-compatible agent on a developer's machine. You ship the memory once; it shows up everywhere. That's the difference between a primitive and a platform.
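As a sketch of what cross-tool consumption looks like from code, here is a generic MCP client (using the official Python MCP SDK) talking to a self-hosted Cortex endpoint. The URL and the `search_memory` tool name are assumptions standing in for whatever tools your Cortex deployment actually exposes.

```python
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

# Hypothetical Cortex MCP endpoint; substitute your deployment's URL.
CORTEX_MCP_URL = "http://localhost:8765/sse"

async def main():
    async with sse_client(CORTEX_MCP_URL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover whatever memory tools the server exposes.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # Call a retrieval tool by name (tool name assumed for illustration).
            result = await session.call_tool(
                "search_memory", {"query": "decisions about the auth service"}
            )
            print(result.content)

asyncio.run(main())
```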
Not exactly — they're different layers. LangMem is a memory primitive: a Python SDK with semantic, episodic, and procedural abstractions you wire into your LangChain or LangGraph agent. Cortex is a memory platform: storage, graph, claim extraction, hybrid retrieval, and an MCP server bundled together. If you're building a custom agent in LangGraph and want a memory primitive that fits the framework, LangMem. If you want managed memory infrastructure that any agent in any language can call, Cortex.
In principle, yes. LangMem is storage-agnostic, so you can implement a backend adapter that points at Cortex's graph + vector stores. Some teams will want this hybrid: LangMem's semantic/procedural abstractions for agent state, Cortex for the corpus-grounded knowledge layer. Both APIs are open.
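A rough sketch of that hybrid, assuming a hypothetical Cortex REST route and response shape (neither is a documented API here): LangMem keeps agent-local memory, while a plain tool fetches cited claims from Cortex and hands them to the agent.

```python
# Sketch of the hybrid setup: LangMem handles agent-local state,
# corpus-grounded facts come from Cortex over REST.
# The Cortex base URL, route, and payload shape below are assumptions.
import httpx
from langchain_core.tools import tool

CORTEX_URL = "http://localhost:8080"  # hypothetical self-hosted Cortex endpoint

def search_cortex_claims(query: str, limit: int = 5) -> list[dict]:
    """Query Cortex's retrieval pipeline and return cited claims."""
    resp = httpx.post(
        f"{CORTEX_URL}/v1/retrieve",  # assumed route
        json={"query": query, "limit": limit},
        timeout=10.0,
    )
    resp.raise_for_status()
    return resp.json().get("claims", [])

@tool
def corpus_memory(query: str) -> str:
    """Look up grounded, cited facts from the team's Cortex knowledge graph."""
    claims = search_cortex_claims(query)
    return "\n".join(
        f"- {c.get('text', '')} (source: {c.get('source', '?')})" for c in claims
    )
```

Register `corpus_memory` alongside LangMem's manage/search tools and the agent gets both layers: its own evolving state plus the shared, cited knowledge graph.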
LangGraph ships with a long-term memory layer that LangMem builds on. It's the right choice if your agent lives entirely inside LangGraph and you want first-class framework integration. The tradeoff: framework-coupled memory doesn't expose itself to other AI tools. Cortex's MCP server lets the same memory be consumed by Claude Desktop, Cursor, Cline, Windsurf, and any future MCP client without writing per-tool integrations.
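For reference, the LangGraph layer in question is a namespaced key/value store with semantic search; a minimal sketch follows, with the embedding config and namespace purely illustrative.

```python
# Minimal sketch of LangGraph's built-in long-term memory store
# (the layer LangMem builds on). Embedding model and namespace are illustrative.
from langgraph.store.memory import InMemoryStore

store = InMemoryStore(index={"dims": 1536, "embed": "openai:text-embedding-3-small"})

# Memories are namespaced key/value documents...
store.put(("users", "alice"), "pref-1", {"text": "Prefers concise answers with code examples."})

# ...retrieved by semantic search within a namespace.
for item in store.search(("users", "alice"), query="how does she like responses formatted?"):
    print(item.key, item.value["text"])
```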
It matters when your team uses more than one AI tool. A coding team typically runs Cursor, Claude Code, and Windsurf side-by-side. If your memory lives behind a LangChain agent, those other tools can't see it. If your memory lives behind Cortex's MCP server, every tool reads from the same store. The cost of MCP-native serving is paid once; the value compounds with every new tool.
Yes, and it's worth being honest about. LangMem's procedural memory — the agent learning from feedback by evolving its own prompts — is a genuinely novel research direction and the strongest reason to pick LangMem if that workload matches your needs. Cortex doesn't compete on procedural memory; we focus on grounded, cited semantic memory over corpora. Different problems.
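For readers evaluating that workload, here is a sketch of prompt evolution with LangMem's prompt optimizer, based on its documented `create_prompt_optimizer` helper; the model id, feedback shape, and trajectory contents are illustrative and may not match the current API exactly.

```python
# Sketch of LangMem's procedural-memory direction: evolving a system prompt from feedback.
# Argument shapes are approximate; consult LangMem's docs for the current signature.
from langmem import create_prompt_optimizer

optimizer = create_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest",  # illustrative model id
    kind="metaprompt",
)

prompt = "You are a helpful release-notes assistant."

# A trajectory pairs a conversation with feedback about how it went.
trajectories = [
    (
        [
            {"role": "user", "content": "Summarize this changelog."},
            {"role": "assistant", "content": "Here is a 12-paragraph summary..."},
        ],
        {"feedback": "Too long. Bullets only, max five."},
    )
]

updated_prompt = optimizer.invoke({"trajectories": trajectories, "prompt": prompt})
print(updated_prompt)  # the evolved instructions the agent uses next time
```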
If you're evaluating this against Cortex, you're probably also weighing these.
Skip the framework-coupled SDK. Cortex's MCP server lets every AI tool on your machine read the same grounded, cited memory — across Claude, Cursor, Cline, Windsurf, and beyond.