Supabase gives you Postgres, pgvector, auth, storage, and edge functions — best-in-class primitives. HangarX Cortex is the memory platform you'd build on top: document ingestion, knowledge graph, hybrid retrieval, claims, MCP server, all wired up.
They sit at different layers. Many teams use both.
Pick Cortex if
You want grounded memory, not a database.
Document ingestion, knowledge graph, hybrid retrieval, claims with provenance, and MCP serving — bundled. Skip the integration work.
Pick Supabase if
You want a complete BaaS to build the rest yourself.
Postgres + pgvector + auth + storage + edge functions + real-time. Best-in-class primitives; bring your own retrieval pipeline.
| Dimension | HangarX Cortex | Supabase + pgvector |
|---|---|---|
| Layer of the stack | Memory platform | BaaS — Postgres + pgvector + auth + storage |
| Postgres + pgvector | Yes (uses pgvector internally too; the difference is what's wrapped around it) | Yes |
| Knowledge graph | Yes (FalkorDB) | No (you'd add Neo4j / FalkorDB / a graph extension yourself) |
| LLM extraction (entities, relationships, claims) | Yes | No |
| Document parsing + chunking | Yes | No |
| Hybrid retrieval pipeline (BM25 + vector + graph + reranker) | Yes | No |
| Claims with provenance (SPO triples) | Yes | No |
| Contradiction detection across claims | Yes | No |
| MCP server | Yes | No |
| Native Obsidian plugin | Yes | No |
| Auth + storage + edge functions + database | No (memory infrastructure only; pair it with Supabase if you also need auth and storage primitives) | Yes (full BaaS) |
| Self-host | Yes (Docker) | Yes |
| Time to grounded agent answer | Minutes (Docker up + ingest) | Days to weeks (you build extraction, retrieval, serving, MCP yourself) |
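Two rows in the table — claims with provenance and contradiction detection — are easiest to picture with a small sketch. This is not Cortex's actual data model, just a minimal illustration of what the terms mean: SPO (subject–predicate–object) triples that carry a source, plus a naive check that flags two claims sharing a subject and predicate but disagreeing on the object. The claim data is made up.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    """An SPO triple plus the document it came from (provenance)."""
    subject: str
    predicate: str
    obj: str
    source: str  # e.g. a document ID or URL

def find_contradictions(claims):
    """Naive check: same subject + predicate, different object."""
    seen = {}  # (subject, predicate) -> first claim observed
    conflicts = []
    for c in claims:
        key = (c.subject, c.predicate)
        if key in seen and seen[key].obj != c.obj:
            conflicts.append((seen[key], c))
        else:
            seen.setdefault(key, c)
    return conflicts

claims = [
    Claim("ACME", "headquartered_in", "Berlin", "handbook.pdf"),
    Claim("ACME", "headquartered_in", "Munich", "blog-2024.md"),
    Claim("ACME", "founded_in", "2019", "handbook.pdf"),
]
for a, b in find_contradictions(claims):
    # prints: ACME headquartered_in: 'Berlin' (handbook.pdf) vs 'Munich' (blog-2024.md)
    print(f"{a.subject} {a.predicate}: '{a.obj}' ({a.source}) vs '{b.obj}' ({b.source})")
```

Because every claim keeps its source, a contradiction surfaces as two cited documents disagreeing — which is exactly what an agent needs in order to say "the handbook and the blog post conflict" instead of silently picking one.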
Supabase is the best general-purpose BaaS in the market. The Postgres + pgvector + auth + storage stack is the foundation for thousands of production apps. If you're building a CRUD app with embeddings, Supabase is the right answer.
But embeddings + Postgres is not the same as agent memory. To go from “I have a Supabase project” to “my agent answers questions over my corpus with citations,” you still need to build: document parsing, chunking strategy, LLM extraction of entities and relationships, a graph layer, hybrid retrieval, reranking, contradiction detection, claim provenance, and an MCP server. That's a quarter of work — at minimum. Cortex is what you'd end up with.
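To make "hybrid retrieval" concrete, here is one of the smaller pieces from that list: merging a BM25 ranking with a vector ranking. A common fusion step is reciprocal rank fusion (RRF), sketched below with toy ID lists standing in for real BM25 and pgvector results — the document IDs and the conventional k = 60 constant are illustrative, not anything Supabase or Cortex prescribes.

```python
def rrf(rankings, k=60):
    """Reciprocal rank fusion: score(doc) = sum over rankings of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Toy stand-ins for a BM25 result list and a vector-similarity result list.
bm25_hits = ["doc3", "doc1", "doc7"]
vector_hits = ["doc1", "doc9", "doc3"]

print(rrf([bm25_hits, vector_hits]))  # → ['doc1', 'doc3', 'doc9', 'doc7']
```

doc1 wins because it ranks high in both lists, even though neither list puts it unambiguously first — that cross-signal agreement is the point of hybrid retrieval, and it is one function out of the dozen components (parsing, chunking, extraction, graph, reranking, provenance, MCP) the paragraph above enumerates.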
Only if you want grounded agent memory and you don't want to build it yourself. Supabase + pgvector gives you world-class storage primitives — Postgres, vector indexing, auth, edge functions, real-time. Cortex is the memory platform you'd build on top: document ingestion, LLM extraction, knowledge graph, hybrid retrieval, claims with provenance, contradiction detection, and an MCP server. If you have ML/infra engineers and a long roadmap, building on Supabase gives you ceiling. If you want grounded memory shipping in days, Cortex is faster.
Cortex's vector layer is Postgres + pgvector — the same engine Supabase uses. So yes, in principle you can point Cortex at a Supabase Postgres instance instead of the one it runs itself. You'd lose some of the integrated Docker convenience but gain Supabase's managed infrastructure. The graph layer (FalkorDB) is separate; that one you'd still run via Cortex's Docker stack.
Many teams do — and many spend the next quarter rebuilding what Cortex already ships: document parsers, chunking strategies, embedding pipelines, retrieval logic, reranking, citation tracking, claim extraction, MCP serving. If your roadmap is bespoke retrieval that doesn't fit anyone's opinions, build it. If your roadmap is shipping agent memory and moving on to your actual product, Cortex is what you'd build anyway.
Not really — they're at different layers. Supabase competes with Firebase / AWS Amplify / general BaaS. Cortex competes with Mem0 / Cognee / Zep / other memory platforms. Many Cortex deployments use Supabase for auth and storage and Cortex for memory.
If you're evaluating this against Cortex, you're probably also weighing these.
Don't spend a quarter building extraction, retrieval, reranking, and MCP serving on pgvector. Cortex ships what you'd end up with.