Cortex vs. Supabase + pgvector

Building blocks. Or a finished platform.

Supabase gives you Postgres, pgvector, auth, storage, and edge functions — best-in-class primitives. HangarX Cortex is the memory platform you'd build on top: document ingestion, knowledge graph, hybrid retrieval, claims, MCP server, all wired up.

They sit at different layers. Many teams use both.

Pick Cortex if

You want grounded memory, not a database.

Document ingestion, knowledge graph, hybrid retrieval, claims with provenance, and MCP serving — bundled. Skip the integration work.

Pick Supabase if

You want a complete BaaS to build the rest yourself.

Postgres + pgvector + auth + storage + edge functions + real-time. Best-in-class primitives; bring your own retrieval pipeline.

What we agree on

  • Postgres + pgvector is the right vector storage engine for most workloads
  • Open source matters — both are OSS-first
  • Self-host should be a first-class option
  • Developer experience drives adoption — both invest heavily here
  • Hybrid retrieval (keyword + dense) beats pure vector
  • Build vs. buy is the only real question — both are credible answers
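The hybrid-retrieval point can be made concrete. Below is a minimal sketch of reciprocal rank fusion (RRF), one common way to merge a keyword ranking and a dense ranking into a single list; the document IDs are made up, and this is illustrative rather than either product's actual pipeline:

```python
def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: merge ranked lists of doc IDs.
    Each document scores sum(1 / (k + rank)) over every list it appears in,
    so items ranked well by both signals float to the top."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["doc_a", "doc_b", "doc_c"]   # keyword (BM25) ranking
dense = ["doc_c", "doc_a", "doc_d"]  # vector-similarity ranking
print(rrf([bm25, dense]))  # → ['doc_a', 'doc_c', 'doc_b', 'doc_d']
```

Note how `doc_a` wins: it appears near the top of both lists, which is exactly the behavior pure vector search can't give you.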

Where we differ

| Dimension | HangarX Cortex | Supabase + pgvector |
| --- | --- | --- |
| Layer of the stack | Memory platform | BaaS — Postgres + pgvector + auth + storage |
| Postgres + pgvector | Yes — uses pgvector internally too | Yes — the difference is what's wrapped around it |
| Knowledge graph | Built in | No graph layer — you'd add Neo4j / FalkorDB / a graph extension yourself |
| LLM extraction (entities, relationships, claims) | Built in | Build yourself |
| Document parsing + chunking | Built in | Build yourself |
| Hybrid retrieval pipeline (BM25 + vector + graph + reranker) | Built in | Build yourself |
| Claims with provenance (SPO triples) | Built in | Build yourself |
| Contradiction detection across claims | Built in | Build yourself |
| MCP server | Built in | Build yourself |
| Native Obsidian plugin | Yes | No |
| Auth + storage + edge functions + database | Memory infrastructure only — pair it with Supabase if you also need auth and storage primitives | Full BaaS |
| Self-host | First-class (Docker) | First-class |
| Time to grounded agent answer | Minutes (Docker up + ingest) | Days to weeks (you build extraction, retrieval, serving, MCP yourself) |
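To make "claims with provenance" concrete: a claim is a subject–predicate–object triple that carries a pointer back to its source. A minimal sketch follows; the field names and the contradiction rule are illustrative, not Cortex's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Claim:
    subject: str     # S
    predicate: str   # P
    obj: str         # O
    source_doc: str  # provenance: document the claim was extracted from
    span: tuple      # character offsets of the supporting passage

a = Claim("Acme", "headquartered_in", "Berlin", "docs/2021-report.md", (120, 158))
b = Claim("Acme", "headquartered_in", "Munich", "docs/2024-report.md", (40, 77))

def contradicts(x, y):
    """Naive contradiction rule: same subject and predicate, different object.
    A real system also needs predicate semantics — some predicates
    legitimately allow many objects."""
    return (x.subject, x.predicate) == (y.subject, y.predicate) and x.obj != y.obj

print(contradicts(a, b))  # → True
```

Because each claim keeps its `source_doc` and `span`, a detected contradiction can cite both passages instead of just flagging a mismatch.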

The wedge

Supabase is the best general-purpose BaaS in the market. The Postgres + pgvector + auth + storage stack is the foundation for thousands of production apps. If you're building a CRUD app with embeddings, Supabase is the right answer.

But embeddings + Postgres is not the same as agent memory. To go from “I have a Supabase project” to “my agent answers questions over my corpus with citations,” you still need to build: document parsing, chunking strategy, LLM extraction of entities and relationships, a graph layer, hybrid retrieval, reranking, contradiction detection, claim provenance, and an MCP server. That's a quarter of work — at minimum. Cortex is what you'd end up with.
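Even the "simple" items on that list hide decisions. A toy fixed-size chunker with overlap, just to show the knobs involved — chunk size, overlap, and boundary handling are all choices you'd have to tune, and this is illustrative rather than Cortex's actual strategy:

```python
def chunk(text, size=200, overlap=40):
    """Split text into overlapping fixed-size chunks.
    Overlap keeps context that straddles a boundary retrievable from
    both neighbors; real pipelines also respect sentence boundaries."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

A 500-character document yields three chunks here, each sharing 40 characters with its neighbor; change either knob and your retrieval quality shifts, which is why "chunking strategy" is a line item and not a one-liner.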

When you should pick Supabase

  • You're building a CRUD app with vector search and don't need a knowledge graph or MCP serving.
  • You need a complete BaaS with auth, storage, edge functions, and real-time — Cortex doesn't compete on those.
  • You have ML/infra bandwidth to build extraction, retrieval, reranking, and serving on top of pgvector.
  • You want fine-grained control over schema, queries, and indexing strategies.
  • You need the broadest ecosystem of Postgres-native tooling.

When you should pick Cortex

  • You want grounded agent memory shipping in days, not a quarter.
  • Your problem is question-answering over a corpus, not similarity search over rows.
  • You need a knowledge graph, claims with provenance, and contradiction detection — not just vectors.
  • You want MCP-native serving so Claude, Cursor, Cline, Windsurf all read the same memory.
  • You'd rather use Supabase for auth/storage and Cortex for memory than build memory on Supabase.

FAQ

I'm using Supabase already. Do I need Cortex?

Only if you want grounded agent memory and you don't want to build it yourself. Supabase + pgvector gives you world-class storage primitives — Postgres, vector indexing, auth, edge functions, real-time. Cortex is the memory platform you'd build on top: document ingestion, LLM extraction, knowledge graph, hybrid retrieval, claims with provenance, contradiction detection, and an MCP server. If you have ML/infra engineers and a long roadmap, building on Supabase gives you the higher ceiling. If you want grounded memory shipping in days, Cortex is faster.

Can I run Cortex on top of Supabase?

Cortex's vector layer is Postgres + pgvector — the same engine Supabase uses. So yes, in principle you can point Cortex at a Supabase Postgres instance instead of running its own. You'd lose some of the integrated Docker convenience but gain Supabase's managed infrastructure. The graph layer (FalkorDB) is separate; you'd still run that via Cortex's Docker stack.
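For illustration, Supabase's direct Postgres connection string has a standard shape; how Cortex would consume it is an assumption here — the `CORTEX_DATABASE_URL` name is hypothetical, not a documented setting:

```python
import os

# Supabase direct-connection DSN shape (project ref and password are placeholders):
project_ref = "abcdefghijklmnop"
password = os.environ.get("SUPABASE_DB_PASSWORD", "example-password")
dsn = f"postgresql://postgres:{password}@db.{project_ref}.supabase.co:5432/postgres"

# Hypothetical: however Cortex reads its vector-store DSN; the env var
# name below is an assumption for illustration, not a documented setting.
os.environ["CORTEX_DATABASE_URL"] = dsn
print(dsn)
```

Check Supabase's connection docs for your project's actual host and pooler options before wiring anything up this way.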

Why not just build my own RAG on Supabase?

Many teams do — and many spend the next quarter rebuilding what Cortex already ships: document parsers, chunking strategies, embedding pipelines, retrieval logic, reranking, citation tracking, claim extraction, MCP serving. If your roadmap is bespoke retrieval that doesn't fit anyone's opinions, build it. If your roadmap is shipping agent memory and moving on to your actual product, Cortex is what you'd build anyway.

Is Cortex a competitor to Supabase?

Not really — they're at different layers. Supabase competes with Firebase / AWS Amplify / general BaaS. Cortex competes with Mem0 / Cognee / Zep / other memory platforms. Many Cortex deployments use Supabase for auth and storage and Cortex for memory.

Skip the build. Get the platform.

Don't spend a quarter building extraction, retrieval, reranking, and MCP serving on pgvector. Cortex ships what you'd end up with.