Your Obsidian vault becomes shared memory for Claude, Cursor, and every MCP-compatible agent on your machine.
Built for Obsidian
Five core surfaces, one plugin. Switch between them to see what your vault feels like once it's connected.
Hybrid retrieval — BM25 + vector + multi-hop graph expansion — returns answers grounded in your notes. Click a citation to jump to the source. Suggested starters get you out of the blank-page state.
You've already written everything. Your standups, your design docs, your half-finished thoughts. The problem isn't capturing knowledge — it's making it usable by the agents you work with every day.
Claude Desktop forgot what you decided last week.
HangarX remembers.
Cursor doesn't know your team's conventions.
HangarX answers from your notes.
You repeat yourself across every new chat.
HangarX is the source of truth all of them read.
It's not another chat UI bolted onto Obsidian. It's the connective tissue that makes your existing AI tools dramatically more useful.
Cloud mode is the fastest way to try HangarX: install the plugin, switch the mode to Cloud, and sign in. Sign-in is OAuth — no key copy-paste.
That's it. Your vault is now searchable, your agents can read it, and you can ask questions from inside Obsidian.
.cortex/, .obsidian/, and your templates are excluded by default.

Local mode
Run everything on your machine — your notes never leave the laptop. The plugin writes docker-compose.cortex.yml next to your notes with your keys baked in (encrypted at rest). Then run `docker compose -f docker-compose.cortex.yml up -d`.

Bring your own keys (BYOK)
Add or rotate keys anytime in LLM provider keys. The plugin pushes the change to the running container's encrypted runtime config — no Compose re-save, no docker compose up. Switch the active chat model per-request from the LLM (runtime) panel.
Fully offline
Run Ollama locally and pick it as the chat provider. For embeddings, switch the Embedding provider to Ollama and run ollama pull nomic-embed-text — the entire stack then makes zero cloud API calls.
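As a concrete sketch, going fully offline is two pulls plus a settings change. The chat model tag below is an assumption (any Ollama chat model works); nomic-embed-text is the embedding model named above:

```sh
# Pull a chat model for Ollama (llama3.3 is just an example tag; pick any)
ollama pull llama3.3

# Pull the embedding model used when the Embedding provider is set to Ollama
ollama pull nomic-embed-text
```

Then select Ollama as both the chat provider and the Embedding provider in settings; from that point the stack makes no cloud calls.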
Requires Docker Desktop. No source code, no Node.js, no compile step — images are pulled from Docker Hub.
The Agents section in settings shows every supported harness as a one-click row. HangarX merges its MCP server entry into the client's config file non-destructively — your other MCP servers are preserved.
Claude Desktop
Anthropic's desktop chat app
Claude Code
Anthropic's CLI coding agent
Cursor
AI-first code editor
Cline
VS Code autonomous coding extension
Windsurf
Codeium's agentic IDE
Other MCP-compatible app
Zed, Goose, Codex CLI, or any custom client via JSON snippet
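For the JSON-snippet route, the entry follows the standard MCP client config shape. A minimal sketch, with the launch command left as a placeholder (copy the exact snippet from the Agents panel; the server name here is an assumption):

```json
{
  "mcpServers": {
    "hangarx": {
      "command": "<launch command from the Agents panel>",
      "args": []
    }
  }
}
```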
50+ tools across the categories below. Most agents only use 5–10 in practice — the Q&A, graph exploration, and memory sections cover almost every common workload.
| Tool | Description |
|---|---|
| cortex_unified_ask | Natural-language Q&A grounded in your vault, with citations |
| cortex_chat | Multi-turn agentic chat with the full tool loop |
| cortex_get_context | Hybrid retrieval bundle (entities + chunks + memories) for one query |
| cortex_unified_search | One-shot search across entities, documents, and memories |
| cortex_advanced_search | Hybrid search with date / tag / entity-type filters |
| cortex_search_entities | Find entities by name + optional type filter |
| cortex_list_entities | Paginated entity listing with type filters |
| cortex_get_entity | Fetch one entity's full record |
| cortex_get_neighbors | Expand 1–3 hops out from an entity |
| cortex_find_paths | Shortest path between two entities — multi-hop reasoning |
| cortex_explain_entity | Properties + neighbors + sources for one entity in a single call |
| cortex_get_provenance | Source documents an entity was extracted from — the citation tool |
| cortex_query_graph | Run a custom Cypher query (read-only) against the graph |
| cortex_get_schema | Introspect the live graph schema |
| cortex_get_communities | Auto-detected entity clusters / topics |
| cortex_predict_links | ML-suggested missing edges between entities |
| cortex_point_in_time | Temporal queries — graph state as of a given timestamp |
| cortex_search_documents | Semantic search across your notes |
| cortex_summarize_document | LLM summary of a single document |
| cortex_remember | Save a fact / preference / decision to persistent memory |
| cortex_recall | Retrieve memories relevant to a query |
| cortex_relate | Find memories semantically related to an entity or topic |
| cortex_feedback | Record agent feedback (helpful / not helpful) for future ranking |
| cortex_graph_stats | Totals + per-type breakdowns of entities and relationships |
| cortex_find_duplicates | Find likely duplicate entities by embedding similarity |
| cortex_diff_graph | Compare graph state between two timestamps |
| cortex_export_graph | Export the graph to JSON / GraphML / Cypher |
| cortex_file_persistence_status | Check sync state of files between vault and graph |
| cortex_ingest | Add a single text chunk + metadata to the graph |
| cortex_bulk_ingest | Batch ingest — efficient for large documents |
| cortex_create_document / cortex_delete_document | Document-level lifecycle |
| cortex_create_entity / cortex_update_entity / cortex_delete_entity | Entity-level lifecycle |
| cortex_create_relationship | Add a typed edge between two entities |
| cortex_merge_entities | Merge a source entity into a target |
| cortex_tag_entity | Lightweight metadata write |
| cortex_web_search | Search the public web |
| cortex_web_scrape | Fetch and extract content from a URL |
| cortex_list_workflows / cortex_run_workflow | Discover and trigger durable workflows |
| cortex_create_workflow / cortex_update_workflow / cortex_delete_workflow | Workflow lifecycle |
| cortex_list_custom_tools / cortex_run_custom_tool | User-defined API tools |
| cortex_subscribe / cortex_subscribe_poll | Subscribe to graph mutations |
| cortex_event_log_subscribe / cortex_event_log_poll / cortex_event_log_unsubscribe | Event-log subscription lifecycle |
| cortex_generate_image | Generate an image from a prompt |
| cortex_query_analytics | Run pre-computed KPI / analytics queries |
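To make cortex_query_graph concrete, here is the kind of read-only Cypher an agent might send. The property name is an assumption about your schema, which is exactly what cortex_get_schema is for:

```cypher
// Illustrative only: confirm labels and properties with cortex_get_schema first.
// "Who or what is directly connected to the Q3 review, and how?"
MATCH (e {name: "Q3 review"})-[r]-(n)
RETURN type(r) AS relationship, n.name AS neighbor
LIMIT 25
```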
You don't have to use external agents. The plugin ships with:
Ask your vault (⌘P)
Multi-hop chat over your knowledge graph with citations back to source notes. Suggested starters get you out of the blank-page state.
Show on graph
Push chat answers into Obsidian's built-in Graph view. Cited entities stay highlighted, everything else dims. Toggle the pin to auto-highlight every future answer.
Two-way sync modal
Push, pull, two-way, diff, force re-ingest — all in one modal. Mid-flight cancellation drains in-flight server workers cleanly.
Inline link suggestions
Ghost-text [[wikilink]] autocompletes driven by entity matches in your graph. Tab to accept, Esc to dismiss.
Knowledge graph stats
Totals + per-type breakdowns for entities and relationships. Surfaced both in Obsidian and to external agents via the cortex_graph_stats MCP tool.
Switch LLMs on the fly
Pick Gemini, Claude, GPT-4o, Kimi K2.5, Llama, Qwen, or Ollama per request — no container restart. BYOK keys live encrypted on the server and update the runtime config immediately.
Cloud mode
Notes are sent to the HangarX hosted API for entity extraction and embedding. They're stored in your scoped workspace and never used to train models. Revoke access anytime from your API keys settings.
Local mode
Nothing leaves your machine. The Docker stack runs FalkorDB (graph), Postgres + pgvector (embeddings), and the Cortex API. You bring an LLM key — or run Ollama to go fully offline.
What's excluded by default
.cortex/, .obsidian/, and templates/. Configure include/exclude lists in What to sync.
Attachments
Images, PDFs, and other binaries referenced by your notes are ingested by default. Toggle off in Sync attachments.
| Command | Description |
|---|---|
| HangarX: Ask your vault | Open the Q&A chat |
| HangarX: Sync | Open the multi-purpose sync modal |
| HangarX: Sync current note | Push only the active file |
| HangarX: Diff vault vs knowledge graph | Open the 4-bucket diff view |
| HangarX: Pull graph entities into vault | Materialize entities as markdown |
| HangarX: Force re-ingest entire vault | Re-sync everything (use after a server reset) |
| HangarX: Connect agents (Claude, Cursor)… | Jump to the Agents settings panel |
| HangarX: Knowledge graph stats | Show entity / relationship totals + breakdowns |
| HangarX: Ingest URL into knowledge graph | Scrape a URL and add it to the graph |
| HangarX: Show onboarding | Reopen the first-run walkthrough |
A deep tour of how the plugin works, every setting, and what each piece does. If you just want to ship, the quick start is enough.
HangarX turns your vault into a queryable knowledge graph + vector index, then exposes it over MCP (Model Context Protocol) so any compliant agent can read and write to it. Nothing magical — just three layers wired together:
1. The plugin watches your vault and syncs changed notes to the Cortex API.
2. The Cortex API extracts entities and relationships into the graph and embeds chunks into the vector index.
3. The MCP server (cortex-mcp) wraps the API and exposes its tools. Agents you connect call those tools to ask questions, recall facts, or trace ideas back to source notes.

Reads are sub-100ms after the index warms up. Writes from agents (cortex_remember) are stored as new entities with provenance so you can see which agent claimed what, when.
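On the wire, a tool call is ordinary MCP JSON-RPC. A sketch of what a connected client sends for a vault question (the argument key is an assumption; check the tool's input schema via tools/list):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "cortex_unified_ask",
    "arguments": { "question": "What did we decide about pricing in the Q3 review?" }
  }
}
```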
Three ways to install, in order of how most people do it.
Community Plugins (recommended)
Search for HangarX in Obsidian's Community plugins browser and install it from there.
From release URI
Click the install button anywhere on this page (or copy obsidian://show-plugin?id=hangarx-obsidian) and Obsidian opens the install dialog directly.
Manual / BRAT
Copy main.js, manifest.json, and styles.css from the latest GitHub release into .obsidian/plugins/hangarx-obsidian/, then enable the plugin.
Everything is in Settings → HangarX. Defaults are sensible; the fields you actually touch are below.
| Setting | What it does |
|---|---|
| Mode | Cloud (HangarX hosted) or Local (Docker). Cloud is OAuth-managed; Local is BYO infrastructure. |
| API URL | Auto-set per mode — https://cortex.hangarx.ai for Cloud, http://localhost:3400 for Local. Editable as an advanced override only (custom self-hosted endpoint). |
| API key | Auto-filled when you sign in via OAuth (Cloud). Generate or revoke from the dashboard. Click Test to verify. |
| Workspace | The graph namespace your vault writes to. One vault → one workspace by convention. |
| LLM provider keys (BYOK) | Local mode. Paste any combination of Gemini, OpenAI, Anthropic, Moonshot (Kimi), HuggingFace, OpenRouter, or xAI keys. Stored encrypted on the cortex-api and pushed to the running container immediately — no docker rebuild. |
| LLM (runtime) | Per-request chat provider + model. Pick Gemini, Claude, GPT-4o, Kimi K2.5, Llama 3.3, Qwen, or Ollama and click Apply — takes effect on the very next chat answer with no container restart. |
| Embedding provider | Local mode. Gemini (default) or Ollama (fully offline — run `ollama pull nomic-embed-text` first). Switching providers requires a re-ingest because vector dimensions differ. |
| What to sync | Include / exclude globs. Defaults exclude .obsidian/, .cortex/, and templates/. |
| Sync attachments | Pull in PDFs, images, and other binaries referenced by your notes. On by default; toggle off in low-bandwidth setups. |
| Sync on startup | Run a sync automatically when Obsidian opens. Off by default — trigger manually via the Sync modal or the command palette. |
| Auto-highlight on graph | When on, every chat answer auto-pushes its cited entities into Obsidian's Graph view filter (non-matching nodes dim, cited ones stay highlighted). Off by default; toggle from the pin button on any answer or here in settings. |
| Agents panel | One-click installer that merges HangarX into Claude Desktop / Claude Code / Cursor / Cline / Windsurf MCP configs without overwriting existing entries. |
Vault sync
Change-detection by content hash, not modified time. Edits to .cortex/, .obsidian/, and templates are skipped. Deletes are propagated, so removing a note also removes its entities (configurable).
First sync uploads everything; subsequent syncs are diff-based and finish in seconds for typical vaults.
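The content-hash idea fits in a few lines. A minimal TypeScript sketch of the concept (not the plugin's actual code; the hash algorithm is an assumption):

```ts
import { createHash } from "crypto";

// A note is re-uploaded only when its content hash differs from the hash
// recorded at the last sync, so touching a file without editing it is free.
function needsSync(content: string, lastSyncedHash?: string): boolean {
  const hash = createHash("sha256").update(content).digest("hex");
  return hash !== lastSyncedHash;
}
```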
Ask your vault (⌘P)
GraphRAG chat that runs hybrid retrieval (BM25 + vector + graph expansion via PageRank) and cites the source notes inline. Click a citation to jump straight to the note.
Multi-hop questions like “What did we decide about pricing in the Q3 review, and who pushed back?” traverse relationships rather than just matching keywords.
Show on graph
Each chat answer can push its cited entities into Obsidian's built-in Graph view filter. Non-matching nodes dim, cited ones stay highlighted. Toggle the pin to make every future answer auto-highlight without clicking again.
Two-way sync modal
One UI for Push, Pull, Two-way, Diff (4-bucket reconciliation: vault-only · drifted · graph-only · in-sync), and Force re-ingest. Mid-flight cancellation propagates to in-flight server workers via a job ID and drains cleanly within ~30 seconds.
Inline link suggestions
Ghost-text wikilink autocomplete driven by entity matches in your graph. As you type, HangarX detects when you're referencing a known entity and suggests a [[wikilink]]. Tab accepts, Esc dismisses.
Knowledge graph stats
Totals and per-type breakdowns for entities, relationships, and documents. Surfaced both in Obsidian's stats panel and to external agents via the cortex_graph_stats MCP tool, so a connected Claude or Cursor session can answer “how big is my graph?” in one call.
Switch LLMs on the fly
Pick Gemini, Claude, GPT-4o, Kimi K2.5, Llama 3.3, Qwen, or Ollama per request — no container restart. BYOK keys live encrypted in Postgres on the server and update the runtime config immediately when you save them in settings.
MCP tools for agents
Once an agent is connected, it gets the full tool catalog listed in Agents — including cortex_graph_stats for graph totals and cortex_find_paths for multi-hop reasoning. Writes (cortex_remember) record provenance so you can audit what each agent claimed.
Cloud / Local toggle
Per-mode connection state. Switching from Cloud to Local doesn't migrate data — re-ingest in the new mode to rebuild the graph there. Both modes can coexist; the active one is whichever is selected.
Provenance & contradictions
Every claim is tagged with the note it came from. The cortex_contradictions tool surfaces cases where two notes assert conflicting facts about the same entity — handy when reconciling old decisions.
When you run the local Docker stack, three containers come up. You don't have to think about them, but here's the shape:
cortex-api
Node service on port 3400. Handles ingest, extraction, retrieval, and the MCP endpoint.
falkordb
Cypher-compatible graph database for entities and relationships. Built on Redis, so it also serves as the in-stack cache for query results and working memory — no separate Redis container needed.
postgres + pgvector
Stores chunks, embeddings, episodes, and provenance. Used for hybrid search.
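The plugin generates the real docker-compose.cortex.yml for you, but its rough shape looks like this (image names and wiring below are illustrative assumptions, not the generated file):

```yaml
# Sketch only; the plugin writes the real docker-compose.cortex.yml into your vault.
services:
  cortex-api:
    image: hangarx/cortex-api          # assumed image name
    ports:
      - "3400:3400"                    # ingest, retrieval, and the MCP endpoint
    depends_on: [falkordb, postgres]
  falkordb:
    image: falkordb/falkordb           # graph store; doubles as the Redis-backed cache
  postgres:
    image: pgvector/pgvector:pg16      # chunks, embeddings, episodes, provenance
```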
No restart needed for key or model changes
No .env editing required, no docker compose up, no --force-recreate. The runtime config is encrypted in Postgres and the model router consults it on every request, so the change is effectively instant.

The local stack won't connect
Make sure docker compose ps shows the cortex-api container as healthy, and check docker compose logs cortex-api for startup errors. The most common cause is a missing LLM key — open Settings → HangarX → LLM provider keys, paste at least one key, then click Save Compose to vault and run docker compose -f docker-compose.cortex.yml up -d --force-recreate so the container picks them up. Anything that fails after startup is logged too; run docker logs cortex-api to find it.

Why cortex_graph_stats exists
Before cortex_graph_stats, agents like Claude Desktop had no way to answer “how big is my graph?” — the existing tools all required a query string and returned ranked matches, not totals. Now an agent can call cortex_graph_stats and get { totalEntities, totalRelationships, entityTypes, relationshipTypes } directly. Ships in plugin v0.0.5+; reconnect your agents to pick it up.

Install the plugin, sign in, and your vault becomes shared memory across every AI tool you already use.