A unified semantic memory backend that works across Copilot, Claude, ChatGPT, Cursor, Gemini, and more. Search by meaning, not keywords. Your data stays yours.
Original architecture by Nate B Jones · Extended & self-hosted by Scott Nichols
Open Brain captures decisions, preferences, project context, and people — then surfaces them exactly when your AI needs them.
Search by meaning, not keywords. pgvector + HNSW indexing finds related thoughts even when the exact words don't match.
Every thought is automatically classified: type, topics, people mentioned, action items, dates. No manual tagging needed.
Built on the Model Context Protocol standard. Works with any MCP-compatible client — Copilot, Claude, ChatGPT, Cursor, Gemini.
Run on your own hardware with Docker, K8s, or your homelab. Your memories never leave your infrastructure.
Battle-tested Postgres with pgvector for embeddings. Single table, rich JSONB metadata, HNSW indexing for fast retrieval.
Docker Compose gets you running in minutes. Or choose Supabase cloud, Azure managed, or Kubernetes for production.
One architecture, every AI client. Thoughts flow in, get embedded and tagged, and are instantly searchable.
Your AI tool calls capture_thought via MCP. You can also capture via Slack webhook, REST API, or bulk import from Notion, Obsidian, and other tools.
The thought is embedded as a vector (via OpenRouter or a local Ollama model) while, in parallel, an LLM extracts metadata — type, topics, people, action items — typically in under 3 seconds.
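In outline, the parallel embed-and-extract step might look like this — the function bodies are illustrative placeholders, not Open Brain's actual internals:

```python
import asyncio

async def embed(text: str) -> list[float]:
    # Placeholder: in the real pipeline this would call OpenRouter or local Ollama.
    return [0.1, 0.2, 0.3]

async def extract_metadata(text: str) -> dict:
    # Placeholder: in the real pipeline an LLM returns type, topics, people, action items.
    return {"type": "decision", "topics": ["infra"], "people": [], "action_items": []}

async def process_thought(text: str) -> dict:
    # Run embedding and metadata extraction concurrently, then combine the results.
    vector, meta = await asyncio.gather(embed(text), extract_metadata(text))
    return {"content": text, "embedding": vector, "metadata": meta}

result = asyncio.run(process_thought("Switched the homelab ingress to Tailscale."))
```

Because the two calls run concurrently via `asyncio.gather`, total latency is roughly the slower of the two model calls rather than their sum.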
Stored in PostgreSQL + pgvector. Single thoughts table with HNSW index for sub-millisecond vector search, GIN index for metadata filtering.
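A minimal version of that schema could look like the following sketch — column names and the embedding dimension are illustrative; check the repo's migrations for the real definition:

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE thoughts (
    id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    content    text NOT NULL,
    embedding  vector(768),                  -- dimension depends on the embedding model
    metadata   jsonb NOT NULL DEFAULT '{}',  -- type, topics, people, action items, dates
    created_at timestamptz NOT NULL DEFAULT now()
);

-- HNSW index for approximate nearest-neighbor search over embeddings
CREATE INDEX ON thoughts USING hnsw (embedding vector_cosine_ops);

-- GIN index for filtering on the JSONB metadata
CREATE INDEX ON thoughts USING gin (metadata);
```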
Any MCP client calls search_thoughts to find memories by meaning. Your decision from three weeks ago? Found instantly, even if the words are different.
Every operation is available as an MCP tool and as a REST endpoint.
search_thoughts
Semantic vector search — find by meaning
capture_thought
Store a thought with auto-embedding
capture_thoughts
Batch capture multiple thoughts at once
list_thoughts
Filter by type, topic, person, or date
update_thought
Edit content and re-embed
delete_thought
Remove a thought by ID
thought_stats
Aggregate stats, top topics, top people
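As a sketch of what calling the REST side might look like — the endpoint path, auth header, and payload shape here are assumptions for illustration, not documented API details:

```python
import json

def build_search_request(base_url: str, access_key: str,
                         query: str, limit: int = 5) -> dict:
    """Assemble a request spec for a hypothetical search_thoughts REST endpoint."""
    return {
        "method": "POST",
        "url": f"{base_url.rstrip('/')}/search_thoughts",  # assumed path
        "headers": {
            "Authorization": f"Bearer {access_key}",  # MCP_ACCESS_KEY from your .env
            "Content-Type": "application/json",
        },
        "body": json.dumps({"query": query, "limit": limit}),
    }

req = build_search_request("http://localhost:8000", "secret-key",
                           "what did we decide about ingress?")
```

Any HTTP client can then send `req` as-is; the same operation is reachable as the `search_thoughts` MCP tool.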
From a 5-minute Docker quickstart to a full Kubernetes homelab setup.
Up and running in 5 minutes. PostgreSQL + pgvector + Ollama + API in one command.
Managed PostgreSQL + Edge Functions. ~$0.10-$0.30/month on the free tier.
Full homelab deployment with Tailscale, MetalLB, Ollama GPU, and monitoring.
Azure Container Apps + Azure Database for PostgreSQL. Enterprise-ready.
One memory backend, every AI client. Open Brain speaks MCP so your tools don't need to.
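For clients that read an `mcpServers` config block (Claude Desktop is one example), the entry might look like this — the server name, URL, and header are placeholders to adapt to your own deployment, and some clients use a different config shape:

```json
{
  "mcpServers": {
    "open-brain": {
      "url": "http://localhost:8000/mcp",
      "headers": {
        "Authorization": "Bearer <your MCP_ACCESS_KEY>"
      }
    }
  }
}
```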
One setup script or one prompt — your AI does the rest.
Open Copilot Chat (Agent Mode), Claude Code, or Cursor in your project — paste this prompt. The AI checks prerequisites, asks a few questions, generates your .env, starts Docker Compose, configures your client, and verifies everything works.
I want to set up Open Brain — a persistent semantic memory system for AI tools.
Clone https://github.com/srnichols/OpenBrain.git (or use the existing repo if already cloned).
Follow these steps exactly:
1. Check prerequisites: Verify Docker, Docker Compose, and optionally Ollama are installed.
2. Ask me these questions (wait for answers before proceeding):
- Which embedding provider? (ollama = free/local, openrouter = cloud/paid, azure-openai = Azure)
- If openrouter or azure-openai: What are the API credentials?
- Which AI client should I configure? (VS Code Copilot / Claude Desktop / Claude Code / Skip)
3. Generate .env from .env.example with a secure random MCP_ACCESS_KEY and DB_PASSWORD.
Set DB_HOST=postgres, OLLAMA_ENDPOINT=http://host.docker.internal:11434 if using Ollama.
4. Run: docker compose up -d --build
5. Wait for http://localhost:8000/health and http://localhost:8080/health to return healthy.
6. Configure my chosen AI client with the MCP server URL and key.
7. Verify by calling the thought_stats MCP tool.
8. Show me a summary of what was installed and next steps.
Works with GitHub Copilot (Agent Mode), Claude Code, Claude Desktop, Cursor, or any AI tool with terminal access.
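For step 3 of the prompt above, the secure random values can be generated with Python's standard `secrets` module — the variable names match the prompt, and 32 bytes of entropy is just a reasonable default:

```python
import secrets

# URL-safe random tokens, suitable for pasting straight into .env
mcp_access_key = secrets.token_urlsafe(32)
db_password = secrets.token_urlsafe(32)

print(f"MCP_ACCESS_KEY={mcp_access_key}")
print(f"DB_PASSWORD={db_password}")
```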