Open Source · Self-Hosted · MCP Native

Persistent memory for
every AI tool you use

A unified semantic memory backend that works across Copilot, Claude, ChatGPT, Cursor, Gemini, and more. Search by meaning, not keywords. Your data stays yours.

Original architecture by Nate B Jones · Extended & self-hosted by Scott Nichols

7
MCP Tools
9+
AI Clients
4
Deploy Options
$0
Self-Hosted Cost

Everything your AI tools need to remember

Open Brain captures decisions, preferences, project context, and people — then surfaces them exactly when your AI needs them.

Semantic Search

Search by meaning, not keywords. pgvector + HNSW indexing finds related thoughts even when the exact words don't match.

Auto Metadata

Every thought is automatically classified: type, topics, people mentioned, action items, dates. No manual tagging needed.

MCP Native

Built on the Model Context Protocol standard. Works with any MCP-compatible client — Copilot, Claude, ChatGPT, Cursor, Gemini.

Private & Self-Hosted

Run on your own hardware with Docker, K8s, or your homelab. Your memories never leave your infrastructure.

PostgreSQL + pgvector

Battle-tested Postgres with pgvector for embeddings. Single table, rich JSONB metadata, HNSW indexing for fast retrieval.
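A minimal sketch of that single-table layout, assuming pgvector's standard DDL (the column names and embedding dimension are illustrative, not the project's actual schema):

```sql
-- Illustrative schema: one thoughts table, JSONB metadata, HNSW + GIN indexes
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE thoughts (
    id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    content    text NOT NULL,
    embedding  vector(768),          -- dimension depends on your embedding model
    metadata   jsonb NOT NULL DEFAULT '{}',
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE INDEX ON thoughts USING hnsw (embedding vector_cosine_ops);
CREATE INDEX ON thoughts USING gin (metadata);
```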

5-Minute Quickstart

Docker Compose gets you running in minutes. Or choose Supabase cloud, Azure managed, or Kubernetes for production.

How it works

One architecture, every AI client. Thoughts flow in, get embedded and tagged, and are instantly searchable.

1

Capture

Your AI tool calls capture_thought via MCP. You can also capture via Slack webhook, REST API, or bulk import from Notion, Obsidian, and other tools.
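As an illustration, a REST capture request body might look like this (the field names are assumptions, not the documented API):

```json
{
  "content": "Decided to use pgvector over a dedicated vector DB for simpler ops.",
  "source": "rest-api"
}
```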

2

Embed & Tag

The thought is embedded into a vector (via OpenRouter or local Ollama) and an LLM extracts metadata — type, topics, people, action items — all in parallel, typically under 3 seconds.
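The extracted metadata for a thought like the one above might look like this (a hypothetical example; the actual fields depend on what the extractor emits):

```json
{
  "type": "decision",
  "topics": ["databases", "infrastructure"],
  "people": [],
  "action_items": ["benchmark HNSW recall on real data"],
  "date": "2025-01-15"
}
```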

3

Store

Stored in PostgreSQL + pgvector. Single thoughts table with HNSW index for sub-millisecond vector search, GIN index for metadata filtering.
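A recall query against a layout like this might look like the following sketch, using pgvector's cosine-distance operator and a JSONB containment filter (table and column names are illustrative):

```sql
-- Nearest neighbours by cosine distance (<=> uses the HNSW index),
-- optionally filtered on JSONB metadata (@> uses the GIN index)
SELECT id, content, metadata
FROM thoughts
WHERE metadata @> '{"type": "decision"}'
ORDER BY embedding <=> $1   -- $1 = query embedding from the same model
LIMIT 5;
```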

4

Recall

Any MCP client calls search_thoughts to find memories by meaning. Your decision from three weeks ago? Found instantly, even if the words are different.

7 MCP tools, one protocol

Every operation is available as an MCP tool and as a REST endpoint.

search_thoughts: Semantic vector search — find by meaning
capture_thought: Store a thought with auto-embedding
capture_thoughts: Batch-capture multiple thoughts at once
list_thoughts: Filter by type, topic, person, or date
update_thought: Edit content and re-embed
delete_thought: Remove a thought by ID
thought_stats: Aggregate stats, top topics, top people
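For illustration, invoking search_thoughts over MCP uses the protocol's standard tools/call method; the argument shape shown here is an assumption about this server's tool signature:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_thoughts",
    "arguments": {
      "query": "what did we decide about the database?",
      "limit": 5
    }
  }
}
```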

Deploy your way

From a 5-minute Docker quickstart to a full Kubernetes homelab setup.

🐳

Docker Compose

Up and running in 5 minutes. PostgreSQL + pgvector + Ollama + API in one command.

docker compose up -d

Supabase Cloud

Managed PostgreSQL + Edge Functions. Roughly $0.10–$0.30/month on top of the free tier.

Serverless
☸️

Kubernetes

Full homelab deployment with Tailscale, MetalLB, Ollama GPU, and monitoring.

$0/month add-on
☁️

Azure Managed

Azure Container Apps + Azure Database for PostgreSQL. Enterprise-ready.

Managed

Works with the tools you already use

One memory backend, every AI client. Open Brain speaks MCP so your tools don't need to.

VS Code Copilot
Claude Desktop
Claude Code
Cursor
ChatGPT
Gemini
Grok
Windsurf
Any MCP Client

Quick start

One setup script or one prompt — your AI does the rest.

⚡ EASY BUTTON Paste one prompt — your AI installs everything automatically

Open Copilot Chat (Agent Mode), Claude Code, or Cursor in your project — paste this prompt. The AI checks prerequisites, asks a few questions, generates your .env, starts Docker Compose, configures your client, and verifies everything works.

Paste into any AI chat — Copilot, Claude, Cursor
I want to set up Open Brain — a persistent semantic memory system for AI tools.
Clone https://github.com/srnichols/OpenBrain.git (or use the existing repo if already cloned).

Follow these steps exactly:

1. Check prerequisites: Verify Docker, Docker Compose, and optionally Ollama are installed.

2. Ask me these questions (wait for answers before proceeding):
   - Which embedding provider? (ollama = free/local, openrouter = cloud/paid, azure-openai = Azure)
   - If openrouter or azure-openai: What are the API credentials?
   - Which AI client should I configure? (VS Code Copilot / Claude Desktop / Claude Code / Skip)

3. Generate .env from .env.example with a secure random MCP_ACCESS_KEY and DB_PASSWORD.
   Set DB_HOST=postgres, OLLAMA_ENDPOINT=http://host.docker.internal:11434 if using Ollama.

4. Run: docker compose up -d --build

5. Wait for http://localhost:8000/health and http://localhost:8080/health to return healthy.

6. Configure my chosen AI client with the MCP server URL and key.

7. Verify by calling the thought_stats MCP tool.

8. Show me a summary of what was installed and next steps.

Works with GitHub Copilot (Agent Mode), Claude Code, Claude Desktop, Cursor, or any AI tool with terminal access.
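As one example, a VS Code MCP configuration (.vscode/mcp.json) pointing at a locally running server might look like this sketch (the port, path, and auth header are assumptions; use the values from your generated .env):

```json
{
  "servers": {
    "open-brain": {
      "type": "http",
      "url": "http://localhost:8080/mcp",
      "headers": {
        "Authorization": "Bearer <MCP_ACCESS_KEY>"
      }
    }
  }
}
```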

or run the setup script
# Clone and run the setup wizard
$ git clone https://github.com/srnichols/OpenBrain.git
$ cd OpenBrain
# PowerShell (Windows)
$ .\setup.ps1
# Bash (macOS / Linux)
$ chmod +x setup.sh && ./setup.sh
# The wizard will:
✓ Check Docker, Ollama, Node.js
✓ Ask which embedder (ollama / openrouter / azure-openai)
✓ Generate .env with secure random keys
✓ Start Docker Compose and wait for health
✓ Configure your AI client (Copilot / Claude / Claude Code)
or do it manually
# Manual setup
$ git clone https://github.com/srnichols/OpenBrain.git
$ cd OpenBrain
$ cp .env.example .env # edit with your keys
$ docker compose up -d
# Verify
$ curl http://localhost:8000/health
{"status":"healthy","service":"open-brain-api"}