Persistent memory for any AI agent — coding, research, support, ops. Drop one MCP snippet. Every session after that — Cursor, Claude Code, Hermes, Windsurf — answers grounded in your decisions, context, and prior work.
Free to start · No credit card · Works with any MCP-compatible IDE
The Problem
You explained your architecture, your constraints, and your approach. Next chat, same AI, same platform: it starts asking from scratch.
You spent an hour weighing options and landed on a direction. Next session: your AI recommends the exact approach you already ruled out.
Your AI can't see your domain, your naming conventions, or the 40 decisions that shaped your project. Every response is generic.
You built something complex together yesterday. Today your AI responds like you've never met.
How It Works
Sign in, click ‘Add GitHub repository’, pick the repos you want indexed. No CLI to install. No config file. We handle webhooks, scanning, and re-indexing on every push automatically.
Copy the MCP snippet for your agent — Cursor, Claude Code, VS Code, Continue, Hermes, anything that speaks MCP. Paste it. That's the entire integration. No extension to install. No background daemon.
Every new chat, every new tab, every new agent session: your AI automatically pulls the right context — architecture, decisions, prior conversations — from your codebase. No more re-explaining what you built yesterday.
Model Context Protocol
Memory, context, code graph, scans, conversations, plans, scratchpad, briefings, degradation telemetry, multi-agent role contracts — 99 tools across 21 modules, all prefixed `remb__`. Pass `_meta.tool_budget` to get the top-K most relevant; the rest stay out of your prompt.
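As a sketch, a client could pass the budget via the standard MCP `_meta` field on a request. The placement on `tools/list` and the budget value here are illustrative assumptions, not the exact wire format:

```json
{
  "method": "tools/list",
  "params": {
    "_meta": { "tool_budget": 10 }
  }
}
```

With a budget of 10, only the ten tools most relevant to the current task would be exposed to the model; the other 89 never enter the prompt.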
Cursor, Claude Code, VS Code, Continue, Windsurf, Hermes, Codex, OpenCode, Aider — anything that speaks MCP. Coding agent, research agent, support agent — one HTTPS endpoint and a Bearer token. Done.
Run `remb serve` for local stdio transport. Same 99 tools, proxied from your AI client to the Remb API. Install once with Homebrew; it auto-injects your project slug from .remb.yml.
Your agent calls `session_start` at the beginning of every chat — loading core memories, project context, and conversation history automatically. Works with or without a scanned project. No nudges needed.
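Under the hood that is a single MCP `tools/call` at chat start. A minimal sketch, assuming the tool needs no arguments when a project slug is already configured (the exact argument schema may differ):

```json
{
  "method": "tools/call",
  "params": {
    "name": "remb__session_start",
    "arguments": {}
  }
}
```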
HTTP · Browser OAuth
{
"mcpServers": {
"remb": {
"type": "http",
"url": "https://www.useremb.com/api/mcp"
}
}
}
Local stdio (offline)
{
"mcpServers": {
"remb": {
"command": "remb",
"args": ["serve", "--project", "YOUR_PROJECT_SLUG"]
}
}
}
The promise
Most context tools index your code. Remb remembers your decisions, your patterns, and your prior conversations — and hands them to whichever agent you open next.
Features
Core memories load every session. Active memories surface on-demand based on what you're working on. Archive stores everything long-term. Any agent — coding, research, support, ops — always has the right context at the right time.
Scout, Analyze, Architect, Review, and Finalize — a multi-agent pipeline that maps features, code symbols, architecture layers, and dependency graphs from your entire repo. Optional: memory works without a scanned project.
Every session is logged and semantically indexed. Your AI starts each conversation knowing what was discussed, built, and decided before — zero context lost.
Search memories and patterns across all your projects. Tell your AI "do it like project X" and it pulls matching architecture, decisions, and implementations.
A queryable knowledge graph of every function, class, and component. Trace call chains, imports, and data flows. 8 relationship types with confidence scoring.
Chat with your codebase backed by memories, code symbols, and conversation history — all assembled automatically. Built-in AI with Anthropic, OpenAI, and Gemini.
Your personal AI brain. Save preferences, lessons learned, and research across all your projects so you never have to repeat yourself again.
OAuth PKCE authentication, credential files stored with chmod 600, scoped tokens per project, WebAuthn passkey support, and built-in 2FA.
Visual project explorer, interactive feature graph, memory manager, conversation browser, and an MCP hub for connecting external AI tools — all in one interface.
Offload large tool outputs (scans, audits, diffs) to a session-scoped scratchpad so they stay out of your prompt until needed. Save typed handoff briefings between sessions — focus, decisions, blockers, files — instead of replaying entire transcripts.
Memories that get retrieved often but lead to wrong outputs are quietly poisoning your context. Remb tracks success/rejection/undo per memory and surfaces quarantine candidates before they pollute the next session.
Reusable procedural memory the agent can search, load, and self-heal. Save how you do something once — "how we deploy the worker", "our Redis retry pattern" — and Remb auto-suggests it next session via semantic match. Versioned, patchable, project- or globally-scoped.
Typed planner → researcher → implementer → reviewer handoffs. Bad transitions or missing payload keys fail loudly here instead of silently downstream. Ships with a tool budget so 99 tools shrink to the top-K relevant for the current task.
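A sketch of what a saved handoff briefing might carry between roles. The tool name `remb__briefing_save` and the field names are hypothetical illustrations built from the typed fields named above (focus, decisions, blockers, files), not the exact schema:

```json
{
  "method": "tools/call",
  "params": {
    "name": "remb__briefing_save",
    "arguments": {
      "role": "implementer",
      "focus": "migrate the worker queue to Redis streams",
      "decisions": ["keep the existing queue API surface", "one consumer group per queue"],
      "blockers": ["staging Redis still on v6"],
      "files": ["src/queue/worker.ts", "src/queue/redis.ts"]
    }
  }
}
```

Because the payload is typed, a reviewer session that loads this briefing with a missing `decisions` key fails at the handoff, not three tool calls later.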
Get Started
Add to ~/.cursor/mcp.json
{
"mcpServers": {
"remb": {
"type": "http",
"url": "https://www.useremb.com/api/mcp"
}
}
}
One command, native MCP
claude mcp add remb https://www.useremb.com/api/mcp
Native MCP support (1.99+)
code --add-mcp '{"name":"remb","type":"http","url":"https://www.useremb.com/api/mcp"}'
Any MCP-compatible client
{
"mcpServers": {
"remb": {
"type": "http",
"url": "https://www.useremb.com/api/mcp"
}
}
}
Quick Start
Open useremb.com, sign in with GitHub, go to Settings → API Keys → Create. Memory works immediately — no project needed. Optionally connect a GitHub repo to add codebase scanning on top.
Settings → API Keys → Create. Scope it to one project or all of them. Revoke any time. Use API keys for unattended agents or CI — no browser flow needed.
Pick your IDE above, copy the snippet, paste, restart. From then on, every chat in that IDE auto-loads your project’s context.
{
"mcpServers": {
"remb": {
"type": "http",
"url": "https://www.useremb.com/api/mcp",
"headers": { "Authorization": "Bearer YOUR_REMB_API_KEY" }
}
}
}
Get Started
Remb is the persistent memory and context layer for any AI agent — coding, research, support, ops. Your decisions, conventions, prior work, and project knowledge survive across every conversation, in every tool — automatically.
99 MCP tools
21 modules — memory, context, scans, briefings, more
3-tier memory
Core, active, and archive layers
Vendor-neutral
Cursor, Claude Code, Hermes — any MCP agent
Cross-project
Search patterns across every project you connect
Conversation history
Every session logged, summarised, indexed
5-phase scanning
Optional deep codebase analysis pipeline