Memory-first · MCP for every agent · 60-second setup

Your AI never
forgets.

Persistent memory for any AI agent — coding, research, support, ops. Drop one MCP snippet. Every session after that — Cursor, Claude Code, Hermes, Windsurf — answers grounded in your decisions, context, and prior work.

Get started free

Free to start · No credit card · Works with any MCP-compatible IDE

The Problem

Your AI has amnesia

Every new conversation starts from zero. Your AI doesn't remember what you built yesterday, the decisions you made last week, or the architecture patterns you've established. Hundreds of hours of shared context, gone.

Context amnesia

You explained your architecture, your constraints, and your approach. The next chat — same AI, same platform — asks for all of it from scratch.

Decisions lost between sessions

You spent an hour weighing options and landed on a direction. Next session: your AI recommends the exact approach you already ruled out.

No awareness of your work

Your AI can't see your domain, your naming conventions, or the 40 decisions that shaped your project. Every response is generic.

Broken continuity

You built something complex together yesterday. Today your AI responds like you've never met.

How It Works

Set up in three minutes

Sign in, get your API key, drop one MCP snippet into your agent. Every session after that, in any tool, your AI starts grounded in your context.
Step 01

Connect your repo

Sign in, click ‘Add GitHub repository’, pick the repos you want indexed. No CLI to install. No config file. We handle webhooks, scanning, and re-indexing on every push automatically.

Step 02

Drop one snippet into your AI

Copy the MCP snippet for your agent — Cursor, Claude Code, VS Code, Continue, Hermes, anything that speaks MCP. Paste it. That's the entire integration. No extension to install. No background daemon.

Step 03

Your AI just knows

Every new chat, every new tab, every new agent session: your AI automatically pulls the right context — architecture, decisions, prior conversations — from your codebase. No more re-explaining what you built yesterday.

Model Context Protocol

Works with every AI tool

MCP is the open protocol every modern AI agent speaks. Remb exposes 99 tools across 21 modules (memory, context, conversations, scans, briefings, role contracts, and more) over a single HTTPS endpoint. Cursor, Claude Code, VS Code, Hermes, anything MCP-compatible. One snippet. Done.
Claude Code · Claude Desktop · Cursor · VS Code Copilot · Windsurf · Codex CLI · OpenCode · Hermes · Aider · Zed · Neovim

99 tools, 21 modules

Memory, context, code graph, scans, conversations, plans, scratchpad, briefings, degradation telemetry, multi-agent role contracts — 99 tools across 21 modules, all prefixed remb__. Pass _meta.tool_budget to get the top-K most relevant; the rest stay out of your prompt.
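As a sketch of how that budget might look on the wire (MCP is JSON-RPC 2.0; the exact placement and shape of Remb's `_meta.tool_budget` field here is an assumption, not documented API):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list",
  "params": {
    "_meta": { "tool_budget": 12 }
  }
}
```

With a budget of 12, the server would return only the dozen tools it ranks most relevant to the current task, keeping the other 87 out of your agent's prompt.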

One URL, any agent

Cursor, Claude Code, VS Code, Continue, Windsurf, Hermes, Codex, OpenCode, Aider — anything that speaks MCP. Coding agent, research agent, support agent — one HTTPS endpoint and a Bearer token. Done.

Local stdio via remb serve

Run `remb serve` for local stdio transport. Same 99 tools, proxied from your AI client to the Remb API. Install once with Homebrew; it auto-injects your project slug from .remb.yml.

Auto-loaded every session

Your agent calls session_start at the beginning of every chat — loading core memories, project context, and conversation history automatically. Works with or without a scanned project. No nudges needed.
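For illustration, that opening call could look like this as a raw MCP `tools/call` request — the tool name `remb__session_start` follows the `remb__` prefix convention above, but the argument names are illustrative, not confirmed:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "remb__session_start",
    "arguments": { "project": "YOUR_PROJECT_SLUG" }
  }
}
```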

HTTP · Browser OAuth

cline_mcp.json
{
  "mcpServers": {
    "remb": {
      "type": "http",
      "url": "https://www.useremb.com/api/mcp"
    }
  }
}

Local stdio (offline)

cline_mcp.json
{
  "mcpServers": {
    "remb": {
      "command": "remb",
      "args": ["serve", "--project", "YOUR_PROJECT_SLUG"]
    }
  }
}

The promise

Every session starts where the last one ended.

Most context tools index your code. Remb remembers your decisions, your patterns, and your prior conversations — and hands them to whichever agent you open next.

Features

Memory that scales with your work

From three-tier memory to a 99-tool MCP server with built-in scratchpad, briefings, and degradation telemetry, Remb gives any AI agent the persistent context it needs to understand your work, remember your decisions, and think like a teammate.
Core

Three-Tier Memory System

Core memories load every session. Active memories surface on-demand based on what you're working on. Archive stores everything long-term. Any agent — coding, research, support, ops — always has the right context at the right time.

core · active · archive
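For illustration only, a save into a specific tier might look like this as an MCP tool call — the tool name `remb__memory_save` and the `tier` argument are hypothetical, not documented API:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "remb__memory_save",
    "arguments": {
      "tier": "core",
      "content": "We use PostgreSQL with Prisma; no raw SQL in request handlers."
    }
  }
}
```

A core memory like this would then load automatically at the start of every session, while active and archive entries surface only when relevant.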

5-Phase Codebase Scanning

Scout, Analyze, Architect, Review, and Finalize — a multi-agent pipeline that maps features, code symbols, architecture layers, and dependency graphs from your entire repo. Optional: memory works without a scanned project.

Conversation Continuity

Every session is logged and semantically indexed. Your AI starts each conversation knowing what was discussed, built, and decided before — zero context lost.

Multi-project

Cross-Project Intelligence

Search memories and patterns across all your projects. Tell your AI "do it like project X" and it pulls matching architecture, decisions, and implementations.

Interactive Code Graph

A queryable knowledge graph of every function, class, and component. Trace call chains, imports, and data flows. 8 relationship types with confidence scoring.

AI Chat with Full Context

Chat with your codebase backed by memories, code symbols, and conversation history — all assembled automatically. Built-in AI with Anthropic, OpenAI, and Gemini.

Global User Memory

Your personal AI brain. Save preferences, lessons learned, and research across all your projects so you never have to repeat yourself again.

Secure by Default

OAuth PKCE authentication, credential files stored with chmod 600, scoped tokens per project, WebAuthn passkey support, and built-in 2FA.

Web Dashboard

Visual project explorer, interactive feature graph, memory manager, conversation browser, and an MCP hub for connecting external AI tools — all in one interface.

Context Engineering

Scratchpad & Briefings

Offload large tool outputs (scans, audits, diffs) to a session-scoped scratchpad so they stay out of your prompt until needed. Save typed handoff briefings between sessions — focus, decisions, blockers, files — instead of replaying entire transcripts.
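As a rough sketch of a typed handoff briefing (hypothetical tool name and field names; the real schema may differ):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "remb__briefing_save",
    "arguments": {
      "focus": "Migrate auth to passkeys",
      "decisions": ["Keep OAuth PKCE as the fallback flow"],
      "blockers": ["WebAuthn tests flaky on CI"],
      "files": ["src/auth/passkey.ts"]
    }
  }
}
```

The next session starts from these four fields instead of replaying the full transcript.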

Context Engineering

Memory Degradation Telemetry

Memories that get retrieved often but lead to wrong outputs are quietly poisoning your context. Remb tracks success/rejection/undo per memory and surfaces quarantine candidates before they pollute the next session.

Procedural Memory

AI Skills Library

Reusable procedural memory the agent can search, load, and self-heal. Save how you do something once — "how we deploy the worker", "our Redis retry pattern" — and Remb auto-suggests it next session via semantic match. Versioned, patchable, project- or globally-scoped.

Context Engineering

Multi-Agent Role Contracts

Typed planner → researcher → implementer → reviewer handoffs. Bad transitions or missing payload keys fail loudly here instead of silently downstream. Ships with a tool budget so 99 tools shrink to the top-K relevant for the current task.
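A hypothetical example of what one typed handoff payload might contain (every field name here is illustrative, not the documented contract schema):

```json
{
  "from_role": "researcher",
  "to_role": "implementer",
  "payload": {
    "findings": "Existing Redis retry pattern covers this case",
    "files_to_touch": ["src/queue/retry.ts"],
    "open_questions": []
  }
}
```

A contract would fail this transition loudly if a required key like `findings` were missing, or if the `researcher → implementer` edge were not an allowed step in the pipeline.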

Get Started

Ready in seconds

Two paths. Humans connect a GitHub repo. Agents drop one MCP snippet. No CLI to install. No background daemon. No extension to configure.

Cursor

Add to ~/.cursor/mcp.json

json
{
  "mcpServers": {
    "remb": {
      "type": "http",
      "url": "https://www.useremb.com/api/mcp"
    }
  }
}

Claude Code

One command, native MCP

bash
claude mcp add remb https://www.useremb.com/api/mcp

VS Code

Native MCP support (1.99+)

bash
code --add-mcp '{"name":"remb","type":"http","url":"https://www.useremb.com/api/mcp"}'

Continue / Windsurf / Hermes

Any MCP-compatible client

json
{
  "mcpServers": {
    "remb": {
      "type": "http",
      "url": "https://www.useremb.com/api/mcp"
    }
  }
}

Quick Start

From zero to full context

Three steps. No CLI install. Works with any MCP-compatible IDE: Cursor, Claude Code, VS Code, Continue, Windsurf, Hermes.
Step 1 of 3

Sign in & get your API key

Open useremb.com, sign in with GitHub, go to Settings → API Keys → Create. Memory works immediately — no project needed. Optionally connect a GitHub repo to add codebase scanning on top.

Step 2 of 3

Generate an API key

Settings → API Keys → Create. Scope it to one project or all of them. Revoke any time. Use API keys for unattended agents or CI — no browser flow needed.

Step 3 of 3

Drop the MCP snippet into your IDE

Pick your IDE above, copy the snippet, paste, restart. From then on, every chat in that IDE auto-loads your project’s context.

mcp-config.json
{
  "mcpServers": {
    "remb": {
      "type": "http",
      "url": "https://www.useremb.com/api/mcp",
      "headers": { "Authorization": "Bearer YOUR_REMB_API_KEY" }
    }
  }
}

Get Started

Give your AI a permanent memory

Remb is the persistent memory and context layer for any AI agent — coding, research, support, ops. Your decisions, conventions, prior work, and project knowledge survive across every conversation, in every tool — automatically.

99 MCP tools

21 modules — memory, context, scans, briefings, more

3-tier memory

Core, active, and archive layers

Vendor-neutral

Cursor, Claude Code, Hermes — any MCP agent

Cross-project

Search patterns across every project you connect

Conversation history

Every session logged, summarised, indexed

5-phase scanning

Optional deep codebase analysis pipeline