Documentation

Context engineering,
not codebase indexing

Remb persists what AI tools normally lose — decisions, patterns, and prior conversations. Memories accumulate, feature context updates as you work, and session history carries forward automatically. Every AI editor or agent you open starts already oriented to your project, without re-reading a file or re-explaining a decision.

Quick Start

Up and running in minutes

Choose your integration path. Context starts accumulating from the first session — no scan or repository required.

01

Add MCP configuration

// .vscode/mcp.json  (VS Code)
// .cursor/mcp.json  (Cursor)
// .mcp.json         (Claude Code)
// Same server entry everywhere — but note that Cursor and
// Claude Code use an "mcpServers" top-level key instead of "servers":
{
  "servers": {
    "remb": {
      "command": "remb",
      "args": ["serve"]
    }
  }
}
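For Cursor and Claude Code, the same server entry sits under the "mcpServers" wrapper key their config formats expect:

```json
{
  "mcpServers": {
    "remb": {
      "command": "remb",
      "args": ["serve"]
    }
  }
}
```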
02

Add an instructions file

// .github/copilot-instructions.md (VS Code)
// CLAUDE.md (Claude Code) · AGENTS.md (Codex CLI) · .cursorrules (Cursor)
// The content is the same across clients.
// Place it wherever your agent reads project instructions:

At session start, call remb__session_start with
mode='lean' to load a compact context manifest.
Call remb__memory_get for specific memories as needed.

After completing work, call remb__conversation_log
to record what was accomplished.
03

Context builds as you work

// No bootstrap required. Your agent saves as it goes:
//
//   remb__memory_create    — decisions and patterns
//   remb__context_save     — feature-level notes
//   remb__conversation_log — session summaries
//
// By the second session, your agent already understands
// your architecture, conventions, and prior decisions.
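Under the hood, each of these saves is an ordinary MCP tool call. A minimal sketch of the JSON-RPC 2.0 frame a client sends — the frame shape follows the Model Context Protocol, but the argument schema shown here is illustrative, not Remb's documented one:

```python
import json

def tool_call(request_id: int, name: str, arguments: dict) -> str:
    """Serialize one JSON-RPC 2.0 "tools/call" frame, as an MCP client does."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# The "content" argument below is a placeholder, not Remb's real schema.
frame = tool_call(1, "remb__memory_create", {
    "content": "Chose Postgres over SQLite for multi-writer support.",
})
print(frame)
```

The agent issues frames like this over stdio or HTTP; the server routes them to the matching remb__ tool.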

Capabilities

Platform capabilities

Architecture

How it all connects

Remb sits between your AI tools and a persistent knowledge store. Each layer has a distinct role — and they compose cleanly.

AI Editors & Agents

Any MCP-compatible client

VS Code, Cursor, Windsurf, Claude Code, Codex CLI, Hermes, and any other tool that speaks the Model Context Protocol. The editor or agent issues tool calls; Remb handles the rest.

MCP over HTTP or local stdio via remb serve
Remb MCP Server

100+ tools, 21 modules

The MCP layer your agent connects to. It exposes memory reads and writes, context lookups, conversation history, scan results, skills retrieval, briefings, and role contracts — all under the remb__ namespace.

One endpoint · Bearer token auth
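Over HTTP, a tool call is a JSON-RPC POST with a bearer token. A sketch using only the standard library — the endpoint URL and token are placeholders; substitute your deployment's real values:

```python
import json
import urllib.request

# Both values are placeholders, not real Remb defaults.
ENDPOINT = "https://remb.example/mcp"
TOKEN = "your-bearer-token"

# A "tools/list" call is a handy connectivity check: it takes no arguments.
body = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {},
}).encode()

req = urllib.request.Request(
    ENDPOINT,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
)
# urllib.request.urlopen(req) would send it; omitted in this sketch.
```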
Context Engine

Persistent knowledge store

The backing layer that stores memories by tier, feature context entries, conversation logs, scan artifacts, and project metadata. Every tool call reads from or writes to this store — so context compounds with every session.

Memories · Context · Conversations · Scans
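Conceptually, the store is a set of tiered collections that every tool call appends to or reads from. An illustrative sketch of that shape — the tier names and fields are assumptions for the example, not Remb's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Memory:
    content: str
    tier: str = "project"  # illustrative tiers: "project", "feature", "session"
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

store: dict[str, list[Memory]] = {}

def save(memory: Memory) -> None:
    """Append a memory under its tier; reads can then filter by tier."""
    store.setdefault(memory.tier, []).append(memory)

save(Memory("Auth uses JWT with a 15-minute expiry."))
save(Memory("Billing is driven by Stripe webhooks.", tier="feature"))
print(sorted(store))  # → ['feature', 'project']
```

Because writes accumulate rather than replace, a later session's reads see everything earlier sessions saved — which is why context compounds.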
Web Dashboard & CLI

Inspect and manage

The dashboard gives you a visual interface to browse memories, review conversation history, explore the code graph, and manage projects. The CLI provides the same capabilities in your terminal — and can run a local MCP proxy for offline use.

Dashboard · remb CLI · Desktop Agent

The Desktop Agent is an optional local companion that runs in your system tray. It watches active AI editors for new conversations, syncs context to the cloud in the background, and enables offline memory access — without any manual steps.

Ready to get started?

Persistent context for your AI tools, set up in under five minutes.