How the Memory Layer Works¶
The memory layer is one of claude-review's most powerful differentiators. It gives the tool a persistent, growing understanding of your codebase's recurring issues — something no single-pass review tool can offer.
The problem it solves¶
Most AI code review tools treat every review in isolation. They don't know that:
- `src/db/users.go` has had a nil pointer bug fixed three times in the last month
- Your team consistently forgets to handle the error return from `os.ReadFile`
- The same SQL injection pattern keeps appearing in newly written query builders
claude-review's memory layer solves this by storing findings across reviews and using them to make future reviews smarter.
Architecture¶
The memory layer consists of three components:
1. SQLite database (.claude-review/memory.db)¶
Three tables:
| Table | Contents |
|---|---|
| `findings` | Every finding from every review: file, line, severity, category, description, PR ref |
| `consolidations` | Cross-PR pattern summaries generated by the consolidation agent |
| `false_positives` | Finding patterns rejected by the user — excluded from future reviews |
The database lives at .claude-review/memory.db in your repo root — one database per repository. The .claude-review/ directory is already in the default .gitignore. It never leaves your machine.
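The layout can be sketched as SQLite DDL. This is an illustrative guess, not the tool's actual schema: only the fields named in the table above (file, line, severity, category, description, PR ref) come from this document, and all other column names and types are assumptions.

```python
import sqlite3

# Illustrative schema sketch for .claude-review/memory.db.
# Column names beyond those listed in the table above are
# assumptions, not the tool's actual DDL.
SCHEMA = """
CREATE TABLE IF NOT EXISTS findings (
    id          INTEGER PRIMARY KEY,
    file        TEXT NOT NULL,      -- path relative to repo root
    line        INTEGER,
    severity    TEXT,               -- e.g. low / medium / high
    category    TEXT,               -- e.g. security, types
    description TEXT,
    pr_ref      TEXT,               -- PR the finding came from
    created_at  TEXT DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS consolidations (
    id         INTEGER PRIMARY KEY,
    summary    TEXT NOT NULL,       -- compact cross-PR pattern text
    created_at TEXT DEFAULT (datetime('now'))
);
CREATE TABLE IF NOT EXISTS false_positives (
    pattern_hash TEXT PRIMARY KEY,  -- hash of the rejected pattern
    rejections   INTEGER DEFAULT 0  -- suppressed once this reaches 2
);
"""

conn = sqlite3.connect(":memory:")  # point at the real path in practice
conn.executescript(SCHEMA)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['consolidations', 'false_positives', 'findings']
```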
2. Query agent (before each review)¶
When you run claude-review diff --memory, before the finder agents are invoked:
- The query agent looks up the files changed in this diff
- It retrieves accepted findings from those same files (up to 100 most recent, across all time)
- It groups them by file to identify hotspots — files with multiple past findings
- It fetches the 5 most recent consolidated cross-PR insights
- A formatted context block is prepended to every finder agent's prompt
This means the finder agents know: "This file has had 3 security findings in the past — prioritise it."
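The steps above can be sketched with Python's stdlib `sqlite3`. The limits (100 findings, 5 insights) come from the text; the SQL, table, and column names are assumptions based on the illustrative schema, not the tool's actual queries.

```python
import sqlite3

# Sketch of the query agent's pre-review lookup: past findings for
# the changed files, grouped into hotspots, plus recent insights.
def memory_context(conn, changed_files):
    findings = []
    if changed_files:  # "IN ()" is invalid SQL, so guard the empty case
        placeholders = ",".join("?" * len(changed_files))
        findings = conn.execute(
            f"""SELECT file, severity, category FROM findings
                WHERE file IN ({placeholders})
                ORDER BY created_at DESC LIMIT 100""",
            changed_files).fetchall()

    # Group by file to surface hotspots (files with multiple findings).
    hotspots = {}
    for file, severity, category in findings:
        hotspots.setdefault(file, []).append((severity, category))

    # The 5 most recent consolidated cross-PR insights.
    insights = [r[0] for r in conn.execute(
        "SELECT summary FROM consolidations "
        "ORDER BY created_at DESC LIMIT 5")]
    return hotspots, insights
```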
3. Consolidation agent (on-wake trigger)¶
Consolidation does not require a background daemon. Instead, it fires automatically in the background at the start of any claude-review command invocation if any of the following conditions is met:
- Time trigger: 30+ minutes have elapsed since the last consolidation (and at least 1 new finding exists)
- Volume trigger: 10+ new findings have been stored since the last consolidation
- First-run: immediately, if the DB has findings and has never been consolidated
This design piggybacks on your normal usage. Close your laptop and it pauses. Open it, run any claude-review command, and it catches up — without a persistent process, without OS scheduler setup, and without anything to install.
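The trigger rules above can be sketched as a small predicate. The thresholds come from the text; the function and parameter names are assumptions, not the tool's Go API.

```python
from datetime import datetime, timedelta

# Sketch of the on-wake consolidation check. Thresholds are taken
# from the documented trigger rules; names are illustrative.
CONSOLIDATE_AFTER = timedelta(minutes=30)
VOLUME_THRESHOLD = 10

def should_consolidate(last_run, new_findings, total_findings, now=None):
    now = now or datetime.now()
    if last_run is None:                      # first run: consolidate if
        return total_findings > 0             # any findings exist at all
    if new_findings >= VOLUME_THRESHOLD:      # volume trigger
        return True
    if now - last_run >= CONSOLIDATE_AFTER:   # time trigger needs at
        return new_findings >= 1              # least one new finding
    return False
```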
When consolidation fires, you'll see a brief [memory] consolidating patterns in background... line on stderr, followed by [memory] ✓ consolidation complete when it finishes. It runs in a background goroutine — the primary command is never delayed.
During consolidation, the agent receives only metadata (file paths, descriptions, severity, PR ref) — no source code is ever sent. It produces structured summaries of recurring patterns, which are stored in the consolidations table and surfaced by claude-review insights.
An optional background daemon (claude-review memory start) is still available if you want consolidation to run even when you're not actively using the tool — for example on a shared CI machine or an always-on workstation.
Data flow¶
```
ANY claude-review command
│
├─► [On-wake check] ShouldConsolidate?
│   │ yes → background goroutine → RunConsolidation
│   │       (metadata only, no source code)
│
└─► (if claude-review diff --memory / pr --memory)
        │
┌───────▼────────┐
│  Query agent   │
│  Reads DB for  │
│  changed files │
└───────┬────────┘
        │ context block prepended to prompts
┌───────▼──────────────────┐
│  Finder agents (×4–5)    │ ← each agent sees memory context
│  Verifier agent          │
│  Ranker agent            │
└───────┬──────────────────┘
        │ findings
┌───────▼────────┐
│  Ingest agent  │
│  Writes to DB  │
└────────────────┘
```
What the memory context looks like¶
The context block prepended to each finder agent's prompt comes directly from memory.FormatContextBlock(). Given two hotspot files and a recent insight, it looks like:
```
--- MEMORY CONTEXT (from past reviews of this codebase) ---
Files with a history of bugs (prioritize these):
  src/auth/token.go — 3 past findings (security, types) top severity: high
  src/db/users.go — 2 past findings (types) top severity: high
Recent cross-PR patterns:
- auth.go has had 3 security findings across 4 PRs
- N+1 query patterns appear repeatedly in service layer
--- END MEMORY CONTEXT ---
```
The finder agent sees this before the diff, so it enters the review already aware of which areas are historically risky.
When no hotspot files or insights exist yet (e.g. first few reviews), FormatContextBlock() returns an empty string and nothing is prepended — the review runs exactly as it would without --memory.
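A Python sketch of this behaviour follows. The real implementation is `memory.FormatContextBlock()` in Go; the severity ordering, exact spacing, and data shapes here are assumptions chosen to mirror the example output above.

```python
# Sketch of FormatContextBlock(): render hotspots and insights,
# or return "" when there is nothing to show.
def format_context_block(hotspots, insights):
    if not hotspots and not insights:
        return ""  # first reviews run without any memory context
    lines = ["--- MEMORY CONTEXT (from past reviews of this codebase) ---"]
    if hotspots:  # hotspots: {file: [(severity, category), ...]}
        lines.append("Files with a history of bugs (prioritize these):")
        for file, findings in hotspots.items():
            cats = sorted({c for _, c in findings})
            top = max((s for s, _ in findings),
                      key=["low", "medium", "high"].index)
            lines.append(f"  {file} — {len(findings)} past findings "
                         f"({', '.join(cats)}) top severity: {top}")
    if insights:
        lines.append("Recent cross-PR patterns:")
        lines += [f"- {i}" for i in insights]
    lines.append("--- END MEMORY CONTEXT ---")
    return "\n".join(lines)
```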
Storage modes¶
Local-only (default)¶
.claude-review/ is gitignored. Memory exists only on your machine. CI runs start cold — no memory context, findings not persisted.
This is fine for solo developers or teams where reviews only happen locally.
Shared / CI mode¶
Remove memory.db from .gitignore (or use the orphan branch approach below) so the DB is committed to the repo. CI picks it up on checkout, runs the review with full memory context, and writes findings back after the merge.
- Git repo: `.claude-review/memory.db` (committed, or on orphan branch)
- Developer machine: `git pull` → gets full team memory
- CI: `git checkout` → gets full team memory
Recommended: orphan branch¶
Committing memory.db to your main branch creates binary file churn in git history. The clean solution is a separate orphan branch (claude-review-memory) with no shared history:
```bash
# One-time setup
git checkout --orphan claude-review-memory
git rm -rf .
echo "claude-review memory storage" > README.md
git add README.md && git commit -m "Init"
git push origin claude-review-memory
git checkout main
```
CI reads the DB from the orphan branch before each review and writes it back after merges to main. See the CI Integration guide for the full workflow.
DB size and git bloat¶
memory.db stays small by design. After every consolidation, the DB is automatically pruned:
- Findings older than 90 days are deleted
- Each file is capped at its 50 most recent findings
- Consolidated insights (compact text summaries) are never pruned
On an active repo with 10 PRs/week, the DB stabilises under 1 MB permanently. It's safe to commit to the orphan branch — binary deltas in a dedicated branch with no code are negligible.
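The pruning rules above can be sketched as SQL against the illustrative schema from earlier (the actual queries are internal to the tool; note the per-file cap uses a window function, which needs SQLite 3.25+).

```python
import sqlite3

# Sketch of post-consolidation pruning: drop findings older than 90
# days, then cap each file at its 50 most recent findings. The
# consolidations table is left untouched, as the text states.
def prune(conn):
    conn.execute(
        "DELETE FROM findings "
        "WHERE created_at < datetime('now', '-90 days')")
    conn.execute("""
        DELETE FROM findings WHERE id IN (
            SELECT id FROM (
                SELECT id, ROW_NUMBER() OVER (
                    PARTITION BY file
                    ORDER BY created_at DESC) AS rn
                FROM findings)
            WHERE rn > 50)""")
    conn.commit()
```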
Privacy¶
- `memory.db` is local by default (gitignored); opt into shared mode explicitly
- During consolidation, only metadata is sent to the Anthropic API: file paths, severity, category, description, and PR reference — no source code
- The optional daemon's PID file and log live in `~/.claude-review/` (separate from the per-repo DB)
False positives¶
If a finding appears in memory that you consider a false positive (e.g., a pattern your codebase intentionally uses), you can mark it via the Go API (memory.MarkRejected) and it will be suppressed in future reviews. The threshold is 2 rejections before a pattern is suppressed — this prevents accidentally silencing a real bug after a single dismissal.
Once a pattern hash has been rejected twice, IsFalsePositive() returns true and the ingest agent silently skips it on all future reviews.
A CLI command for managing false positives is planned for a future release. For now, they can be inspected by querying the database directly.
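Any SQLite client will do; a minimal sketch with Python's stdlib `sqlite3` follows. The `false_positives` table name comes from the schema table above, but the column names (`pattern_hash`, `rejections`) and the function itself are assumptions for illustration.

```python
import sqlite3

def list_false_positives(db_path=".claude-review/memory.db"):
    """Print stored false-positive patterns and whether each has
    reached the 2-rejection suppression threshold."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT pattern_hash, rejections FROM false_positives").fetchall()
    for pattern_hash, rejections in rows:
        status = "suppressed" if rejections >= 2 else "pending"
        print(f"{pattern_hash}  rejections={rejections}  ({status})")
    conn.close()
    return rows
```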
Clearing memory¶
Clearing the memory database wipes all findings, consolidations, and false positives for the current repo. This is useful after a major refactor, when old findings are no longer relevant.
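The same reset can be achieved by deleting the database file itself, assuming (as the Privacy section suggests) that all per-repo memory state lives in `memory.db` and only the optional daemon's PID file and log live elsewhere. A sketch:

```python
from pathlib import Path

# Reset memory by deleting the per-repo database file. Assumes all
# memory state lives in .claude-review/memory.db; the daemon's PID
# file and log live under ~/.claude-review/ and are unaffected.
def clear_memory(repo_root="."):
    db = Path(repo_root) / ".claude-review" / "memory.db"
    if db.exists():
        db.unlink()
        return True
    return False  # nothing to clear
```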
Database location¶
The default path is <repo-root>/.claude-review/memory.db — each repo has its own database. The location can be overridden in your config.
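A hypothetical sketch of such an override follows; this page does not show the actual config format, so the key names below are assumptions only, not documented configuration.

```yaml
# Hypothetical config fragment: key names are illustrative guesses.
memory:
  db_path: /path/to/shared/memory.db
```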