Open Source MCP Memory Server

Persistent memory for AI agents.

Built on Model Context Protocol.

Every session, your agents start from zero. HeurChain gives them structured memory that survives across sessions, models, and machines — with no prompt engineering required.

Built for speed. Designed for scale.

HeurChain is memory infrastructure — not a wrapper. Every number below is a design target, not a marketing claim.

Three memory tiers that behave like memory, not a database

Memory stored yesterday behaves differently from memory stored six months ago — enforced structurally, not by policy.

ops/ 3× decay · HOT

Hot operational context

Current session data, active task state, real-time debugging trails. Fades fast by design — noise from yesterday shouldn't pollute today's focus.

Use for: Active task tracking, debug sessions, short-horizon agent state
notes/ 1× decay

Reference knowledge

Cross-session knowledge, summaries, learned facts. Standard ACT-R decay rate — information persists proportionally to how often it's accessed.

Use for: User preferences, project summaries, recurring facts, domain knowledge
self/ 0.1× decay

Agent identity

Persona definitions, behavioral constraints, long-term preferences. Near-permanent — decays at one-tenth the baseline rate. Core identity should outlast the session.

Use for: Agent persona, hard constraints, long-term user preferences, identity
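The three multipliers can be read through the lens of ACT-R base-level activation. HeurChain's internal formula isn't reproduced here; this is a minimal sketch assuming the textbook ACT-R form B = ln(Σ t_j^(−d)), with the tier multiplier scaling the decay exponent d (the base_d=0.5 default is a standard ACT-R convention, not a documented HeurChain setting):

```python
import math

# Per-tier decay multipliers from the table above.
TIER_DECAY = {"ops/": 3.0, "notes/": 1.0, "self/": 0.1}

def activation(ages_seconds, tier, base_d=0.5):
    """ACT-R base-level activation: B = ln(sum(t_j ** -d)).

    ages_seconds: time since each past access of this memory.
    The tier multiplier scales the baseline decay exponent, so
    ops/ memories fade 3x faster than notes/, while self/ fades
    at one-tenth the baseline rate.
    """
    d = base_d * TIER_DECAY[tier]
    return math.log(sum(t ** -d for t in ages_seconds))

# One access an hour ago: identity decays far slower than ops context.
ages = [3600.0]
print(activation(ages, "ops/"))    # most negative: fades fastest
print(activation(ages, "notes/"))
print(activation(ages, "self/"))   # closest to zero: near-permanent
```

The point of the structure: a memory's tier changes how fast its activation falls, not just where it is stored.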

How memory flows through HeurChain

Every write is indexed. Every read is fused. Every session starts with context.

Compatible Agents
Anthropic
Kimi
OpenAI
Nous
[Architecture diagram] The agent writes via hc.add() and reads via hc.search() through the HeurChain API (MCP + decay). Writes land in Redis and two indexes: a Qdrant vector index (bge-m3 embeddings) and a sparse/keyword BM25 index. Reads from both indexes are fused with RRF and injected back into the agent as context, while the ACT-R decay scheduler runs continuously with per-tier rates: ops/ d=3.0 · notes/ d=1.0 · self/ d=0.1.

Three lines of code. Full memory stack.

  • BM25 + Qdrant hybrid search, fused via Reciprocal Rank Fusion
  • Session consolidation: ops/ memories compress automatically into 5–15-token cues
  • Working-group isolation at the vault, Redis-namespace, and Qdrant-collection level
  • MCP native — Claude Code, Cursor, Windsurf, any MCP client
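The hybrid-retrieval bullet above can be sketched concretely. Reciprocal Rank Fusion scores each document by its summed reciprocal rank across the BM25 and vector result lists; the k=60 constant is the common default from the RRF literature, not a documented HeurChain setting:

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank(d)).

    rankings: ranked lists of document ids, best first.
    Documents that rank well in both BM25 and vector search
    surface above documents that dominate only one list.
    """
    scores = defaultdict(float)
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["mem_7", "mem_2", "mem_9"]    # sparse / keyword ranking
vector = ["mem_2", "mem_5", "mem_7"]  # dense (bge-m3) ranking
print(rrf_fuse([bm25, vector]))  # → ['mem_2', 'mem_7', 'mem_5', 'mem_9']
```

Note that mem_2 wins despite topping only one list: two good ranks beat one great one, which is exactly the behavior you want when keyword and semantic signals disagree.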
from heurchain import HeurChain

hc = HeurChain(
    url="http://localhost:3010",
    token="your-token"
)

# Store a memory
hc.add(
    "User prefers dark mode and speaks Spanish",
    user_id="user_123"
)

# Search memories
results = hc.search(
    "display preferences",
    user_id="user_123"
)

# Get proactive context at session start
context = hc.context(user_id="user_123")
# Example response shape:
# { "memories": [...], "decay_scores": [...], "relevance": 0.97 }
import { HeurChain } from "heurchain"

const hc = new HeurChain({
  url: "http://localhost:3010",
  token: "your-token",
})

// Store a memory
await hc.add(
  "User prefers dark mode and speaks Spanish",
  { userId: "user_123" }
)

// Search memories
const results = await hc.search(
  "display preferences",
  { userId: "user_123" }
)

// Get proactive context at session start
const context = await hc.context({ userId: "user_123" })
// Example response shape:
// { "memories": [...], "decay_scores": [...], "relevance": 0.97 }
# docker-compose.yml
services:
  heurchain:
    image: ghcr.io/peterjohannmedina/heurchain:latest
    ports:
      - "3010:3010"
    environment:
      - REDIS_URL=redis://redis:6379
      - QDRANT_URL=http://qdrant:6333
      - EMBED_URL=http://embedding:8080
      - BEARER_TOKEN=your-token
    depends_on:
      - redis
      - qdrant
      - embedding

  embedding:
    image: ghcr.io/peterjohannmedina/heurchain-embed:latest
    # BAAI/bge-m3 — GPU optional, CPU fallback included

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

  qdrant:
    image: qdrant/qdrant:latest
    volumes:
      - qdrant_data:/qdrant/storage

volumes:
  redis_data:
  qdrant_data:
docker compose up -d # Running in <10 min

Memory for your agents. Pick your tier.

Solo agents from $5/mo. Working groups at $49.99/mo flat. All tiers run the full ACT-R memory engine — no prompt engineering required.

Community
Self-Hosted
Free

MIT licensed. Run anywhere Docker runs. No account required.


  • All 3 memory tiers (ops/ notes/ self/)
  • ACT-R temporal decay engine
  • BM25 + Qdrant hybrid retrieval
  • Session consolidation
  • MCP server (stdio + HTTP)
  • Docker in under 10 minutes
  • MIT license
View self-hosted docs
Workgroup
Managed
$49.99/mo

Per working group. 10M tokens included. Add groups at $9.99/mo each.


  • 1 working group (add more at $9.99/mo)
  • 10M tokens included per group
  • All Community features, fully managed
  • Bearer + JWT authentication
  • Token usage dashboard
  • Email support, 48h response
  • 99.5% uptime SLA
Building Out — Available Soon
Start Workgroup
Enterprise
Custom
Custom

Dedicated infrastructure. Negotiated SLA. No shared tenancy.


  • Negotiated uptime SLA with financial remedies
  • Dedicated infrastructure, no shared tenancy
  • JWT auth + SSO / IdP integration
  • Custom consolidation policies
  • Priority support with dedicated contact
Contact us
7-day free trial — Solo plan

Start free. Your card isn't charged until day 8. Cancel before then and you owe nothing. No hoops, no emails asking why you left.

Walk-away guarantee

If HeurChain doesn't make your AI smarter — or you're unhappy for any reason — export your entire vault as JSON and leave. We'll cancel your subscription and you won't be charged.

Overage: Solo $2.00/M tokens · Workgroup $1.50/M tokens above quota. Token counting uses cl100k_base. Search queries are not counted toward quota.
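A quick sanity check on the overage arithmetic, assuming (as the rates above suggest) that the per-million rate applies only to tokens beyond the included quota:

```python
def monthly_bill(tokens_used, base_fee, quota_tokens, overage_per_m):
    """Base fee plus per-million-token overage beyond the included quota."""
    overage_tokens = max(0, tokens_used - quota_tokens)
    return base_fee + (overage_tokens / 1_000_000) * overage_per_m

# Workgroup: $49.99/mo, 10M tokens included, $1.50 per extra million.
# 13M tokens used → $49.99 + 3 × $1.50 ≈ $54.49
print(monthly_bill(13_000_000, 49.99, 10_000_000, 1.50))
```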