Built on the Model Context Protocol.
Every session, your agents start from zero. HeurChain gives them structured memory that survives across sessions, models, and machines — with no prompt engineering required.
HeurChain is memory infrastructure — not a wrapper. Every number below is a design target, not a marketing claim.
Memory stored yesterday behaves differently from memory stored six months ago — enforced structurally, not by policy.
Current session data, active task state, real-time debugging trails. Fades fast by design — noise from yesterday shouldn't pollute today's focus.
Cross-session knowledge, summaries, learned facts. Standard ACT-R decay rate — information persists in proportion to how often it's accessed.
Persona definitions, behavioral constraints, long-term preferences. Near-permanent — decays at one-tenth the baseline rate. Core identity should outlast the session.
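The tiered decay above follows ACT-R's base-level activation: a memory's strength is the log of its power-law-decayed access recencies summed, so frequently accessed items stay retrievable while stale ones fade. A minimal sketch of the math — function name and parameters here are illustrative, not HeurChain's actual API:

```python
import math

def activation(access_ages_seconds, d=0.5):
    """ACT-R base-level activation: ln(sum of t^-d over past accesses).

    d=0.5 is the standard ACT-R decay rate (the cross-session tier);
    persona-tier memories would use d=0.05, one-tenth the baseline.
    Illustrative sketch only, not HeurChain's internal implementation.
    """
    return math.log(sum(t ** -d for t in access_ages_seconds))

# A memory accessed 1 hour ago and 1 day ago, standard decay:
session_strength = activation([3600, 86400])

# Same access pattern at persona-tier decay: far higher activation,
# so the memory stays retrievable much longer.
persona_strength = activation([3600, 86400], d=0.05)
```

The slower exponent means persona memories lose activation roughly an order of magnitude more gradually than session data with an identical access history.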
Every write is indexed. Every read is fused. Every session starts with context.
```python
from heurchain import HeurChain

hc = HeurChain(
    url="http://localhost:3010",
    token="your-token",
)

# Store a memory
hc.add(
    "User prefers dark mode and speaks Spanish",
    user_id="user_123",
)

# Search memories
results = hc.search(
    "display preferences",
    user_id="user_123",
)

# Get proactive context at session start
context = hc.context(user_id="user_123")
```
```typescript
import { HeurChain } from "heurchain"

const hc = new HeurChain({
  url: "http://localhost:3010",
  token: "your-token",
})

// Store a memory
await hc.add(
  "User prefers dark mode and speaks Spanish",
  { userId: "user_123" }
)

// Search memories
const results = await hc.search(
  "display preferences",
  { userId: "user_123" }
)

// Get proactive context at session start
const context = await hc.context({ userId: "user_123" })
```
```yaml
# docker-compose.yml
services:
  heurchain:
    image: ghcr.io/peterjohannmedina/heurchain:latest
    ports:
      - "3010:3010"
    environment:
      - REDIS_URL=redis://redis:6379
      - QDRANT_URL=http://qdrant:6333
      - EMBED_URL=http://embedding:8080
      - BEARER_TOKEN=your-token
    depends_on:
      - redis
      - qdrant
      - embedding

  embedding:
    image: ghcr.io/peterjohannmedina/heurchain-embed:latest
    # BAAI/bge-m3 — GPU optional, CPU fallback included

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

  qdrant:
    image: qdrant/qdrant:latest
    volumes:
      - qdrant_data:/qdrant/storage

volumes:
  redis_data:
  qdrant_data:
```
Solo agents from $5/mo. Working groups at $49.99/mo flat. All tiers run the full ACT-R memory engine — no prompt engineering required.
MIT licensed. Run anywhere Docker runs. No account required.
One agent. Fully managed. No infra, no Docker, no ops.
Per working group. 10M tokens included. Add groups at $9.99/mo each.
Dedicated infrastructure. Negotiated SLA. No shared tenancy.
Start free. Your card isn't charged until day 8. Cancel before then and you owe nothing. No hoops, no emails asking why you left.
If HeurChain doesn't make your AI smarter — or you're unhappy for any reason — export your entire vault as JSON and leave. We'll cancel your subscription and you won't be charged.
Overage: Solo $2.00/M tokens · Workgroup $1.50/M tokens above quota. Token counting uses cl100k_base. Search queries are not counted toward quota.
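The overage math is straightforward: only tokens beyond the included quota are billed, at the tier's per-million rate. A minimal sketch — the function and its defaults are illustrative, and actual token counts come from cl100k_base tokenization, not from this code:

```python
def overage_cost(tokens_used, quota=10_000_000, rate_per_million=1.50):
    """Cost of tokens beyond quota.

    Defaults assume the Workgroup tier (10M tokens included,
    $1.50/M over quota); Solo would use rate_per_million=2.00.
    Illustrative only. Search queries don't count toward quota.
    """
    excess = max(0, tokens_used - quota)
    return excess / 1_000_000 * rate_per_million

# 12M tokens on a Workgroup plan: 2M over quota at $1.50/M
print(overage_cost(12_000_000))  # → 3.0
```

Under quota, the cost is simply zero; there is no minimum overage charge implied by the pricing copy above.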