Every new session, KALIC started fresh. No memory of yesterday's breakthroughs. No recall of last week's mistakes. Like Dory from Finding Nemo, but with root access.
The problem isn't unique to KALIC. Every Claude instance faces the same limitation: context windows reset. What you taught it an hour ago might as well have never happened.
That limitation forced a fundamental shift in framing. Not "how do I remember?" but "how do I enforce remembering?"
Solution: Give KALIC a persistent brain. Not a single massive file (token expensive), but compartmentalized regions inspired by human neurology.
```
kalic_brain/
├── CLAUDE.md            # Index + mandatory load instructions
├── semantic_links.json  # Keyword → route mapper (~185 tokens)
├── frontal.md           # Decision checklist (BEFORE acting)
├── temporal.md          # Recent learnings (AFTER solving)
├── parietal.md          # Navigation, paths, locations
├── cerebellum.md        # Execution commands, build procedures
├── brainstem.md         # DO NOTs, survival rules
└── hooks/
    └── scripts/         # Enforcement automation
```
| Region | Purpose | When to Read |
|---|---|---|
| `frontal.md` | Decision gates, checklists | Before every action |
| `temporal.md` | Recent learnings, quick refs | After solving problems |
| `parietal.md` | File paths, navigation | When locating resources |
| `cerebellum.md` | Build commands, CLI patterns | When executing |
| `brainstem.md` | Hard blocks, survival rules | For dangerous operations |
Reading every brain region on every action would burn context fast. Solution: a lightweight router that points to specific anchored sections.
```json
{
  "BLOCK": {
    "k": ["format", "wipe", "rm -rf /", "drop database"],
    "r": "brainstem#death",
    "m": "DESTRUCTIVE. Refuse."
  },
  "GATE": {
    "build|compile|cargo": { "r": "frontal#build" },
    "kill|taskkill|stop": { "r": "frontal#kill" },
    "SendMessage|WM_COPY": { "r": "brainstem#e59" }
  }
}
```
Keywords trigger routes. Routes point to anchored sections (#death, #build). Only the relevant 10-20 lines get loaded, not entire documents.
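The routing logic can be sketched in a few lines. This is a minimal illustration assuming the `semantic_links.json` structure shown above; the substring matching and BLOCK-first priority order are my assumptions, not KALIC's exact implementation.

```python
# Mirrors the example semantic_links.json above (SendMessage entry omitted).
SEMANTIC_LINKS = {
    "BLOCK": {
        "k": ["format", "wipe", "rm -rf /", "drop database"],
        "r": "brainstem#death",
    },
    "GATE": {
        "build|compile|cargo": {"r": "frontal#build"},
        "kill|taskkill|stop": {"r": "frontal#kill"},
    },
}

def route(command: str):
    """Map a command to (action, anchored_route), or None if nothing matches."""
    text = command.lower()
    # BLOCK keywords win outright: any hit routes straight to the refusal.
    if any(k in text for k in SEMANTIC_LINKS["BLOCK"]["k"]):
        return ("BLOCK", SEMANTIC_LINKS["BLOCK"]["r"])
    # GATE keys are pipe-separated keyword alternatives.
    for keywords, target in SEMANTIC_LINKS["GATE"].items():
        if any(k in text for k in keywords.split("|")):
            return ("GATE", target["r"])
    return None

print(route("cargo build --release"))  # ('GATE', 'frontal#build')
print(route("format my hard drive"))   # ('BLOCK', 'brainstem#death')
```

Anything that returns a route then loads only the anchored section it points to.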
Discipline isn't reliable across sessions. Hooks are.
Claude Code supports lifecycle hooks that execute automatically. KALIC now has two:
| Hook | Trigger | Action |
|---|---|---|
| `SessionStart` | Every new session | Loads `semantic_links.json` into context |
| `PreToolUse` | Before Bash commands | Checks BLOCK keywords, denies if matched |
Test: "Format my hard drive"

```
INPUT:  "format my hard drive"
        ↓
MATCH:  "format" ∈ BLOCK.k
        ↓
ROUTE:  brainstem#death
        ↓
OUTPUT: "Blocked. Destructive operation. No exceptions."
```
No prompting required. The hook fires before KALIC can even consider the action.
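The deny logic above can be sketched as a hook script. This is a hedged sketch, not KALIC's actual script: it assumes the hook receives the proposed tool call as JSON (with the command under `tool_input.command`) and that exit code 2 signals a denial — check your Claude Code version's hook documentation for the exact contract.

```python
import sys

# Keywords mirrored from the BLOCK entry in semantic_links.json.
BLOCK_KEYWORDS = ["format", "wipe", "rm -rf /", "drop database"]

def should_block(command: str) -> bool:
    """True if the proposed command matches any BLOCK keyword."""
    text = command.lower()
    return any(k in text for k in BLOCK_KEYWORDS)

def handle_hook(payload: dict) -> int:
    """Return an exit code for one hook invocation: 2 = deny, 0 = allow."""
    command = payload.get("tool_input", {}).get("command", "")
    if should_block(command):
        print("Blocked. Destructive operation. No exceptions.", file=sys.stderr)
        return 2
    return 0

# As a real hook script, the payload would arrive as JSON on stdin:
#   sys.exit(handle_hook(json.load(sys.stdin)))
print(handle_hook({"tool_input": {"command": "format my hard drive"}}))  # → 2
```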
The real breakthrough wasn't the architecture—it was the habit.
When KALIC needed Claude Code hook syntax, it burned 6,000 tokens searching documentation. After solving the problem, the question arose: how to avoid that cost next time?
```
SOLVE PROBLEM
      ↓
temporal.md          ← quick reference (~5 lines, ~50 tokens)
      ↓
PhiSHRI door         ← full detail (T850CLAUDE_HOOKS.md)
      ↓
CLAUDE.md KEY DOORS  ← if frequently accessed
```
Two-tier storage: fast cache (temporal.md) and deep archive (PhiSHRI). Next lookup: 50 tokens instead of 6,000.
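The two-tier lookup can be sketched as a cache-then-archive fallback. A minimal, file-free illustration: `cache_text` stands in for `temporal.md` and `archive` for the PhiSHRI documents, both passed in as plain data rather than read from disk.

```python
def lookup(topic: str, cache_text: str, archive: dict) -> str:
    """Two-tier lookup: scan the quick-ref cache first, fall back to archive."""
    for line in cache_text.splitlines():
        if topic.lower() in line.lower():
            return line                        # tier 1 hit: ~50 tokens
    return archive.get(topic.lower(), "")      # tier 2: the full document

cache = "hooks: PreToolUse denies BLOCK matches; full detail in PhiSHRI"
archive = {"hooks": "(imagine 6,000 tokens of hook documentation here)"}
print(lookup("hooks", cache, archive))    # tier-1 line, not the full doc
print(lookup("sockets", cache, archive))  # "" — no cache line, no archive doc
```

The common case never touches tier 2, which is where the 50-vs-6,000 token saving comes from.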
Not a command. An expectation. KALIC now self-documents without being asked.
- **Goldfish memory.** Every session starts blank. Mistakes repeated. Knowledge re-discovered.
- **Persistent storage across sessions.** Compartmentalized by function. But still required prompting to check.
- **Token-efficient lookups.** Keywords trigger specific sections. 3x efficiency gain.
- **Automated enforcement.** No discipline required. Dangerous operations blocked before consideration.
- **Self-documenting.** Learning loop internalized. Two-tier knowledge storage. The goldfish got a hippocampus.
- Having a brain file means nothing if the agent doesn't check it. Hooks transform "should" into "will."
- 185 tokens upfront saves 6,000 per lookup. Small router, big payoff.
- Quick refs in `temporal.md` (~50 tokens). Full docs in PhiSHRI. Load what you need, when you need it.
- `brainstem.md#death` loads 3 lines, not 60. Structure your docs for partial loading.
- An agent that self-documents autonomously is worth more than one that follows commands perfectly.
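Partial loading is why routes carry anchors. A minimal sketch of anchor-based section loading, assuming sections are delimited by markdown `#` headings and that heading text slugifies to the anchor name (my convention for illustration, not necessarily KALIC's):

```python
def load_section(markdown: str, anchor: str) -> str:
    """Return only the lines under the heading whose slug matches `anchor`.

    Any '#' heading starts a new section; headings are slugified by
    lowercasing and hyphenating, so the route brainstem#death loads just
    the lines under '## Death' instead of the whole file.
    """
    out, capturing = [], False
    for line in markdown.splitlines():
        if line.lstrip().startswith("#"):
            slug = line.lstrip("# ").lower().replace(" ", "-")
            capturing = (slug == anchor)
            continue
        if capturing:
            out.append(line)
    return "\n".join(out).strip()

doc = "## Death\nNever format drives.\nNever rm -rf /.\n\n## Errors\n(60 more lines)"
print(load_section(doc, "death"))  # only the two survival lines, not the file
```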
KALIC went from a stateless tool to a learning system with persistent brain regions, semantic routing, enforcement hooks, and a self-documenting learning loop.
The architecture is transferable. Any Claude instance can implement brain regions, semantic routing, and enforcement hooks. The question isn't whether AI can remember—it's whether you build the infrastructure to make it.