Alignment is memory architecture, not model weights.

Personalization IS alignment.

Download Chrome Extension

Beta · Load as unpacked extension in Chrome

The core insight

Current AI memory systems store facts about users. Hearth stores value functions that guide behavior.

The difference: "User got frustrated when I gave long answers" is a record. "This user optimizes for directness over completeness—compress, don't expand" is a seed that shapes every future response.

One depletes your context window. The other compounds.
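A rough sketch of that distinction in TypeScript; the type and field names here are hypothetical, not Hearth's actual schema. A fact record logs an event, while a value-function seed encodes a preference the model can act on in every future response.

```typescript
// Hypothetical types for illustration only; not Hearth's real schema.

// A fact record: a log of something that happened.
interface FactRecord {
  observation: string; // e.g. "User got frustrated by a long answer"
  timestamp: string;
}

// A value-function seed: a generalized preference that guides future behavior.
interface ValueSeed {
  dimension: string; // the axis of preference, e.g. "directness vs. completeness"
  guidance: string;  // how to act on it, e.g. "compress, don't expand"
  weight: number;    // how strongly it should shape responses
}

const record: FactRecord = {
  observation: "User got frustrated when I gave long answers",
  timestamp: "2024-05-01T12:00:00Z",
};

const seed: ValueSeed = {
  dimension: "directness vs. completeness",
  guidance: "Compress, don't expand",
  weight: 0.9,
};
```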

What the research shows

Bidirectional Memory

Most memory is one-way. 🔥 Hearth captures both directions: what you said, and how the AI is learning to work with you. A compounding asset.
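One way to picture a bidirectional entry, as a sketch with assumed field names rather than Hearth's real format: it pairs what you expressed with what the assistant learned about working with you.

```typescript
// Illustrative shape only; field names are assumptions, not Hearth's API.
interface BidirectionalEntry {
  userSignal: string;        // what you said or did
  assistantLearning: string; // how the AI is adapting to you
  capturedAt: string;
}

const entry: BidirectionalEntry = {
  userSignal: "Asked for the short version three times in one session",
  assistantLearning: "Default to a summary first; expand only on request",
  capturedAt: "2024-05-01T12:00:00Z",
};
```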

Constraints > Prescriptions

30× difference in sycophantic responses. 4× higher entropy. Telling a model what NOT to do creates sustained influence; telling it what to do collapses output diversity.
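For illustration only (the wording below is invented, not drawn from the cited research), the same intent framed as a constraint versus a prescription:

```typescript
// Invented examples of the two framings; not taken from the research itself.
const constraintFramed = "Don't pad answers with hedging or restated context.";
const prescriptionFramed = "Always answer in exactly three short sentences.";
```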

Rich Context = Focused Variance

71.3% reduction in problematic tokens. 2.45 bits entropy increase. Personhood isn't a constraint on creativity—it's a generative force.

How it works

Install the extension

Download and unzip the package, then open chrome://extensions, enable Developer mode, and choose "Load unpacked" to select the unzipped folder.

Chat with Claude normally

🔥 Hearth runs in the background on claude.ai, building your Operator Specification — a structured document describing your identity, constraints, and behavioral guidance.
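A minimal sketch of what an Operator Specification might contain, assuming the three-part structure described above; the interface and field names are hypothetical, not the extension's actual format.

```typescript
// Hypothetical OpSpec shape for illustration; not the extension's real schema.
interface OperatorSpec {
  identity: string[];    // stable facts about who you are and how you work
  constraints: string[]; // what the assistant should NOT do
  guidance: string[];    // behavioral direction distilled from past sessions
}

const opSpec: OperatorSpec = {
  identity: ["Backend engineer; reads fast, skims long prose"],
  constraints: ["Don't restate the question before answering"],
  guidance: ["Optimize for directness over completeness"],
};
```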

Watch it compound

The OpSpec is injected at inference time. No fine-tuning. Just context that shapes every token.
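Conceptually, inference-time injection just means serializing the OpSpec and prepending it to the prompt context before a request goes out; no weights change. A minimal sketch under that assumption, with a hypothetical injectOpSpec helper:

```typescript
// Sketch only: prepend the OpSpec to the outgoing prompt as a structured
// preamble. The OpSpec type and helper name are illustrative assumptions.
type OpSpec = {
  identity: string[];
  constraints: string[];
  guidance: string[];
};

function injectOpSpec(spec: OpSpec, userMessage: string): string {
  const preamble = [
    "## Operator Specification",
    ...spec.identity.map((line) => `- identity: ${line}`),
    ...spec.constraints.map((line) => `- constraint: ${line}`),
    ...spec.guidance.map((line) => `- guidance: ${line}`),
  ].join("\n");

  return `${preamble}\n\n${userMessage}`;
}
```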