ABOUT WIDEMEM
Built in the open because the alternatives were either toys or managed services holding your data hostage. Neither is OK.
The problem
AI agents forget everything between sessions. The standard fix is to stuff more text into the context window, which is not memory. It is a bigger desk. A real memory layer needs to score what matters, forget what does not, survive restarts, and refuse to silently lose a user's medication list because 72 hours passed and the decay function got bored.
The existing options fell into two camps: open-source libraries that treat every fact the same, pile up contradictions, and fall over past a few thousand memories; or managed services that want your data in their cloud, charge hundreds of dollars a month, and still can't explain why they lost that allergy note.
Why I built this
I wanted a memory layer for my own AI agents that I could run locally, audit end to end, and trust with YMYL-class data ("your money or your life": health, financial, and legal facts where forgetting is not a minor regression). Nothing existed that did all three. So I wrote it.
The core ideas (importance scoring, temporal decay, batch conflict resolution, hierarchical memory, YMYL prioritization, confidence-aware retrieval) came from the same frustration. A memory system should know that “is allergic to penicillin” and “had pasta last Tuesday” are not the same kind of fact. It should rank them, decay them, and admit when it has no answer instead of hallucinating one.
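To make the importance-versus-decay idea concrete, here is a minimal sketch of importance-weighted temporal decay. This is an illustration of the concept, not widemem's actual implementation; the half-life values and the `retention` formula are assumptions:

```python
import math

# Illustrative half-lives in hours: critical (YMYL) facts decay far slower
# than routine ones. These numbers are invented for the example.
HALF_LIFE = {"ymyl": 24 * 365, "routine": 72}

def retention(importance: str, age_hours: float) -> float:
    """Exponential decay: returns a score in (0, 1]; higher = more retrievable."""
    return math.exp(-math.log(2) * age_hours / HALF_LIFE[importance])

# After one week (168 hours):
allergy = retention("ymyl", 168)    # ~0.99 -- "is allergic to penicillin" survives
pasta = retention("routine", 168)   # ~0.20 -- "had pasta last Tuesday" fades
```

The point is that decay alone is not enough: without an importance tier multiplying into the schedule, the allergy and the pasta dinner would fade at the same rate.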
widemem is local-first by default. SQLite plus FAISS, no external services, no phone home. Swap in Qdrant when you need horizontal scale. Swap in OpenAI, Anthropic, or Ollama for the LLM side. The library gets out of your way.
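The swap-a-backend design can be sketched with a small structural interface: callers program against the interface, and moving from a local index to a hosted one means replacing a single object. This is a generic illustration (the class and method names here are invented, and the brute-force store merely stands in for FAISS), not widemem's real API:

```python
from typing import Protocol

class VectorStore(Protocol):
    def add(self, key: str, vec: list[float]) -> None: ...
    def search(self, vec: list[float], k: int) -> list[str]: ...

# Local-first default: a brute-force in-memory store standing in for FAISS.
class LocalStore:
    def __init__(self):
        self._items: dict[str, list[float]] = {}

    def add(self, key, vec):
        self._items[key] = vec

    def search(self, vec, k):
        def dist(v):
            # Squared Euclidean distance to the query vector.
            return sum((a - b) ** 2 for a, b in zip(vec, v))
        return sorted(self._items, key=lambda name: dist(self._items[name]))[:k]

# Swapping in a Qdrant-backed store would mean replacing this one object;
# code that calls add() and search() never changes.
store: VectorStore = LocalStore()
store.add("allergy", [1.0, 0.0])
store.add("pasta", [0.0, 1.0])
store.search([0.9, 0.1], k=1)  # ["allergy"]
```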
How we got here
From spotting the goldfish problem to v1.4 in roughly fourteen months. The shape of the journey:
Spotting the goldfish problem
AI agents kept forgetting everything between sessions. Context windows got bigger, but nothing actually persisted across runs. The pain was minor for prototypes and a dealbreaker for anything in production.
Hacks and workarounds
CLAUDE.md files. Hand-curated prompt prefixes. Markdown notes I forgot to update. Each fix bought a week before the same problem came back.
Reviewing the existing landscape
Looked at Mem0, Zep, Letta, LangMem, Cognee, A-Mem. Each had good ideas. None did all of: local-first by default, importance-aware so critical facts survive decay, YMYL-safe for health and financial data, and auditable end to end.
widemem v0.1 — first release
Importance scoring. Temporal decay. Batch conflict resolution. SQLite plus FAISS. Local-first by default. Private repo, scratch-my-own-itch energy.
Functional solution in place
Hierarchical memory (facts → summaries → themes). YMYL prioritization. Active retrieval and contradiction detection. Audit trail. Confidence-aware retrieval. Three retrieval modes.
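Confidence-aware retrieval, in spirit: rather than always returning the nearest match, the retriever abstains when no candidate clears a similarity threshold. A minimal sketch of the idea only; the threshold and the scored-candidate shape are assumptions, not widemem's actual values or API:

```python
def answer(scored_memories, threshold=0.6):
    """Return the best-scoring memory, or None to signal 'no answer'.

    scored_memories is a list of (text, similarity) pairs, similarity in [0, 1].
    """
    best = max(scored_memories, key=lambda pair: pair[1], default=None)
    if best is None or best[1] < threshold:
        return None  # abstain instead of hallucinating a weak match
    return best[0]

answer([("penicillin allergy", 0.91), ("pasta dinner", 0.40)])  # "penicillin allergy"
answer([("pasta dinner", 0.40)])                                # None: no confident match
```

Returning an explicit "no answer" is what lets the calling agent say "I don't know" instead of surfacing the least-wrong memory.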

v1.4 official release
34-hour LoCoMo benchmark run vs Mem0, Zep, LangMem, A-Mem, and full-context. Wins on multi-hop reasoning and token efficiency. Honest about losses on single-hop and temporal questions. Open-sourced under Apache 2.0.
Where this is going
Short term: a published LongMemEval pass on v1.4 (the v1.3 LoCoMo numbers are already on /benchmarks), a tested Docker image, a TypeScript client, and a set of deployment guides for self-hosting on regulated infrastructure. Longer term: enterprise support contracts for teams running widemem in production, and the tooling those teams actually need (observability hooks, backup/restore, migration tools from Mem0 and Zep).
The library itself stays Apache 2.0. That is not a marketing stance. The only way a memory layer earns trust with health and financial data is if you can read every line of it.
How to help
The most useful thing you can do is try it, break it, and tell me what broke. The second most useful thing is to star the repo on GitHub so other developers can find it. The third, if you are running an AI product where “memory” is currently a CLAUDE.md file and a prayer, is to get in touch via the enterprise page. Paid pilots fund the roadmap.
Where to find things
Built in the open under eyepaq.com. Code at github.com/remete618/widemem-ai. Writing in the blog. Vulnerability disclosure and data practices on the security page.