On giving local agents memory, identity, and the ability to forget.
I've spent years researching how we learn. Not in a lab, more like a slow, personal deep-dive into neuroscience, cognitive psychology, complex systems theory, and how all of that connects to practical things like skill acquisition, habit change, and expertise. It's a genuine obsession. And over the past year, as I started working more seriously with AI agents (using them daily for work and building experimental projects like macsdk, Mystery agents, and others), a question kept nagging at me: could any of these ideas about human learning be applied to make agents better?
Not "smarter model" better. That's Anthropic's and OpenAI's problem. I mean better in a more fundamental way: could we make agents that actually learn from experience? That accumulate useful knowledge the w