As a product manager, I find the most valuable takeaways are often new mental models for thinking about a problem. For a while, the default mental model for AI has been the single conversational agent. But as we task these systems with more complex, multi-step work, that model starts to feel limiting.
This shift toward building systems that can “reason” has felt like a flashback for me. Before I joined the security industry, my education was in psychology, with a focus on cognition: the science of “thinking about thinking.” So when I started seeing the need for more structured AI reasoning, I found myself diving deep into what I call “agentic archetypes,” a way to logically segment how these models “think” and act. That segmentation lets us build more robust, predictable solutions while staying mindful of our natural tendency to anthropomorphize everything.