I wasn't expecting to get so excited reading an article about an AI research paper. But that's exactly what happened when I dug into this MIT article about Anthropic's latest findings on Claude.
I've had a long-standing fascination with the biological sciences. My background is in exercise physiology, which involved years of studying anatomy, physiology, biomechanics, motor learning, and how the human body and brain adapt to stressors. While I'm no longer in the field, I satisfy my fascination with these subjects by reading books on related topics and "biohacking" my body through fitness and longevity practices.
Three years ago, on Bill Gates's recommendation, I read A Thousand Brains, a captivating book by PalmPilot inventor and neuroscientist Jeff Hawkins. It redefined how I think about thinking and the brain. In it, Hawkins describes how the neocortex uses reference frames to model the world. Anthropic's research revealed that a seemingly analogous system spontaneously emerged in Claude, a large language model.
Reference Frames and Why They Matter
In Hawkins's theory, the brain's neocortex isn't just one monolithic structure. It comprises thousands of smaller units (cortical columns), each forming a model of an object, location, or concept based on sensory input and movement. The structured maps these models are organized around are called reference frames.
They help us understand what we're looking at, where it is, how it relates to other things, and how it changes when we move around or interact. It's why you can picture your coffee cup from above and then shift to imagining what it looks like from the side. You're switching reference frames automatically.
This distributed, flexible way of thinking is core to how humans make sense of complexity.
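To make the idea concrete, here's a loose numerical analogy of my own (not something from Hawkins or Anthropic): switching reference frames is a bit like re-expressing the same object in a different coordinate system. The toy "coffee cup" and observer viewpoints below are purely illustrative.

```python
# A loose analogy (my own illustration): "switching reference frames" as
# re-expressing the same object in a new coordinate system.
import numpy as np

# A crude "coffee cup": a few 3D points on its rim, in the cup's own frame.
cup_rim = np.array([
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [-1.0, 0.0, 1.0],
    [0.0, -1.0, 1.0],
])

def view_from(points, rotation, translation):
    # Re-express the points in an observer's frame: rotate, then shift.
    return points @ rotation.T + translation

# "From above": no rotation, observer sits five units up the z-axis.
top_view = view_from(cup_rim, np.eye(3), np.array([0.0, 0.0, -5.0]))

# "From the side": rotate 90 degrees about the x-axis, then shift.
theta = np.pi / 2
side_rotation = np.array([
    [1.0, 0.0, 0.0],
    [0.0, np.cos(theta), -np.sin(theta)],
    [0.0, np.sin(theta), np.cos(theta)],
])
side_view = view_from(cup_rim, side_rotation, np.array([0.0, 0.0, -5.0]))

print(top_view)   # the same cup, described relative to an overhead viewpoint
print(side_view)  # the same cup, described relative to a side-on viewpoint
```

The cup never changes; only the frame it's described in does. Hawkins's claim is that cortical columns are constantly doing something like this, effortlessly and in parallel.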
Claude Is Doing the Same Thing—Without Being Taught To
In a recent study, Anthropic researchers opened up the hood on Claude and traced how it thinks. Using mechanistic interpretability, they tracked the model's internal activations. They discovered something startling: Claude was building internal structures that function almost exactly like cortical columns and reference frames in the human brain.
It wasn't just following the flow of a prompt. It was holding on to roles, spatial relationships, and dynamic viewpoints as if internally mapping the world in a brain-like way.
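To make "tracking the model's internal activations" a little more concrete, here is a minimal sketch of the general idea using an open model (GPT-2 from Hugging Face) and PyTorch forward hooks. This is my own simplified illustration of activation capture, not Anthropic's actual interpretability tooling, which traces features and circuits in Claude with far more sophisticated methods.

```python
# Minimal sketch: capture per-layer activations from an open transformer.
# Illustrative only; real mechanistic interpretability goes much further.
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

activations = {}

def make_hook(name):
    # Record the hidden states produced by one transformer block.
    def hook(module, inputs, output):
        activations[name] = output[0].detach()
    return hook

# Attach a hook to every block so we can see what the model represents
# internally at each layer as it reads a prompt.
for i, block in enumerate(model.h):
    block.register_forward_hook(make_hook(f"block_{i}"))

prompt = "The coffee cup sits on the desk, and from above it looks"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    model(**inputs)

# Each entry has shape (batch, sequence_length, hidden_size); researchers
# probe these vectors for interpretable structure such as roles,
# relationships, and planned continuations.
for name, act in activations.items():
    print(name, tuple(act.shape))
```

Anthropic's researchers analyze the equivalent internal states in Claude, which is how they can see structure like role-tracking and forward planning rather than only the model's final output.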
One experiment, for instance, showed Claude preparing a rhyming word before the rest of the sentence, a sign that it was planning and comparing multiple possible completions internally, not just reacting to the last word. That's predictive modeling, Hawkins-style.
Claude's creators didn't program this. It emerged on its own.
Why This Excites Me
When I studied physiology, I was fascinated by the elegance of how the body solves problems through distributed intelligence, how muscle memory works, how neural pathways adapt, and how the system organizes itself.
Years later, reading A Thousand Brains felt like a mirror to that fascination, but aimed at the brain. Seeing those same patterns echo in artificial intelligence fills me with wonder. If AI models like Claude are independently arriving at the very mechanisms the brain uses (reference frames, distributed modeling, spatial reasoning), it raises a radical possibility:
There may be universal laws of intelligence.
Perhaps this isn't just a fluke of biology. Maybe it's something more profound, something structural, something inevitable, written into the code of reality.
Final Thought
The more we learn about how intelligence works, whether through Hawkins's brain science or Anthropic's model internals, the more obvious it becomes: We're not just building machines. We're uncovering the patterns of cognition itself, across biology and code. And if we stay curious and keep looking under the hood, we might discover that intelligence isn't just a trait of living things but a fundamental structure of reality.