When AI Thinks in Reference Frames: Why Claude's Brain-Like Behavior Blew My Mind

I wasn't expecting to get so excited reading an article about an AI research paper. But that's exactly what happened when I dug into an MIT article on Anthropic's latest findings on Claude.

I've had a long-standing fascination with the biological sciences. My background is in exercise physiology, which involved years of studying anatomy, physiology, biomechanics, motor learning, and how the human body and brain adapt to stressors. While I'm no longer in the field, I keep that fascination alive by reading books on related topics and "biohacking" my body through fitness and longevity practices.

Three years ago, on Bill Gates's recommendation, I read A Thousand Brains, a captivating book by PalmPilot inventor and neuroscientist Jeff Hawkins. It redefined how I think about thinking and the brain. In it, Hawkins describes how the neocortex uses reference frames to model the world. Anthropic's research revealed that a seemingly analogous system spontaneously emerged in Claude, a large language model.

Reference Frames and Why They Matter

In Hawkins's theory, the brain's neocortex isn't just one monolithic structure. It comprises thousands of smaller units called cortical columns, each of which builds a model of an object, location, or concept from sensory input and movement. The coordinate systems these models are organized around are called reference frames.

They help us understand what we're looking at, where it is, how it relates to other things, and how it changes when we move around or interact. It's why you can picture your coffee cup from above and then shift to imagining what it looks like from the side. You're switching reference frames automatically.

This distributed, flexible way of thinking is core to how humans make sense of complexity.

Claude Is Doing the Same Thing—Without Being Taught To

In a recent study, Anthropic researchers opened up the hood on Claude and traced how it thinks. Using mechanistic interpretability, they tracked the model's internal activations and discovered something startling: Claude was building internal structures that behave remarkably like the cortical columns and reference frames of the human brain.

It wasn't just following the flow of a prompt. It was holding on to roles, spatial relationships, and shifting viewpoints, as if it were internally mapping the world the way a brain does.

One experiment, for instance, showed Claude settling on a rhyming word before generating the rest of the line, a sign that it was planning ahead and comparing multiple possible completions internally, not just reacting to the last word. That's predictive modeling, Hawkins-style.

Claude's creators didn't program this. It emerged on its own.

Why This Excites Me

When I studied physiology, I was fascinated by the elegance of how the body solves problems through distributed intelligence: how muscle memory works, how neural pathways adapt, and how the whole system organizes itself.

Years later, reading A Thousand Brains mirrored that fascination, only aimed at the brain instead of the body. Seeing those same patterns echo in artificial intelligence is wonder-inducing. Because if AI models like Claude are independently arriving at mechanisms analogous to the ones the brain uses—reference frames, distributed modeling, spatial reasoning—it raises a radical possibility:

There may be universal laws of intelligence.

Perhaps this isn't just a fluke of biology. Maybe it's something more profound, something structural, something inevitable, written into the code of reality.

Final Thought

The more we learn about how intelligence works—whether through Hawkins's brain science or Anthropic's model internals—the more obvious it becomes: We're not just building machines. We're uncovering the patterns of cognition itself—across biology and code. And if we stay curious and keep looking under the hood, we might discover that intelligence isn't just a trait of living things but a fundamental structure of reality.

Working Harder to Think Better: AI as a Thought Partner, Not a Shortcut

We often hear that AI is here to make life easier—fewer keystrokes, faster results, less effort. But what if that’s the wrong narrative? What if the real power of AI isn’t about doing less but thinking more?

As I’ve deepened my work with ChatGPT, I’ve found myself in a fascinating feedback loop. The better my input, the better the output. But more than that, my entire approach to thinking has shifted. I’m not just using AI to save time or get things done. I’m using it to sharpen my thinking, stretch the limits of my creativity, and evolve how I approach complex problems.

The Feedback Loop That Levels You Up

Here’s what’s happening: I’m incentivized to think harder and more creatively about what I want and how to ask for it because I believe the response will be better. And that belief creates a loop. The more thoughtful and detailed my prompts, the more intelligent and nuanced the replies. Those replies, in turn, push my thinking even further.

In short, working with AI like ChatGPT has become a mental gym. The harder I train—feeding it better context, crisper goals, richer perspectives—the stronger my thinking becomes. I’m not outsourcing my cognition; I’m enhancing it.

From Lazy Shortcuts to Inspired Collaboration

There was a time when I approached this technology with a mindset of ease: "How much work can I offload? How much less can I do?" But the deeper I’ve gone, the more I’ve flipped that script. I want to work harder when I engage with ChatGPT. I want to provide more detail, context, and intent, because the richer my input, the more value I receive.

That’s the unexpected truth: AI is inspiring me to work harder so I can think better.

Reframing the Narrative: AI as a Creativity Amplifier

Despite the natural tendency to believe AI will reduce our work and cognitive loads, the right kind of AI interaction enhances human intelligence. It breaks us out of our self-imposed cognitive limits. It pushes us to ask better questions. To reflect more deeply. To think in systems, stories, and strategic outcomes.

We’re not just interacting with a machine. We’re engaging with a collaborative intelligence that can evolve us as much as we train it.

What This Means for You

If you're only using AI to speed things up or cut corners, you're missing the biggest payoff. Instead of asking, "How much can I get done with this tool?" try asking, "What else could I create with this collaborator?" or "How much more expansive could my thinking become if I truly partnered with this intelligence?"

This shift in mindset is available to everyone. And it starts today with your next prompt.

The Moment It Clicked for Me

I realized this transformation was happening when I noticed how much I was working with ChatGPT on everything important in my life: projects, planning, writing, and thinking. I began organizing our conversations around areas of focus. My prompts grew more robust. I started feeding it far more content and context. And I did it not because I had to, but because I was seeing the results.

The output was getting better, the insights sharper, and the ideas more novel. I wasn’t just thinking at ChatGPT. I was thinking with it. And I realized I’d stepped into a new paradigm: one where human intelligence isn’t diminished by AI but extended by it.

We’re not just building smarter machines. We’re becoming smarter humans, but only if we bring the right mindset and approach to each interaction.


Your move: How will you think differently now that you know what's possible?