Quantum Hand Through My Eyes by Jason Padgett. Used with permission from the artist.
In 2002, a 31-year-old furniture salesman from Tacoma, Washington, found himself on the threshold of an unimaginable transformation. Leaving a karaoke bar one night, he was ambushed by two men who struck him repeatedly on the head, leaving him sprawled unconscious on the pavement. When he came to, battered and disoriented, he managed to pull himself up and stagger to a nearby hospital.
In the following days and weeks, Jason Padgett began to perceive the world in an entirely new way. Swirling water and drifting clouds appeared as intricate lines and grids. He began to see everything as a canvas of mathematical structure, layered in fractals and complex geometric shapes.
Although he had no formal background in mathematics, Padgett had developed an intuitive grasp of advanced concepts. He meticulously sketched these shapes by hand. In the wake of his brain injuries, he had emerged as a math savant, equipped with a rare, almost mystical insight into the hidden architecture of nature, space, and time.
Brain injuries like Padgett’s expose one of nature’s secrets — the brain is designed to separate certain parts and keep them quiet. But when those boundaries break, when normal routes are closed off, the brain adapts. It forges new paths, allowing areas that usually work alone to join forces. The result can be astonishing, with abilities that seem almost supernatural. It’s a testament to the brain’s raw resilience and power to adapt and survive.
This dampening and separation, called inhibition, is a defining feature of human brains. Researcher Iain McGilchrist suggests it provides the “necessary distance” to step back from the world and think before we act. By suppressing neural circuits, it acts as a resource manager to reduce energy consumption and noise, provide signal clarity, and allow key areas to execute in complex sequences.
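To make inhibition concrete, here is a minimal Python sketch of a k-winners-take-all step, a common stand-in for inhibitory circuits in brain-inspired models. The unit count and the value of k are invented for illustration:

```python
import numpy as np

def k_winners_take_all(activations: np.ndarray, k: int) -> np.ndarray:
    """Inhibition as a resource manager: keep only the k strongest
    responses and silence the rest, which sparsifies activity and
    sharpens the signal."""
    inhibited = np.zeros_like(activations)
    winners = np.argsort(activations)[-k:]     # indices of the k strongest units
    inhibited[winners] = activations[winners]  # every other unit stays quiet
    return inhibited

rng = np.random.default_rng(0)
raw = rng.random(1000)                  # 1,000 units all responding at once
sparse = k_winners_take_all(raw, k=20)  # inhibition leaves only 2% active
print(f"active before: {np.count_nonzero(raw)}, after: {np.count_nonzero(sparse)}")
```

Silencing 98% of the units is the point: fewer active units means less energy spent and less noise drowning out the signal.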
Current AI models, including the one behind ChatGPT, lack inhibition. They do not suppress activity and keep processes apart as the brain does; rather, they generate outputs from probabilistic predictions over vast amounts of secondary data scraped from external sources. This makes them power-hungry, clunky, and unable to learn independently.
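For contrast, here is the probabilistic core of a Transformer-style model at inference time, reduced to a toy. The vocabulary and scores below are made up; in a real model, billions of parameters compute the scores, but the final step is still sampling from a probability distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

def next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Turn raw scores over a vocabulary into probabilities (softmax)
    and sample one token: prediction, not understanding."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

vocab = ["the", "brain", "model", "learns", "."]
logits = np.array([1.2, 0.4, 2.1, 0.3, -0.5])  # hypothetical scores from a trained model
print(vocab[next_token(logits)])  # most often prints "model"
```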
But new brain-based AI models now emulate certain aspects of inhibition. They are far more energy efficient and can learn from primary data collected directly from the real world. They don’t need to be retrained or constantly overseen by humans; they adapt on the fly.
As the costs to train and run AI models continue to skyrocket, these new models are increasingly viewed as a less expensive, viable alternative. But what will guide their behavior and decisions as they become more powerful? How will we control them? Can we control them?
The AI Energy Problem
The energy-intensive Transformer model dominates the AI landscape, commanding nearly all major investments in generative AI to date. Leading players such as OpenAI, Google’s DeepMind, and Anthropic have poured resources into advancing this architecture, betting on its unique capacity to handle complex tasks. Google, for instance, has doubled down on its commitment to Transformers, asserting that with enough reasoning layers, these models could, in theory, solve almost any problem. This vote of confidence underscores the sweeping ambitions behind Transformer technology — and the belief that it will unlock the full potential of AI.
The remarkable surge of investment in Transformer models can be traced to three pivotal forces shaping the tech landscape. First, Transformers have proven uniquely capable of handling vast volumes of “secondary” data — scraped text, images, and videos — making them indispensable for high-stakes tasks such as language modeling and visual processing.
Second, the influx of capital into AI was indirectly fueled by over a decade of central bank stimulus, which left venture capital firms and corporations flush with funds and reaching for yield. In a market that has lacked true innovation for years, these funds naturally gravitated toward AI, with Transformers at the forefront.
Finally, the software industry saw a major shift in 2023, as SaaS growth stagnated amid tighter budgets and slower sales cycles. In this more cautious environment, companies have turned to automation, eager to capitalize on AI models that promise not only productivity boosts but also a direct replacement of human labor. As Sarah Tavel points out in her blog, “AI Startups: Sell Work, Not Software,” the future lies not in traditional software seats but in AI that effectively takes a “seat” itself — redefining roles once thought essential.
However, cracks are starting to appear in the Transformer model’s foundation. Its insatiable energy demand is raising red flags. By some estimates, data centers could draw up to 21% of the world’s electricity supply by 2030. And tech giants such as Apple are signaling caution, with recent research suggesting that large language models may be approaching a performance ceiling. This tension has tech leaders exploring alternative energy sources — nuclear, even fusion — and seeking advancements in model efficiency, quantization, and speed. Even quantum computing has entered the conversation as a possible solution.
As the limitations of Transformer models grow more evident and pressing, the market stands poised to embrace sustainable, efficient alternatives.
What Brain-Based AI Solves
Brain-based models address three core issues with current AI models:
Energy Efficiency: Brain-based models are estimated to be 100 to 1,000 times more energy-efficient, employing decentralized, sparse processing.
Use of Primary Data: Unlike Transformers, brain-based models can learn from direct, real-time interactions. This enables them to build richer, context-aware understandings of their environment — a leap forward from the static datasets Transformers depend on.
Self-Learning: Grounded in primary data, brain-inspired models can self-learn and adapt without needing constant retraining. Transformers, in contrast, rely on proxy scores and self-generated corrections, limiting their understanding to patterns rather than meaning. Once a system can truly self-learn, that process can be made recursive, which could quickly lead to extremely advanced AI.
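As a sketch of what self-learning means mechanically, the toy below updates its weights from every observation in a stream, with no separate training phase, and keeps tracking even when the underlying rule changes mid-stream. The perceptron rule here is a deliberately simple stand-in for the richer learning rules brain-inspired models use, and the drifting rule is invented:

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.zeros(3)  # weights adapt continuously; there is no frozen training set
lr = 0.1

# a stream of "primary" observations whose underlying rule drifts mid-stream
for t in range(5000):
    x = np.append(rng.normal(size=2), 1.0)  # two features plus a bias term
    threshold = 0.0 if t < 2500 else 1.0    # the world changes at t = 2500
    label = int(x[0] + x[1] > threshold)    # ground truth from the environment
    pred = int(w @ x > 0)                   # act on current beliefs
    w += lr * (label - pred) * x            # adapt on the fly from the error

print("final weights:", w)  # the bias has shifted to track the new rule
```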
The Secret To Brain Efficiency
Hold your thumb out at arm’s length. That thumbnail is the only part of your visual field where your eyes can focus sharply, courtesy of the fovea — the tiny region of the retina dedicated to central vision. But the brain isn’t content with such a limited view, so it improvises. Your eyes constantly flicker with quick, subtle movements called saccades, which scan the scene in fragments.
Your visual field spans roughly 210 degrees horizontally and 150 degrees vertically. The outer regions of your retina are packed with rod cells, which detect light, motion, and contrast, especially in low-light conditions, forming the basis of peripheral vision. Cone cells, concentrated in the fovea but also scattered across the retina, provide color vision and fine detail.
The real magic unfolds in the brain’s visual cortex, nestled in the occipital lobe. Here, fragments of visual data are stitched together to form a smooth, stable image. In essence, visual perception is a clever series of approximations drawn from imperfect inputs, yet it delivers a remarkably seamless experience and uses very little energy.
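A toy model makes the strategy plain: sample a small high-resolution patch at each fixation, integrate the patches over time, and tolerate approximation everywhere else. The scene size, fovea size, and fixation count below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
scene = rng.random((100, 100))               # the world: a 100x100 "image"
percept = np.full_like(scene, scene.mean())  # a coarse peripheral guess

def saccade(percept, scene, fovea=5):
    """Fixate a random point and copy in only a small high-resolution
    patch; the rest of the percept remains an approximation."""
    y, x = rng.integers(fovea, 100 - fovea, size=2)
    patch = (slice(y - fovea, y + fovea), slice(x - fovea, x + fovea))
    percept[patch] = scene[patch]
    return percept

for _ in range(30):  # a fraction of a second of eye movements
    percept = saccade(percept, scene)

print(f"mean error after 30 fixations: {np.abs(percept - scene).mean():.3f}")
```

Each fixation samples only 1% of the scene at full resolution, yet the running percept steadily converges on the world, which is roughly the bargain the visual system strikes.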
How human vision works is, in effect, an introductory course in how new brain-based AI models function.
How Brain-Based Models Work
In 1992, Jeff Hawkins did what many PhD rejects dream of — he founded Palm, Inc., the company that would revolutionize mobile computing with the PalmPilot. But while the world was captivated by his sleek, pocket-sized device, Hawkins’ mind was on something far deeper: the neocortex.
If unfolded, the human neocortex would cover an area roughly the size of a large cloth dinner napkin. This thin, wrinkled outer layer of the brain governs higher-order functions like sensory perception, complex cognition, and motor control. At just 2.5 to 4 millimeters thick — roughly the thickness of two stacked pennies — it holds around 16 billion neurons and makes up about 75% of the brain’s volume.
Hawkins had a theory, rooted in the PhD thesis proposal Berkeley rejected in 1986, that the neocortex can be modeled with a single, uniform mathematical framework. He stayed in the background for decades, eventually pursuing the research at his company Numenta. Then, in 2021, he published A Thousand Brains: A New Theory of Intelligence, laying out his neocortex-based AI model.
The Thousand Brains Theory of Intelligence represents a bold departure from today’s mainstream AI frameworks. Rooted in the concept of Hierarchical Temporal Memory, this model belongs to a burgeoning class of brain-inspired approaches that aspire to mimic the human brain’s inherent efficiency. Alongside neuromorphic computing, spiking neural networks, liquid state machines, and deep predictive coding networks, Thousand Brains points toward a future where AI doesn’t merely process data but learns to operate with biological efficiency.
Hawkins’ theory posits that human brains learn through neurosensory mapping by building three-dimensional models of reality from sensory inputs. He suggests that, unlike other brain regions, the neocortex is structurally uniform; areas responsible for vision, touch, and language look remarkably similar. This led him to propose a universal algorithm underlying all neocortical functions, centered around the cortical column — a vertically organized structure of six neuron layers, each specialized for distinct processing tasks. Humans possess around 150,000 to 200,000 of these columns, each with roughly 80,000 to 120,000 neurons.
Cortical columns in rats — source: Wikimedia Commons
Hawkins’ theory sees each cortical column as a small, independent processor, capable of recognizing patterns and sequences. These columns work together, each identifying patterns at various levels, but they communicate selectively — one column connects with only a few others, not all. This decentralized approach allows the brain to build a cohesive view without a central control point or a Cartesian map. Instead, each column independently processes sensory data through predictive coding, collectively creating our perception of the world.
Imagine holding a baseball. Of your roughly 150,000 cortical columns, only about 2% “fire,” each contributing a fragment of sensation — touch, sight, sound. Together, these columns “vote” on what they’re perceiving, weaving a collective impression: the lived experience of a baseball.
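Here is a toy sketch of that voting step (not Numenta’s actual algorithm; the object list and per-column accuracy are invented): a small fraction of columns activate, each makes a noisy local guess, and a simple tally yields a stable consensus:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)
OBJECTS = ["baseball", "orange", "coffee cup"]
N_COLUMNS = 150_000               # within the range cited for the human neocortex
n_active = int(0.02 * N_COLUMNS)  # inhibition leaves ~2% of columns firing

def column_vote(truth: str = "baseball", accuracy: float = 0.6) -> str:
    """Each column senses only a fragment (a seam under one fingertip,
    a patch of white), so its individual guess is noisy."""
    if rng.random() < accuracy:
        return truth
    return str(rng.choice([o for o in OBJECTS if o != truth]))

tally = Counter(column_vote() for _ in range(n_active))
consensus, _ = tally.most_common(1)[0]
print(dict(tally), "->", consensus)
```

No single column ever perceives the whole baseball; the stable percept is the tally itself, which is why the theory needs no central control point.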
As brain-based models evolve and improve, the AI infrastructure today swells with ever-greater investment. The question looms: how much is too much?
Is AI In a Bubble?
Investment in AI has become strikingly lopsided, heavily favoring semiconductors and infrastructure over the development of actual applications. NVIDIA’s data-center business alone generates an astounding $105 billion in annualized revenue, while prominent AI applications from OpenAI, Anthropic, and Midjourney together bring in a comparatively modest $5 billion. Training a model like GPT-4 costs around $80 million, but projections for GPT-5 run an order of magnitude higher, with some estimates as high as $2.5 billion. This has prompted some to question whether we are amid an AI — or “Transformer” — bubble.
We’ve witnessed this kind of infrastructure boom before. Back in the late 1990s and early 2000s, companies poured resources into preparing for an anticipated explosion in internet traffic, with some projections forecasting data demand to soar by up to 1,000 percent per year.
As a result, Level 3 Communications, Global Crossing, Qwest Communications, and WorldCom rushed to lay vast networks of fiber-optic cable, anticipating demand that ultimately fell short. By 2004, only about 5 percent of this infrastructure was in use. WorldCom spent more than $30 billion (around $56 billion today) on acquisitions and network build-outs, only to collapse under an accounting scandal in 2002. Global Crossing, after investing $22 billion (about $41 billion today) in fiber, met a similar fate, declaring bankruptcy that same year under the weight of overcapacity and crushing debt. Level 3 and Qwest managed to survive, but their stock prices plummeted by as much as 90 percent, weighed down by debt from their overbuilt networks.
Whether AI is in a bubble depends partly on how quickly more efficient models become commercially viable and whether AI applications can deliver on their promises. If brain-inspired models reach widespread commercial use in the next 5–10 years, much of today’s Transformer-centered buildout will prove overbuilt, and we are certainly in a bubble. But how we design these systems is a far more important question.
Controlling Brain-based AI: The Physical & Philosophical Divide
Joscha Bach and other modern AI reductionists, such as Geoffrey Hinton and Meta’s Yann LeCun, argue that intelligence can be created on any viable substrate by constructing mathematical models that break complex phenomena into manageable pieces. They believe that holistic, emergent properties can arise from sufficiently complex systems and that this approach is a practical way to approximate and simulate intelligent behavior.
Others propose a more holistic approach to design, arguing that reductionist models will eventually run into the limits Gödel exposed in his incompleteness theorems, limits that Pascal sensed centuries before him and that John von Neumann, Bohr, Poincaré, Russell, Wittgenstein, Hume, and Kant each described in their own ways: rationality has hard limits. They suggest that AI be built to include a bigger-picture perspective that guides rationality, because a reductionist approach could lead to unpredictable — and unwanted — outcomes.
Ironically — or perhaps through design — these two competing philosophical views mirror the physical separation within human brains. One side of the brain embodies pure reductionism and rationality. The other is a holistic observer.
At first glance, this divide might seem inconsequential. Humans, after all, thrive on debate, and in the end, one might assume that the most practical design will prevail in the marketplace. But, as you might have experienced already, AI output is notoriously difficult to quantify. As artificial intelligence grows increasingly complex, assessing accuracy and bias becomes more elusive.
The philosophical approach is critical because it will guide systems smarter than we are. The direction we choose has profound consequences; if it is inconsistent with reality, the results could be dire. Before delving into how these perspectives will shape the future of AI, let us first examine how each hemisphere of the human brain perceives the world and how that came to be, a question pivotal to the future of AI design.
[1] D. Eagleman, Livewired (2020), Pantheon Books.
[2] J. Hawkins, A Thousand Brains (2021), Basic Books.
[3] J. Hawkins, On Intelligence (2004), Times Books.
[4] S. Wolfram, What is ChatGPT Doing and How Does it Work? (2023), Stephen Wolfram’s official blog.
Daniel Sexton is a Partner at PW Holdings focused on bridging the gap between startups and corporations. For executives, founders, investors, and AI enthusiasts ready to engage at the frontier, connect with him here. Explore more insights at RightBrainCapitalist.