A Crypto Pioneer’s Bold Bet on Brain-Inspired Artificial Intelligence
The Vision Behind a Billion-Dollar Investment
In an era where artificial intelligence headlines dominate technology news, Jed McCaleb, one of cryptocurrency’s most influential pioneers, has emerged with a plan that could reshape how we think about AI development. McCaleb isn’t just another tech billionaire making promises—he’s someone who has consistently delivered groundbreaking innovations, from creating the Mt. Gox exchange (which later became infamous but was revolutionary in its early days) to co-founding Ripple and Stellar. Now, he’s turning his attention and substantial resources toward what many consider the holy grail of technology: artificial general intelligence (AGI). His approach, however, differs fundamentally from the path being pursued by companies like OpenAI, Google, and Anthropic. Rather than simply scaling up existing AI architectures with more data and computing power, McCaleb believes the answer lies in understanding and replicating the most sophisticated intelligence system we know—the human brain.
McCaleb’s commitment is staggering in its scope: $1 billion from his approximately $3.9 billion cryptocurrency fortune will be directed specifically toward developing an AGI system that mirrors how the human brain actually works. This isn’t pocket change or a tentative exploration—it represents roughly a quarter of his entire wealth, demonstrating an extraordinary level of conviction in this vision. The investment will flow through the Astera Institute, an organization McCaleb himself established to pursue ambitious scientific endeavors that might be too risky or long-term for traditional venture capital or even established research institutions. What makes this announcement particularly significant is its timing and scope. While much of the AI industry races to refine and expand upon existing large language models and transformer-based architectures, McCaleb is essentially betting that there’s a better path forward, one that requires us to look inward at our own biological intelligence rather than simply making incremental improvements to current technology.
A Multidisciplinary Approach: Where Neuroscience Meets AI
What distinguishes McCaleb’s initiative from typical AI investments is his recognition that building truly intelligent machines requires more than just computer science expertise. In addition to the $1 billion earmarked for AGI development, he has pledged another $600 million specifically for neuroscience research. This additional commitment underscores a fundamental philosophy: you cannot build artificial general intelligence without first understanding natural general intelligence. The human brain, with its roughly 86 billion neurons and trillions of synaptic connections, remains the most powerful and efficient information-processing system we know of, capable of learning from limited examples, adapting to novel situations, and generalizing knowledge across domains—capabilities that current AI systems struggle to match despite their impressive performance on specific tasks.
This multidisciplinary strategy acknowledges that the barriers to AGI aren’t purely technological but also scientific. We still don’t fully understand how consciousness emerges, how the brain consolidates memories, or how it performs certain computational tasks with remarkable efficiency while consuming only about 20 watts of power—roughly equivalent to a dim light bulb. By investing heavily in neuroscience alongside AI development, McCaleb is essentially funding the basic research that could unlock insights applicable to building better artificial systems. This approach requires patience and a tolerance for uncertainty that’s rare in the fast-paced tech industry, where quarterly results and rapid product iterations dominate thinking. The willingness to make such a substantial, long-term bet on fundamental research speaks to McCaleb’s vision and his position as someone whose wealth comes from successful bets on transformative technologies. He’s not answering to impatient shareholders or venture capitalists demanding returns within a typical fund lifecycle; instead, he can pursue what he believes is the right approach, even if it takes decades to bear fruit.
The Research Strategy: From Mice to Humans
The practical implementation of this vision begins at the Astera Institute’s facilities in Emeryville, California, where researchers are planning to use cutting-edge brain-computer interface technologies to record neural activity in mice as they perform specific tasks. This might sound straightforward, but it represents an enormously complex undertaking. Modern neuroscience has developed increasingly sophisticated tools—from multi-electrode arrays that can record from hundreds of neurons simultaneously to advanced imaging techniques that can observe brain activity across entire regions—but translating this data into actionable insights for AI architecture remains a formidable challenge. The mice will be trained to complete various tasks while researchers monitor exactly which neural circuits activate, how different brain regions communicate, and what patterns of activity correspond to different aspects of cognition like perception, decision-making, and motor control.
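To make the data side of this concrete: a very first step in any such pipeline is turning raw spike timestamps from recorded neurons into firing rates and binned activity patterns that can be compared across tasks. The sketch below is purely illustrative — the function names, numbers, and two-neuron "recording" are invented for this example and do not reflect the Astera Institute's actual methods or tooling.

```python
# Toy illustration: converting raw spike timestamps into per-neuron
# firing rates and binned spike counts, the kind of preprocessing any
# neural-recording pipeline starts with. All names and numbers here
# are hypothetical examples, not the institute's real pipeline.

def bin_spike_counts(spike_times, t_start, t_end, bin_width):
    """Count spikes per time bin for one neuron.

    spike_times: spike timestamps in seconds
    returns: list of spike counts, one per bin
    """
    n_bins = int((t_end - t_start) / bin_width)
    counts = [0] * n_bins
    for t in spike_times:
        if t_start <= t < t_end:
            counts[int((t - t_start) / bin_width)] += 1
    return counts

def firing_rate_hz(spike_times, t_start, t_end):
    """Mean firing rate (spikes per second) over the window."""
    n = sum(1 for t in spike_times if t_start <= t < t_end)
    return n / (t_end - t_start)

# Two hypothetical neurons recorded over a 2-second trial.
neuron_a = [0.05, 0.30, 0.31, 0.90, 1.10, 1.55, 1.80]
neuron_b = [0.20, 0.85, 1.40]

print(firing_rate_hz(neuron_a, 0.0, 2.0))          # 3.5 Hz
print(bin_spike_counts(neuron_a, 0.0, 2.0, 0.5))   # [3, 1, 1, 2]
```

Scaling this from two simulated neurons to hundreds of real ones recorded simultaneously — and then linking those activity patterns to behavior — is where the genuine scientific difficulty described above begins.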
The data collected from these experiments won’t simply inform theoretical understanding—the Astera Institute plans to directly transform these biological insights into a next-generation artificial intelligence architecture. This represents a fundamentally different approach from the one that produced today’s AI systems, which were inspired by neuroscience at a very abstract level (the basic concept of artificial neurons) but have evolved in directions quite distant from biological reality. After establishing methods and gaining insights from mice, the research will expand to include primates and eventually humans. Each step up this ladder of complexity brings researchers closer to understanding the neural mechanisms underlying the sophisticated cognitive abilities that define human intelligence. Primate brains, sharing much of their organizational structure with human brains, offer insights into higher-order cognitive functions, while direct study of human neural activity (through willing participants, often epilepsy patients with already-implanted electrodes for medical purposes) provides the ultimate reference for the kind of intelligence the project aims to replicate.
Challenging the Transformer Paradigm
McCaleb’s critique of current AI systems centers on the Transformer architecture, the technology that powers models like GPT-4, Claude, and most other state-of-the-art AI systems. While these models have achieved remarkable results—generating coherent text, writing code, analyzing images, and even passing professional examinations—McCaleb argues they’re fundamentally limited. Specifically, he points to their lack of genuine planning abilities, decision-making capabilities that account for long-term consequences, and intrinsic motivation that drives behavior. These systems are essentially sophisticated pattern-matching engines, brilliant at predicting what comes next based on vast amounts of training data, but lacking the kind of goal-directed behavior, causal reasoning, and adaptability that characterizes even relatively simple biological organisms.
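The "predicting what comes next" framing can be made concrete with a deliberately tiny model. Real Transformers learn attention weights over enormous corpora, but even this toy bigram counter — which simply tallies which word follows which — captures the basic shape of next-token prediction. Everything in it (the corpus, the function names) is invented for illustration.

```python
# Minimal illustration of next-token prediction. A real Transformer
# learns this mapping with attention over billions of tokens; this toy
# bigram model just counts observed continuations, which is enough to
# show the "predict what comes next" idea the critique targets.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Map each word to a Counter of the words seen following it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation from training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the brain learns fast the brain adapts the mind wanders"
model = train_bigrams(corpus)
print(predict_next(model, "the"))    # 'brain' (seen twice vs. 'mind' once)
print(predict_next(model, "robot"))  # None: no observed pattern, no output
```

Note what is absent: the model has no goal, no plan, and no notion of consequences — it reproduces statistics of its training data. That gap between statistical continuation and goal-directed behavior is precisely the limitation McCaleb's critique points at, even if modern systems are vastly more sophisticated than a bigram counter.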
This critique has substance. Current AI systems, for all their impressive capabilities, exhibit strange blind spots and limitations. They can write eloquently about topics they’ve been trained on but struggle with novel problems requiring genuine reasoning. They can’t truly plan in the sense of forming hierarchical goals and working backward from desired outcomes. They lack the kind of intrinsic motivation that drives biological learning—the curiosity that makes a child experiment with their environment, or the goal-seeking that allows animals to persist through challenges to obtain rewards. By exploring more biologically grounded approaches, McCaleb believes we can create AI systems that don’t just mimic intelligence superficially but genuinely replicate its underlying mechanisms. Such systems would potentially be more robust, more generalizable, and—perhaps most importantly—more understandable and controllable. When AI systems are built on principles derived from biological intelligence, their behavior may be more predictable and their limitations more transparent, addressing some of the safety concerns that have emerged as AI systems have become more powerful and their decision-making processes more opaque.
The Promise of Understandable and Controllable AI
One of the most compelling aspects of McCaleb’s vision is the emphasis on developing AI that can be understood and controlled. As AI systems have grown more powerful, concerns about their opacity have intensified. Today’s large neural networks function as “black boxes”—even their creators often can’t explain exactly why they produce specific outputs or predict how they’ll behave in novel situations. This unpredictability becomes increasingly concerning as these systems are deployed in high-stakes domains like healthcare, autonomous vehicles, financial systems, and potentially even military applications. An AI system based on principles derived from neuroscience might be more interpretable because we’d understand not just what it does but why—the computational principles underlying its behavior would map to biological mechanisms we can study, test, and comprehend.
Furthermore, systems developed along these lines might offer better pathways for alignment and control. If we build AI based on a deep understanding of how motivation, goal-seeking, and decision-making work in biological systems, we might be better positioned to instill appropriate values and constraints. Rather than trying to control opaque systems through external constraints or hoping that sufficient training data will produce desirable behavior, we could potentially build in motivation structures and decision-making frameworks that are inherently aligned with human values because they’re based on the same fundamental architecture that produces human cognition. This doesn’t mean such systems would automatically be safe—biological intelligence certainly has its flaws and failure modes—but it could provide a more principled foundation for developing AI that serves human interests. The transparency that might come from understanding these systems at a mechanistic level could also help with debugging, improvement, and ensuring that as these systems become more capable, their increased power doesn’t come with increased unpredictability.
A Long-Term Bet on Transformative Science
McCaleb’s commitment represents more than just another investment in AI—it’s a bet on a fundamentally different approach to one of humanity’s most ambitious scientific endeavors. While the mainstream AI industry continues to iterate on existing paradigms, investing in larger models trained on more data with more computing power, McCaleb is funding what amounts to a parallel research program that questions those fundamental assumptions. This kind of contrarian thinking has served him well in the past; his early recognition of blockchain technology’s potential made him a billionaire, and his willingness to pursue visions others dismissed as impractical has repeatedly proven prescient. Whether this bet on brain-inspired AGI will prove equally visionary remains to be seen, but the commitment of such substantial resources ensures that the attempt will be serious and sustained.
The timeline for such research is necessarily long. Understanding the brain well enough to replicate its computational principles in artificial systems isn’t a project measured in months or even a handful of years—it’s likely a decades-long endeavor. However, the potential payoff is correspondingly enormous. Artificial general intelligence, should it be achieved, would represent perhaps the most significant technological development in human history, with implications touching virtually every aspect of society, economy, and human experience. By pursuing a path grounded in neuroscience and biological principles, McCaleb and the Astera Institute are contributing not just to AI development but to our fundamental understanding of intelligence itself. Even if the primary goal takes longer than anticipated, the research will almost certainly yield valuable insights into both neuroscience and artificial intelligence, advancing both fields in ways that benefit subsequent researchers. In an era dominated by short-term thinking and quarterly earnings calls, there’s something refreshing—and potentially transformative—about someone willing to commit billions to answering fundamental questions about the nature of intelligence, regardless of how long that journey might take.