The debate over whether artificial intelligence could ever achieve consciousness has long been stuck between two opposing camps. Now, a provocative new theory published in Neuroscience and Biobehavioral Reviews argues that both sides have been asking the wrong questions entirely—and that building truly conscious machines may require abandoning our fundamental assumptions about what computation actually means.
Researchers Borjan Milinkovic of the Paris-Saclay Institute of Neuroscience and Jaan Aru of the University of Tartu propose what they call “biological computationalism,” a framework suggesting that the brain performs a radically different kind of computation than anything running on silicon chips. Their argument carries significant implications for the future of AI development and the possibility of machine consciousness.

“The traditional computational paradigm is broken—or at least badly mismatched to how real brains operate,” the authors write. “For decades, it has been tempting to assume that brains ‘compute’ in roughly the same way conventional computers do: as if cognition were essentially software, running atop neural hardware. But brains do not resemble von Neumann machines.”
The consciousness debate has traditionally split between computational functionalists, who believe consciousness emerges from the right pattern of information processing regardless of what material implements it, and biological naturalists, who insist that the wet, living tissue of the brain is essential to subjective experience. The new framework charts a middle course, arguing that while consciousness need not be exclusive to carbon-based life, it likely requires a specific style of computation that current AI systems simply cannot perform.

The researchers identify three critical features that distinguish biological computation from its digital counterpart.
First, brain computation is “hybrid,” blending discrete events like neural spikes with continuous processes such as chemical gradients and electric fields. “Continuous fields, ion flows, dendritic integration, local oscillatory coupling, and emergent electromagnetic interactions are not just biological ‘details’ we might safely ignore while extracting an abstract algorithm,” the authors explain. “In our view, these are the computational primitives of the system.”
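The hybrid idea can be seen in miniature in the textbook leaky integrate-and-fire neuron model: a membrane potential that evolves continuously, punctuated by discrete spike events when it crosses a threshold. This sketch is purely illustrative (the model and all parameter values are standard teaching defaults, not taken from the paper), and it deliberately omits the fields, gradients, and chemical dynamics the authors argue are computational primitives in real tissue:

```python
# Minimal leaky integrate-and-fire neuron: continuous membrane
# dynamics plus discrete spike events. Illustrative only; parameters
# are textbook defaults, not values from Milinkovic & Aru.

dt = 0.1          # integration time step (ms)
tau = 10.0        # membrane time constant (ms)
v_rest = -65.0    # resting potential (mV)
v_thresh = -50.0  # spike threshold (mV)
v_reset = -70.0   # post-spike reset potential (mV)
drive = 20.0      # steady depolarizing drive (mV), enough to spike

v = v_rest
spike_times = []
for step in range(int(200 / dt)):  # simulate 200 ms
    # continuous part: leaky integration toward (v_rest + drive)
    v += (dt / tau) * (v_rest - v + drive)
    # discrete part: a threshold crossing emits a spike and resets v
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in 200 ms")
```

Even this toy shows why the discrete/continuous split matters: the spike train alone discards the sub-threshold trajectory, which is exactly the kind of continuous detail the authors argue cannot be abstracted away.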

Second, biological computation is what the researchers term “scale-inseparable.” Unlike software running on a computer, where algorithms can be cleanly distinguished from hardware, the brain’s computations are entangled across multiple levels simultaneously—from individual ion channels to entire brain regions. “The causal story runs through multiple scales at once, from ion channels to dendrites to circuits to whole-brain dynamics, and the levels do not behave like modular layers in a stack.”
Third, the brain operates under severe metabolic constraints that actively shape how it processes information. Though the brain represents just 2% of body mass, it consumes roughly 20% of the body’s energy. This isn’t merely a limitation—it’s a design principle. Research published in Behavioral and Brain Sciences earlier this year reinforced this point, demonstrating that metabolism structures and guides information flow in neural systems in ways that cognitive models have largely ignored.
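The disproportion behind that claim is easy to check. Using the commonly quoted approximations (2% of body mass, 20% of resting energy use; these round figures are the journalistic shorthand above, not measurements from the paper), brain tissue draws roughly twelve times the energy per gram of the rest of the body:

```python
# Back-of-the-envelope check of the brain's metabolic disproportion.
# Figures are the commonly quoted approximations, not data from the paper.

brain_mass_frac = 0.02    # brain ~ 2% of body mass
brain_energy_frac = 0.20  # brain ~ 20% of resting energy consumption

# energy demand per unit mass, relative to the whole-body average
brain_demand = brain_energy_frac / brain_mass_frac                      # 10x average
rest_demand = (1 - brain_energy_frac) / (1 - brain_mass_frac)           # ~0.82x average

ratio = brain_demand / rest_demand
print(f"brain tissue uses ~{ratio:.1f}x the energy per gram of the rest of the body")
```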

“In conventional computing, we can draw a clean line between software and hardware. In brains, there is no such separation of different scales,” the researchers note. “In the brain, everything influences everything else, from ion channels to electric fields to circuits to whole-brain dynamics.”
The framework arrives at a moment of intense speculation about AI consciousness, fueled by the increasingly sophisticated behavior of large language models. A 2023 report surveying consciousness theories found significant academic disagreement about whether computational equivalence—the idea that matching the right computational architecture should produce consciousness—is a valid criterion for attributing awareness to machines.

The new framework suggests that even if AI systems continue advancing in capability, they may never achieve consciousness through current approaches. “Scaling digital AI alone may not be sufficient,” the authors write. “Not because digital systems can’t become more capable, but because capability is only part of the story. The deeper challenge is that we might be optimizing the wrong thing: improving algorithms while leaving the underlying computational ontology untouched.”
Rather than searching for the right program to run, the researchers argue, those interested in artificial consciousness should be searching for the right kind of computing matter—physical systems whose dynamics mirror the hybrid, scale-integrated, energy-constrained processing of biological brains. This might include neuromorphic systems that implement computation through ion flows rather than transistors, or even synthesized biological tissue.
“If we want something like synthetic consciousness,” they conclude, “the problem may not be, ‘What algorithm should we run?’ The problem may be, ‘What kind of physical system must exist for that algorithm to be inseparable from its own dynamics?’”
Sources:
- Milinkovic, B., & Aru, J. (2026). On biological and artificial consciousness: A case for biological computationalism. Neuroscience and Biobehavioral Reviews, 181, 106524. https://doi.org/10.1016/j.neubiorev.2025.106524
- Estonian Research Council press release. (2025, December 22). A new theory of biological computation might explain consciousness. EurekAlert! https://www.eurekalert.org/news-releases/1110849
- Cambridge University Press. (2025). Metabolic considerations for cognitive modeling. Behavioral and Brain Sciences. https://doi.org/10.1017/S0140525X25103956
