โ† Back to Science Brain neural networks showing AI-like computational patterns and backpropagation mechanisms
🧠 Neuroscience: Brain-AI Research

How the Human Brain Uses AI-Like Algorithms Scientists Never Expected

📅 12 February 2026 ⏱️ 7 min read
When researchers designed the first artificial neural networks, they drew inspiration from the human brain. Now, decades later, a series of striking discoveries shows that the brain resembles AI systems far more than we ever imagined, from the way it learns to the way it organizes knowledge. The similarities are not merely superficial: they involve fundamental learning mechanisms.


🧠 Backpropagation: The Algorithm Shared by Brain and AI

Backpropagation is the foundational algorithm behind every modern deep learning system, from ChatGPT to self-driving cars. Its principle is simple: when a neural network makes an error, it sends error signals "backwards", from output to input, adjusting the weight of each connection to improve the next prediction.
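To make the idea concrete, here is a minimal sketch of that backward error flow in pure Python: a toy network with one sigmoid hidden unit and two weights, trained by the chain rule. The network shape, learning rate, and target value are invented for illustration; real systems do the same thing across millions of weights.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy network: 1 input -> 1 sigmoid hidden unit -> 1 linear output.
w1, w2 = random.uniform(-1, 1), random.uniform(-1, 1)
lr = 0.5

# Learn to map the input 1.0 to the target 1.0 (a minimal task).
x, target = 1.0, 1.0
for _ in range(200):
    # Forward pass: compute the prediction.
    h = sigmoid(w1 * x)
    y = w2 * h
    # Error at the output.
    err = y - target
    # Backward pass: the error signal travels from output to input
    # (chain rule), assigning blame to each connection.
    grad_w2 = err * h
    grad_h = err * w2                   # error signal arriving at the hidden unit
    grad_w1 = grad_h * h * (1 - h) * x  # propagated through the sigmoid's slope
    # Adjust each weight to reduce the next error.
    w2 -= lr * grad_w2
    w1 -= lr * grad_w1

print(round(w2 * sigmoid(w1 * x), 3))  # prediction ends up near the 1.0 target
```

The key point is the middle section: no weight sees the raw data, only an error signal handed backwards from the layer above it.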

For decades, neuroscientists considered it impossible for anything like this to occur in the biological brain. Neurons send signals in only one direction, so how could error signals travel "backwards"? The answer came from a series of studies revealing that the brain uses analogous, though not identical, mechanisms.

Measurements in mouse neurons showed that error signals do indeed travel in reverse through neural circuits. The brain does not use exactly the same algorithm as artificial networks, but the result is remarkably similar: synapses are adjusted based on error signals, in a way that gradually optimizes system performance.

🔬 Dendrites: The Brain's Hidden Computers

One of the most groundbreaking discoveries came in 2022 from Bar-Ilan University in Israel. Professor Ido Kanter's team published findings in Scientific Reports that challenge a 70-year-old assumption: that learning in the brain occurs exclusively through modification of synaptic strength (the connections between neurons).

Instead, the research showed that learning occurs primarily in dendritic trees, the long, branching "arms" of each neuron. The trunk and branches of the tree modify their strength, not just the "leaves" (the synapses). This means that a single neuron can implement deep learning algorithms that previously required thousands of artificial neurons.

💡 Why This Is So Important

For 70 years, neuroscience was based on the assumption that learning happens at synapses, the connections between neurons. Now it appears that each neuron is far more complex than we believed: it functions like an entire neural network on its own. This represents "a step toward the biologically plausible implementation of backpropagation algorithms," according to the researchers.

This discovery highlights a deep irony: artificial neural networks were designed as abstract simplifications of the brain, yet the brain ultimately uses mechanisms closer to AI algorithms than their creators ever realized.


🔮 Predictive Coding: The Brain as a Prediction Machine

A second striking similarity concerns how the brain processes sensory information. According to the theory of predictive coding, the brain does not passively wait to receive information from the eyes or ears. Instead, it continuously generates predictions about incoming signals and processes only the prediction errors: the difference between what it expected and what it actually received.

This is remarkably similar to how modern AI language models work. Models like GPT are trained in exactly this way: they predict the next word and learn from their mistakes. The brain does something analogous across every sensory function, from vision to hearing, continuously minimizing "prediction error".
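The core loop of predictive coding can be sketched in a few lines, with invented numbers: an internal estimate serves as the prediction, and only the prediction error, not the raw signal, drives updates to it.

```python
# A one-number "predictive coder": the internal estimate is the prediction,
# and only the prediction error (signal minus prediction) updates the model.
def predictive_coder(signal, lr=0.3):
    estimate = 0.0
    errors = []
    for s in signal:
        error = s - estimate      # difference between expectation and input
        estimate += lr * error    # learn from the error alone
        errors.append(abs(error))
    return estimate, errors

# A steady input: once the prediction matches it, the errors shrink to
# nearly zero, and there is almost nothing left to process.
estimate, errors = predictive_coder([5.0] * 30)
print(round(estimate, 2), errors[0], round(errors[-1], 4))
```

This is why a predictable world is cheap for such a system: surprise, not input, is what costs computation.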

Analyses of brain activity during speech comprehension revealed that neuron activation patterns strikingly resemble the internal representations of large language models. The brain processes language in hierarchical layers, from sounds to words to meaning, exactly like multilayered neural networks.

86 billion neurons in the human brain: each one potentially a micro-network
20 W brain power consumption: vs. gigawatts for AI data centers
100 trillion synapses: each one adjusted through mechanisms analogous to backpropagation


๐ŸŒ Convexity: The Geometry That Unites Mind and Machine

A 2025 study from the Technical University of Denmark (DTU), published in Nature Communications, added another piece to the puzzle. Researchers discovered that convexity, a geometric property, is a common feature of both the human mind and artificial neural networks.

What does this mean in practice? When we learn what a "cat" is, we don't store a single image; we build a flexible, convex region in our mental space that encompasses every possible cat: large, small, black, white. Lenka Tetkova's team showed that artificial neural networks develop exactly this structure. AI models trained on images, text, audio, and medical data form convex concept regions in a way identical to the human mind.

"We discovered that the same geometric principle that helps humans form and share concepts — convexity — also shapes the way machines learn, generalize, and align with us."

— Lenka Tetkova, Postdoctoral Researcher, DTU Compute (Nature Communications, 2025)

If convexity proves to be a reliable performance indicator, it could lead to AI models that learn faster and more efficiently โ€” especially in cases where available training data is scarce.
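What a convex concept region amounts to can be illustrated with a toy sketch (this is not the DTU team's method, just the geometric idea): two invented "concepts" as point clouds in a 2-D embedding space, a nearest-centroid classifier, and a check that every blend (convex combination) of two same-concept points is still classified as that concept.

```python
# Toy "mental space": invented 2-D embeddings for two concepts.
cats = [(1.0, 1.2), (1.5, 0.8), (0.8, 1.5), (1.3, 1.1)]
dogs = [(4.0, 4.2), (4.5, 3.8), (3.8, 4.5), (4.3, 4.1)]

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(p, centroids):
    # Nearest centroid wins (squared Euclidean distance).
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda name: d2(p, centroids[name]))

centroids = {"cat": centroid(cats), "dog": centroid(dogs)}

def region_is_convex(points, label, steps=5):
    # Sample convex combinations t*a + (1-t)*b for every pair of exemplars:
    # if all of them keep the label, the concept region behaves convexly.
    for a in points:
        for b in points:
            for i in range(steps + 1):
                t = i / steps
                blend = (t * a[0] + (1 - t) * b[0],
                         t * a[1] + (1 - t) * b[1])
                if classify(blend, centroids) != label:
                    return False
    return True

print(region_is_convex(cats, "cat"), region_is_convex(dogs, "dog"))
```

In other words: any point "between" two cats is still a cat. The finding is that real models, and apparently real minds, organize their concept spaces so that this holds.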


⚡ Super-Turing AI: When Technology Learns from Biology

These discoveries are not confined to theory. In March 2025, engineers at Texas A&M published in Science Advances the creation of a "Super-Turing AI" that reproduces fundamental characteristics of the biological brain. Unlike current AI systems, where training and memory are handled by separate hardware components, this system integrates them, exactly like the brain.

"AI data centers consume energy in gigawatts, while our brain consumes just 20 watts," explained Dr. Suin Yi of Texas A&M. His team leveraged principles of Hebbian learning ("cells that fire together, wire together") instead of traditional backpropagation. In testing, a drone equipped with this system navigated a complex environment without prior training, learning and adapting in real time, faster and with less energy than conventional AI.
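The contrast with backpropagation is easy to see in code. Below is a minimal sketch of a Hebbian update (not the Texas A&M hardware, just the textbook rule): no error signal travels backwards; a connection strengthens simply because its two endpoints are active at the same time.

```python
# Hebbian rule: dw[i][j] = lr * pre[i] * post[j].
# "Cells that fire together, wire together": strengthening is purely local,
# driven by co-activity rather than by a backward-propagated error.
def hebbian_update(w, pre, post, lr=0.1):
    return [[w[i][j] + lr * pre[i] * post[j] for j in range(len(post))]
            for i in range(len(pre))]

# Two input units, two output units, all connections start at zero.
w = [[0.0, 0.0], [0.0, 0.0]]

# Repeatedly co-activate input 0 with output 1; input 1 stays silent.
for _ in range(10):
    w = hebbian_update(w, pre=[1.0, 0.0], post=[0.0, 1.0])

print(w)  # only the co-active pair (input 0 -> output 1) gets strengthened
```

Because each update needs only locally available activity, rules like this map naturally onto hardware where memory and computation sit in the same place, which is where the energy savings come from.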

💡 Why This Changes Everything

These findings are not just about fundamental science. They have practical implications in two directions. First, in neuroscience: understanding that the brain uses mechanisms analogous to deep learning could help treat neurological disorders, from Alzheimer's disease to schizophrenia, since these conditions likely disrupt precisely these learning mechanisms.

Second, in artificial intelligence: if we can replicate the brain's efficiency in artificial systems, we could create AI that consumes a fraction of today's energy. This is critical at a time when AI data centers pose an increasingly significant environmental challenge.

The biggest surprise, perhaps, is philosophical. For decades, the question was whether AI can "think like us." Now it seems that, to some degree, it always did, because it was inspired by the very mechanisms the brain actually uses. The convergence between biological and artificial intelligence is no coincidence; it is evidence that fundamental learning principles hold regardless of the substrate, whether neurons or transistors.

brain-ai-parallels backpropagation neural-networks predictive-coding dendrites neuroscience artificial-intelligence computational-biology

📚 Sources