
Moving neurons. (Midjourney)
Today’s most powerful artificial intelligence (AI) systems are still far less capable than even the brains of fruit flies, which can learn and adapt using just a fraction of the energy AI requires.

Fruit flies are what USC researchers call a “generalization machine,” despite having only 1 million neurons. (iStock)
While AI systems can process enormous amounts of data, they struggle to generalize, adapt and integrate information as efficiently as even the simplest living organisms. One of the biggest obstacles is that scientists still do not fully understand how the brain’s structure gives rise to intelligence.
Researchers at the USC Viterbi School of Engineering have taken a major step toward closing that gap in their study published in Nature Communications. The team discovered that the electrical activity of just a few individual neurons contains enough information to reveal the structure and function of an entire brain network, offering a new roadmap for building AI systems that think more like brains do.
Led by Paul Bogdan, the Jack Munushian Early Career Chair and associate professor at USC Viterbi’s Ming Hsieh Department of Electrical and Computer Engineering, the study is the first to introduce a mathematical framework that allows scientists to decode how neural networks work without observing every neuron at once—a long-standing challenge in both neuroscience and AI.
Breaking ‘Mission Impossible’: Decoding the Human Brain
For decades, scientists have struggled to map the brain's full structure and show how neurons interact to produce thought, a challenge known as the "scaling problem." Human brains contain billions of neurons and trillions of connections, yet researchers can currently monitor less than 1% of the neurons in even a single brain region, making it impossible to observe all neural interactions simultaneously.
To work around this, scientists have long relied on computational models to predict large-scale brain activity without measuring every neuron. But those models have been limited: they assumed neuronal firing was random and "memoryless," a simplification that stripped away key information about how neural networks actually operate.
Bogdan’s team was the first to show that this assumption is fundamentally wrong. The researchers discovered that neurons possess what they call “causal fractal memory,” meaning a neuron’s past electrical activity influences its future behavior across multiple timescales. This hidden memory allows individual neurons to carry information about the larger network they belong to.
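The difference between "memoryless" firing and firing with memory can be made concrete with a toy comparison. The sketch below (with assumed parameters, not the study's data) contrasts the interspike intervals of a homogeneous Poisson process, whose intervals are uncorrelated, with intervals driven by a slowly drifting rate, whose past predicts their future:

```python
# Sketch: compare lag-1 autocorrelation of "memoryless" Poisson
# interspike intervals (ISIs) with ISIs modulated by a persistent
# AR(1) rate. All parameters are illustrative assumptions.
import numpy as np

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a 1-D sequence."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(1)

# Memoryless: exponential ISIs of a homogeneous Poisson process
poisson_isis = rng.exponential(scale=1.0, size=5000)

# Memory: intervals driven by a slowly drifting AR(1) log-rate
drift = np.zeros(5000)
for t in range(1, 5000):
    drift[t] = 0.9 * drift[t - 1] + rng.normal()
persistent_isis = np.exp(0.2 * drift)
```

The Poisson intervals show near-zero lag-1 autocorrelation, while the drift-driven intervals stay strongly correlated; "causal fractal memory" extends this idea of persistence across many timescales at once.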
Building on this key insight, the team used a mathematical framework known as multifractal analysis to examine the timing of neuronal “spikes” — the electrical pulses neurons use to communicate. Those timing patterns encode the network’s underlying topology, revealing whether it is organized like a branching tree or a densely connected mesh, even without observing every neuron.

Multifractal analysis of spiking dynamics as a tool to infer functional network topology. (Paul Bogdan)
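The kind of multifractal fingerprint described above can be illustrated with a toy partition-function analysis of interspike intervals. This is a minimal sketch under assumed scales and moments, not the paper's actual pipeline:

```python
# Hedged sketch: toy partition-function multifractal analysis of
# interspike intervals (ISIs). Scales, moments and the function name
# are illustrative assumptions, not the study's method.
import numpy as np

def multifractal_exponents(isis, qs=(-2, -1, 1, 2), scales=(2, 4, 8, 16)):
    """Estimate scaling exponents tau(q) from block sums of ISIs.

    For each box size s, partition the ISI sequence into blocks,
    compute normalized block masses mu_i, then the partition function
    Z(q, s) = sum_i mu_i**q. tau(q) is the slope of log Z vs log s.
    """
    isis = np.asarray(isis, dtype=float)
    total = isis.sum()
    taus = {}
    for q in qs:
        log_z, log_s = [], []
        for s in scales:
            n_blocks = len(isis) // s
            blocks = isis[: n_blocks * s].reshape(n_blocks, s)
            mu = blocks.sum(axis=1) / total
            mu = mu[mu > 0]
            log_z.append(np.log(np.sum(mu ** q)))
            log_s.append(np.log(s))
        taus[q] = np.polyfit(log_s, log_z, 1)[0]  # slope = tau(q)
    return taus

# A perfectly uniform ISI stream is monofractal: tau(q) = q - 1.
# Curvature of tau(q) away from that straight line is the signature
# of multifractality in real spike trains.
taus = multifractal_exponents(np.ones(1024))
```

In this toy setting the uniform stream recovers the monofractal line exactly; the study's claim is that the deviations from such a line, measured on real spike timing, encode the network's topology.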
Building Blocks of Intelligence
Like Lego building blocks, individual neurons in a neural network perform basic tasks that make up the brain’s complex cognitive abilities.
As part of the mission to map out the brain network, Bogdan's team wanted to study how a single neuron fires while performing some of these building-block tasks. The team trained artificial spiking neural networks (SNNs) to perform three tasks: integration, differentiation, and delay. Integration involves summing information, like adding numbers; differentiation is about comparing, like distinguishing an apple from a pear; and delay is used when a system must wait for more data before making a decision.
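As a deliberately minimal illustration of the integration task, a leaky integrate-and-fire neuron sums its inputs until a threshold is crossed. The parameters below are illustrative assumptions, not the study's model:

```python
# Sketch: a leaky integrate-and-fire (LIF) neuron performing the
# "integration" building block. Threshold and leak values are
# illustrative, not taken from the paper.

def lif_integrate(inputs, threshold=1.0, leak=0.95):
    """Accumulate leaky-summed input; emit a spike (1) when the
    membrane potential crosses threshold, then reset to zero."""
    v, spikes = 0.0, []
    for x in inputs:
        v = leak * v + x
        if v >= threshold:
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

# A steady drive of 0.3 per step climbs to threshold every few
# steps, producing a regular spike train.
spikes = lif_integrate([0.3] * 10)
```

The spike timing here depends on how the neuron accumulates its past inputs, which is why, in the study's framing, a single neuron's firing pattern can betray the task the network is performing.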
Researchers discovered that each type of task leaves a distinct mathematical, or “multifractal” signature in the firing patterns of individual neurons.
“It’s almost like talking to one person and being able to figure out everything that the entire organization is planning to do,” said Bogdan. By observing just one neuron, the team could determine which task the entire network was performing.
This discovery provides scientists with a powerful new way to map neural networks and understand how intelligence emerges, a critical step toward building AI systems that can generalize and adapt like biological brains.

The graphs show how artificial spiking neurons perform three basic cognitive tasks: integration, differentiation and delay. (Paul Bogdan)
Impact on NeuroAI and Beyond
This study’s findings have major implications for NeuroAI, a growing field that aims to design AI inspired by biological efficiency. While modern AI systems, like large language models, excel at processing text, they often struggle to integrate vision, logic and decision-making. By uncovering the mathematical rules that govern biological intelligence, researchers hope to create models that are more robust, energy-efficient and self-optimizing.
Beyond AI, the framework could accelerate drug discovery and agricultural research. By reducing the need to analyze massive biological datasets, scientists could more quickly identify genes that make plants resistant to disease and apply those insights to improve crops, with better solutions to global issues like food insecurity and climate change.
The paper, "Spiking dynamics of individual neurons reflect changes in the structure and function of neuronal networks," was a multi-year collaboration between USC researchers Paul Bogdan, Ruochen Yang, Heng Ping and Xiongye Xiao, and Roozbeh Kiani of New York University. The research was supported by the National Science Foundation (NSF), DARPA, the U.S. Army Research Office and the National Institutes of Health, highlighting its broad scientific significance. After the study's publication, Bogdan was awarded a $310,000 grant from NSF to support further research in this area.
Published on February 9th, 2026
Last updated on February 9th, 2026

