June 17, 2021
How fish in dark water form sensory representations of what’s around them and how mammals learn to recognize shapes on a flashcard may seem like different research questions, but for Brent Doiron, professor of neurobiology and statistics at the University of Chicago, they both come down to neurons.
As a theorist with a background in physics, Doiron models the brain, and the network of neurons that compose it, as a complex system: interactions among individual components produce activity in the group that can't be explained by examining the components in isolation. For example, knowing the activity of a single neuron involved in vision doesn't tell us how the brain learns familiar shapes and patterns. We also need to know how that one neuron connects to and works with thousands of other neurons.
“The whole is greater than the simpler parts,” Doiron says. “When we look at thousands of connected neurons, we find phenomena in that network that aren’t just an inheritance of phenomena at a lower scale.”
Doiron’s pursuit of complex systems, though, didn’t start with neurons. His path was more circuitous.
“I grew up in Ottawa, went to the University of Ottawa, and didn’t plan out a career in neuroscience,” says Doiron. “Instead, I knew I was interested in complex systems, so, as an undergraduate, I thought I would apply that approach to social insects.”
Doiron learned of a local professor of physics, Wendy Brandts, who modeled how simple rules of interactions between ants resulted in colonies of thousands of cooperating individuals. He decided to visit her office to try to get involved in her research.
“When I knocked on her office door, another physicist, her husband André Longtin, answered instead. He and Wendy shared an office.” The two sat down to talk and, by the end of the meeting, Doiron left the office convinced he should train with Longtin, a physicist researching not social insects, but neural systems, who ultimately became his PhD advisor.
This twist of fate offered Doiron an unexpected entry into computational neuroscience, which was in its infancy as a field.
“Today, a lot of students are wise to understanding computational neuroscience as a discipline and start positioning themselves earlier on in undergraduate and graduate work, but when I was starting out that really wasn’t the case,” he recalls. “There were very few programs devoted to computational neuroscience. I was fortunate that Longtin was a pioneer in the field and was connected to established experimentalists.”
In particular, Longtin collaborated with Leonard Maler, a professor of cellular and molecular medicine whose research focused on electric fish. These fish produce electric fields that surround their bodies and are warped by objects they encounter. The fish use these distortions to build a 3D representation of their surrounding environment. To detect the distortions, the fish bear specialized organs in their skin that sense electric fields and signal the presence of nearby objects, such as potential prey, to the brain. It is much like having a 3D retina that sees only the nearby world, as if in a fog.
Under these mentors, Doiron recalls, “I began to use the same techniques I would have for ant colonies on the brain, because both are complex systems of interacting individuals. Instead of a group of insects, I began to analyze a group of neurons.”
While Maler measured how the electrosensory signals coalesce in the brain, Longtin and Doiron modeled these coalescing signals. Together, the team realized the environmental map the brain creates depends not solely on these sensory signals, but also on the connections between sensory neurons, and the intrinsic noise these connections allow to flow through the neural system.
Most researchers see noise, or variability in how a neural network reacts to the same sensory stimulus, as a challenge that obscures measurement of a sensory signal, the representation of that stimulus encoded in neural activity. The most common tactic is to average data from many trials, interpreting the average as the signal and the between-trial variability as noise. Doiron, however, believes that applying this approach to neural systems overlooks an important part of their coding.
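The conventional decomposition described above can be sketched in a few lines of numpy. This is a toy illustration, not Doiron's actual analysis: the "true" signal, the noise level, and the neuron and trial counts are all made-up numbers chosen only to show the average-as-signal, residual-as-noise split.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recording: 200 repeats of the same stimulus across 50 neurons.
n_trials, n_neurons = 200, 50
true_signal = rng.normal(1.0, 0.5, size=n_neurons)          # stimulus-driven response
noise = rng.normal(0.0, 0.3, size=(n_trials, n_neurons))    # trial-to-trial variability
responses = true_signal + noise                             # what the experiment records

# The standard tactic: the trial average is taken as the "signal" ...
estimated_signal = responses.mean(axis=0)
# ... and whatever is left over on each trial is treated as "noise".
residual_noise = responses - estimated_signal
```

By construction the residuals average to zero for each neuron, so everything systematic ends up in `estimated_signal`; the point of Doiron's critique is that the discarded residuals can themselves be structured and shared across neurons.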
“Because neurons are synaptically connected, and so influence each other’s activity, they share a lot of noise with each other,” he says. Doiron’s work shows that this shared noise allows coordination between neurons, even in the absence of a sensory signal. In the case of electric fish, this coordinated activity primes neurons to work together to respond to the sensory signal. In particular, the combination of noise and sensory signal facilitates prolonged activity among sensory neurons, and thus a stronger environmental map.
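The idea that connected neurons share noise, and so coordinate even without a stimulus, can be illustrated with a minimal two-neuron sketch. The mixing weight `c` is a made-up stand-in for how strongly the network couples the pair; nothing here reproduces the fish models themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two neurons receive a common ("shared") noise input plus independent
# private noise. There is no sensory signal at all in this simulation.
n_steps = 10_000
shared = rng.normal(size=n_steps)       # noise common to both neurons
private_a = rng.normal(size=n_steps)    # noise unique to neuron A
private_b = rng.normal(size=n_steps)    # noise unique to neuron B

c = 0.7  # hypothetical coupling strength: fraction of variance that is shared
neuron_a = c * shared + np.sqrt(1 - c**2) * private_a
neuron_b = c * shared + np.sqrt(1 - c**2) * private_b

# Correlated activity emerges with no stimulus present: corr ≈ c**2.
corr = np.corrcoef(neuron_a, neuron_b)[0, 1]
```

The measured correlation sits near `c**2` even though neither neuron was driven by a signal, which is the sense in which shared noise alone can coordinate a population and prime it to respond together.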
Since his graduate work, Doiron has moved from studying neural interactions in fish to those occurring in the mammalian brain, including those interactions that allow plasticity—the ability of neural networks to change in connectivity, and thus re-wire themselves.
Once Doiron started thinking about how network structure gave rise to coordinated activity, he realized this could be applied to learning. A network structure with more connections can allow neurons to share more noise with each other, but shared noise can also build new connections between neurons. These new connections are essentially a memory of the network’s past activity, which in turn alters its future activity.
“Learning can occur because when neurons in a system are co-activated, their connectivity increases over time,” he says. “In this way, noise shared between neurons can impact learning in the systems.”
For example, a network of visual neurons might learn to recognize an image such as a black-and-white shape presented on a flashcard. Initially, the activated neurons fall into two sets: those driven by the shape and those driven by random noise. But the shared noise builds synapses between these active sets regardless of what drives their activity. Consequently, the next presentation of the same image will again reliably drive the shape-driven set, but will also indirectly drive the noise-activated set through these new connections. As a result, an image presented repeatedly uses noise to “recruit” neurons, building a sub-network that serves to remember and recognize it.
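The recruitment story above can be sketched with a simple Hebbian update rule, where co-active pairs of neurons strengthen their connection. Everything in this toy is an assumption for illustration: the network size, the split between stimulus-driven and noise-only neurons, the noise probability, and the learning rate are invented, and the rule is a generic Hebbian update rather than Doiron's published models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical network: 20 neurons, the first 10 driven by the flashcard
# stimulus, the remaining 10 active only when noise happens to hit them.
n = 20
stim = np.zeros(n)
stim[:10] = 1.0                      # stimulus-driven set

W = np.zeros((n, n))                 # synaptic weights, initially unconnected
eta = 0.05                           # learning rate (made-up value)

for trial in range(100):
    # On each presentation, noise co-activates a random subset of neurons.
    noise = (rng.random(n) < 0.3).astype(float)
    # Activity = stimulus drive + noise drive + recurrent input, clipped to [0, 1].
    activity = np.clip(stim + noise + W @ stim, 0.0, 1.0)
    # Hebbian rule: neurons that fire together wire together.
    W += eta * np.outer(activity, activity)
    np.fill_diagonal(W, 0.0)         # no self-connections

# Weights from the stimulus-driven set onto the noise-only set: these synapses
# were built purely by chance co-activation, yet now let the stimulus alone
# recruit neurons it never drove directly.
recruited = W[10:, :10].mean()
```

After training, `recruited` is positive: the stimulus-driven set has acquired outgoing synapses onto neurons that were only ever noise-activated, which is the sense in which repeated presentations use noise to build a recognition sub-network.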
“The correlated activity between neurons tells you about network structure, predicts changes in that structure, and, in turn, future activity,” Doiron says. Thus, simple rules for connectivity between any two neurons, applied to a network of thousands, can help researchers derive how activity—derived from noise, sensory signal, or both—impacts brains and the computations they perform.
Doiron loves the challenge his field presents.
“Computational neuroscience definitely has that kind of frontier, pre-Copernican feeling to it,” he says. “The lack of hard-wired paradigms that we all agree on infuriates some and excites others, and I guess I’m in the latter camp.”
This enthusiasm will serve as a strength in his new role as inaugural director of the Grossman Institute for Quantitative Biology and Human Behavior, which seeks to integrate the work of theorists and experimentalists at UChicago across the Physical, Biological, and Social Science Divisions toward a better understanding of neural systems. In bringing together researchers from these disparate disciplines, Doiron’s goal as director reflects his goal as a researcher: to use computation to unify experimental observations of neural systems.
“The point of theory is to find links,” he says, and this is the strength of his approach. “Neuroscience keeps finding these new and interesting aspects of the nervous system and cataloging them, which is important. But I hope theory starts to unify these findings under a common umbrella.”