Major NIH award to study how the brain infers structure from sensory signals may have applications for disorders like schizophrenia and offer insights for artificial intelligence
Imagine you're sitting on a train. You look out the window and see another train on an adjacent track that appears to be moving. But has your train stopped while the other train moves, or are you moving while the other train is stopped?
The same sensory experience, the view of the other train, can yield two very different perceptions: the sensation that you yourself are moving, or the sensation that you are stationary while an object moves past you.
Human brains are constantly faced with such ambiguous sensory inputs. To resolve the ambiguity and perceive the world correctly, our brains employ a process known as causal inference.
Causal inference is a key to learning, reasoning, and decision making, but researchers currently know little about the neurons involved in the process.
To help bridge that gap, a team of researchers at the University of Rochester, including Greg DeAngelis, the George Eastman Professor of Brain and Cognitive Sciences, and Ralf Haefner, an assistant professor of brain and cognitive sciences, received a $12.2 million grant from the National Institutes of Health for a project to better understand how the brain uses causal inference to distinguish self-motion from object motion.
The five-year award is part of the NIH's Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. The insights generated by the project, which also involves researchers at New York University, Harvard Medical School, Rice University, and the University of Washington, may have important applications in developing treatments and therapies for neural disorders such as autism and schizophrenia, as well as inspire advances in artificial intelligence.
"This NIH BRAIN Initiative Award is the biggest research award in the history of the Department Brain and Cognitive Sciences," says Duje Tadin, professor and chair of the department at Rochester. "It aims to solve the key question of how our brains interpret the information collected by our senses. This research builds on a longstanding strength of BCS of using computational methods to understand both behavior and underlying neural mechanisms."
Unraveling a complicated circuit of neurons
Causal inference involves a complicated circuit of neurons and other sensory mechanisms that are not well understood, DeAngelis says, because "sensory perception works so well most of the time, so we take for granted how difficult of a computational problem it is."
In reality, sensory signals are noisy and incomplete, and many different events in the world can produce similar patterns of sensory input.
Consider a spot of light that moves across the retina of the eye. The same visual input could be the result of a variety of situations: it could be caused by an object that moves in the world while the viewer remains stationary, such as a person standing still at a window and observing a moving ambulance with a flashing light; it could be caused by a moving observer viewing a stationary object, such as a runner noticing a lamppost from a distance; or it could be caused by many different combinations of object motion, self-motion, and depth.
The brain has a difficult problem to solve: it must infer what most likely caused the specific pattern of sensory signals that it received. It can then draw conclusions about the situation and plan appropriate actions in response.
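To make the idea concrete, here is a minimal, hypothetical sketch in Python of how such an inference can be framed. It is a toy model, not the model the team is building: it assumes the observer receives a noisy retinal-motion cue and a noisy vestibular (self-motion) cue, and uses Bayes' rule to weigh two candidate causes, "an object moved" versus "I moved." The function name and the simple Gaussian noise assumptions are illustrative choices, not anything described in the project.

    # Toy causal-inference sketch (hypothetical, for illustration only).
    # The observer receives a noisy retinal-motion cue r and a noisy
    # vestibular (self-motion) cue v, and asks which cause is more probable:
    # "an external object moved" or "I moved."
    import numpy as np
    from scipy.stats import norm

    def prob_object_moved(r, v, sigma=1.0, prior_object=0.5):
        """Posterior probability that object motion, rather than self-motion,
        caused the observed pattern of sensory input.

        Toy generative assumptions:
          * "I moved" at some speed s: r ~ N(s, sigma) and v ~ N(s, sigma);
            marginalizing s over a broad flat prior makes the evidence
            proportional to N(r - v; 0, sqrt(2)*sigma).
          * "Object moved" while the observer stayed still: v ~ N(0, sigma)
            and r reflects the object's speed, so the evidence is
            proportional to N(v; 0, sigma) (the flat-prior factors cancel).
        """
        evidence_self = norm.pdf(r - v, loc=0.0, scale=np.sqrt(2) * sigma)
        evidence_object = norm.pdf(v, loc=0.0, scale=sigma)

        # Bayes' rule: weigh each explanation's evidence by its prior.
        post_object = evidence_object * prior_object
        post_self = evidence_self * (1.0 - prior_object)
        return post_object / (post_object + post_self)

    # Retinal and vestibular cues agree: self-motion explains the input.
    print(prob_object_moved(r=2.0, v=2.0))   # ~0.16
    # Strong retinal motion but almost no vestibular signal: object motion.
    print(prob_object_moved(r=2.0, v=0.1))   # ~0.78

In this simplified picture, when the two cues agree the self-motion explanation wins, and when retinal motion appears without a matching vestibular signal the object-motion explanation becomes more probable. The actual inference the brain performs, and that the researchers aim to characterize, involves far richer signals and neural circuitry.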
Using data science, lab experiments, computer models, and cognitive theory, DeAngelis, Haefner, and their colleagues will pinpoint single neurons and groups of neurons that are involved in the process. Their goal is to identify how the brain generates a consistent view of reality through interactions between the parts of the brain that process sensory stimuli and the parts of the brain that make decisions and plan actions.
Developing therapies and artificial intelligence
Recognizing how the brain uses causal inference to separate self-motion from object motion may help in designing artificial intelligence and autopilot devices.
"Understanding how the brain infers self-motion and object motion might provide inspiration for improving existing algorithms for autopilot devices on planes and self-driving cars," Haefner says.
For example, an aircraft's autopilot must account for the plane's own motion through the air while also detecting and avoiding other moving planes around it.
The research may additionally have important applications in developing treatments and therapies for neural disorders such as autism and schizophrenia, conditions in which causal inference is thought to be impaired.
"While the project is basic science focused on understanding the fundamental mechanisms of causal inference, this knowledge should eventually be applicable to the treatment of these disorders," DeAngelis says.