ATLANTA—Sergey Plis, associate professor of computer science at Georgia State University, and his collaborators have received $1.3 million from the Collaborative Research in Computational Neuroscience Program, jointly run by the National Science Foundation and the National Institutes of Health (NIH), to study causal connections in the brain.
The four-year award, funded through the NIH, will support interdisciplinary research to build causal learning models that can produce a blueprint of how brain regions interact.
A maxim in science is that “correlation is not causation.” In other words, simply because you observe an association between two variables does not mean there is a cause-and-effect relationship between them. Ethically, however, brain researchers are often limited to collecting observational data and using patterns of correlation to infer patterns of causation.
“We can’t just poke around in the brain to see how it works,” said Plis, who is also director of machine learning at the university’s Center for Translational Research in Neuroimaging & Data Science (TReNDS).
There is a longstanding problem associated with observational data about the brain: the timescale of brain activity and the timescale of the measurement modality do not match. For example, Plis noted, human neurons fire much faster than functional magnetic resonance imaging (fMRI) can measure brain activity.
“The inferences that scientists draw using this data are statistically sound, but they rely on a false assumption—that the timescales are the same,” said Plis. “As a result, these methods can produce incorrect or unreliable information about how brain regions influence each other.”
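The mismatch Plis describes can be illustrated with a toy simulation. The sketch below (a hypothetical example, not the team's actual method) simulates two "brain regions" at a fast timescale, where region X clearly drives region Y, then subsamples the signals the way a slow scanner would. At the fast rate the directed effect is obvious in the lagged correlations; at the slow measurement rate the asymmetry largely washes out, which is why lag-based causal methods applied at the scanner's timescale can mislead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fast-timescale simulation: region X drives region Y with a one-step lag.
T, k = 20000, 10          # fast time steps; subsampling factor (measurement is 10x slower)
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.6 * x[t - 1] + rng.normal()
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def lag1_corr(a, b):
    """Correlation between a[t-1] and b[t]: does 'a' help predict 'b' one step ahead?"""
    return np.corrcoef(a[:-1], b[1:])[0, 1]

# At the fast timescale the direction is clear: X->Y is much stronger than Y->X.
print("fast  X->Y:", round(lag1_corr(x, y), 2), " Y->X:", round(lag1_corr(y, x), 2))

# Subsample as a slow scanner would: both lagged correlations shrink toward zero
# and the directional asymmetry nearly vanishes at the measurement rate.
xs, ys = x[::k], y[::k]
print("slow  X->Y:", round(lag1_corr(xs, ys), 2), " Y->X:", round(lag1_corr(ys, xs), 2))
```

Recovering the fast-timescale causal structure from the subsampled data alone, and fusing it with faster modalities such as MEG or EEG, is the kind of problem the funded project targets.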
Other types of brain imaging, such as magnetoencephalography (MEG) or electroencephalography (EEG), can measure processes at a faster rate than fMRI—but they produce less detailed data. Scientists lack methods to integrate causal information across multiple imaging modalities with significantly varied timescales.
The project aims to develop novel theories and methods that enable learning about the brain’s causal structure and connectivity, even when there is a significant mismatch between the speed of the brain and the measurement. The resulting set of algorithms will provide the neuroimaging community with a more robust, reliable understanding of directed connectivity in the brain.
“We will take data collected at different speeds by different modalities and combine it to reveal more about how brain regions influence each other,” said Plis. “For example, we can take a slow modality like fMRI and learn causal information at the faster neural scale, and then fuse it with what we learn from MEG or EEG. By combining them, one could partially correct the other.”
In addition to providing scientists with a new set of methodological tools, the project will advance scientific knowledge about the neural bases of diseases. The team plans to apply their models to schizophrenia, which is considered a disorder of “disconnectivity.”
“We know that something has gone wrong with the connectivity inside these patients’ brains, but there are competing theories about what exactly has happened,” said Plis. “Using our dynamic causal fusion models, we’ll be able to test predictions and validate those theories.”
The computational models could also be applied toward other problems that deal with varied, complex data sets, Plis added, such as questions around causal relationships related to weather or climate.
Plis’s collaborators include Vince Calhoun, Distinguished University Professor of Psychology and director of TReNDS; Godfrey Pearlson, professor of psychiatry and neuroscience at Yale School of Medicine; and David Danks, professor of philosophy and psychology at the University of California-San Diego. They are seeking additional collaborators to join the project.
An abstract of the grant, 1R01MH129047, is available at the NIH Reporter website.
With a background in engineering, artificial intelligence and computer science, Plis is focused on developing computational instruments that enable knowledge extraction from observational multimodal data collected at different temporal and spatial scales.