Bayesian methods for discovering structure in neural and behavioral data

Scott Linderman

(Columbia University)


Date: April 5, 2018

License: CC BY-NC-ND 2.5


New recording technologies are transforming neuroscience, allowing us to precisely quantify neural activity, sensory stimuli, and natural behavior. How can we discover simplifying structure in these high-dimensional data and relate these domains to one another? I will present my work on developing Bayesian methods to answer this question. First, I will develop state-space models to study global brain states and recurrent dynamics in the neural activity of C. elegans. In doing so, I will draw on prior knowledge and theory to build interpretable models. When our initial models fall short, I will show how we can criticize and revise them by inserting flexible components, like artificial neural networks, at judiciously chosen locations. Next, I will discuss the Bayesian inference algorithms I have developed to fit such models at the scales required by modern neuroscience. The key to efficient inference will be augmentation schemes and approximate methods that exploit the structure of the model. This example is illustrative of a broader framework for harnessing recent advances in machine learning, statistics, and neuroscience. Prior knowledge and theory provide the starting point for interpretable models, machine learning techniques lend additional flexibility where needed, and new Bayesian inference algorithms provide the means to fit these models and discover structure in neural and behavioral data.
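To make the state-space idea concrete, here is a minimal sketch of a linear-Gaussian state-space model with Kalman filtering, the simplest special case of the model family the abstract describes. All dimensions and parameter values below are arbitrary assumptions chosen for illustration, not the models used in the talk.

```python
import numpy as np

# Illustrative linear-Gaussian state-space model: latent states x_t evolve
# linearly with Gaussian noise, and observations y_t (e.g., neural activity)
# are noisy linear readouts of x_t.  Parameters here are made up for the demo.
rng = np.random.default_rng(0)

T, D, N = 200, 2, 5               # time steps, latent dim, observed "neurons"
A = np.array([[0.99, -0.05],      # slow rotation-like latent dynamics
              [0.05,  0.99]])
C = rng.normal(size=(N, D))       # emission matrix mapping latents to neurons
Q = 0.01 * np.eye(D)              # latent (process) noise covariance
R = 0.10 * np.eye(N)              # observation noise covariance

# Simulate latents and observations from the generative model.
x = np.zeros((T, D))
y = np.zeros((T, N))
for t in range(1, T):
    x[t] = A @ x[t - 1] + rng.multivariate_normal(np.zeros(D), Q)
for t in range(T):
    y[t] = C @ x[t] + rng.multivariate_normal(np.zeros(N), R)

# Kalman filter: exact posterior over x_t given y_{1:t} in this linear model.
mu, Sigma = np.zeros(D), np.eye(D)
filtered = np.zeros((T, D))
for t in range(T):
    # Predict step: propagate the posterior through the dynamics.
    mu_pred = A @ mu
    Sigma_pred = A @ Sigma @ A.T + Q
    # Update step: condition on the observation y_t.
    S = C @ Sigma_pred @ C.T + R
    K = Sigma_pred @ C.T @ np.linalg.inv(S)
    mu = mu_pred + K @ (y[t] - C @ mu_pred)
    Sigma = Sigma_pred - K @ C @ Sigma_pred
    filtered[t] = mu

# The filtered estimate should track the true latent trajectory far better
# than the prior mean (zero).
err_filter = np.mean((filtered - x) ** 2)
err_zero = np.mean(x ** 2)
```

The abstract's models go well beyond this sketch (switching and recurrent dynamics, neural-network components), but they are fit in the same spirit: a structured probabilistic model whose posterior is computed, exactly or approximately, by an inference algorithm that exploits that structure.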

Created: Thursday, April 12th, 2018