Using modern, deep Bayesian inference to analyse neural data and understand neural systems

Laurence Aitchison



Date: January 30, 2018

License: CC BY-NC-ND 2.5

Description:

I consider how Bayesian inference can address the analytical and theoretical challenges presented by increasingly complex, high-dimensional neuroscience datasets.

With the advent of Bayesian deep neural networks, GPU computing, and automatic differentiation, it is becoming increasingly possible to perform large-scale Bayesian analyses of data, simultaneously inferring complex biological phenomena and experimental confounds. I present a proof of principle: inferring causal connectivity from an all-optical experiment combining calcium imaging with cell-specific optogenetic stimulation. The model simultaneously infers spikes from fluorescence, models low-rank activity and the extent of off-target optogenetic stimulation, and gives explicit uncertainty estimates for the inferred connection matrix.
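As an illustration of the general approach (not the talk's actual model or code), the following is a minimal sketch of variational inference over a connection matrix using automatic differentiation and GPU-friendly tooling. It assumes a heavily simplified linear dynamics model with Gaussian noise and a mean-field Gaussian posterior; all names (n_cells, W_true, fluor, etc.) are illustrative.

```python
# Minimal sketch: variational inference over a connectivity matrix W from
# noisy "fluorescence" traces, under a simplified linear-Gaussian model.
import torch

torch.manual_seed(0)
n_cells, n_time = 20, 500
decay, obs_noise = 0.6, 0.05

# Simulated data: activity driven by a ground-truth weight matrix,
# observed through a noisy readout.
W_true = 0.2 * torch.randn(n_cells, n_cells) / n_cells ** 0.5
x = torch.zeros(n_cells, n_time)
for t in range(1, n_time):
    x[:, t] = decay * x[:, t - 1] + W_true @ x[:, t - 1] + 0.1 * torch.randn(n_cells)
fluor = x + obs_noise * torch.randn_like(x)

# Mean-field Gaussian posterior over the connection matrix W.
mu = torch.zeros(n_cells, n_cells, requires_grad=True)
log_sigma = torch.full((n_cells, n_cells), -2.0, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)
prior = torch.distributions.Normal(0.0, 0.1)

for step in range(1000):
    opt.zero_grad()
    q = torch.distributions.Normal(mu, log_sigma.exp())
    W = q.rsample()                                   # reparameterised sample of W

    # Likelihood: one-step predictions of the observed traces under W.
    pred = decay * fluor[:, :-1] + W @ fluor[:, :-1]
    log_lik = torch.distributions.Normal(pred, obs_noise).log_prob(fluor[:, 1:]).sum()

    # ELBO = E_q[log p(data | W)] - KL(q(W) || p(W)); minimise its negative.
    kl = torch.distributions.kl_divergence(q, prior).sum()
    (-(log_lik - kl)).backward()
    opt.step()

# Posterior mean and standard deviation: inferred connectivity and its
# uncertainty, analogous in spirit to the uncertainty estimates in the talk.
print(mu.detach()[:3, :3])
print(log_sigma.exp().detach()[:3, :3])
```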

Further, there is considerable evidence that humans and animals use Bayes' theorem to reason optimally about uncertainty. I show that one particular Bayesian inference method, sampling, emerges naturally when combining classical sparse-coding models with a biophysically motivated energetic cost of achieving reliable responses. We understand these results theoretically by noting that the resulting combined objective approximates the objective of a classical Bayesian method: variational inference. Given this strong theoretical underpinning, we are able to extend the model to multi-layered networks modelling MNIST digits, to recurrent networks, and to fast recurrent networks.
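For reference, the standard variational-inference objective (the evidence lower bound) is given below; the rough correspondence the talk appeals to is between its reconstruction term and sparse coding's reconstruction cost, and between its remaining terms and the energetic cost of reliable responses. This is a textbook statement, not the talk's derivation.

```latex
% Evidence lower bound (ELBO) maximised in variational inference over an
% approximate posterior q(z) for latent causes z of data x:
%   - the first term rewards accurate reconstruction of the data,
%   - the second penalises divergence of q from the (sparse) prior p(z).
\mathcal{L}(q) \;=\; \mathbb{E}_{q(z)}\bigl[\log p(x \mid z)\bigr]
              \;-\; D_{\mathrm{KL}}\bigl(q(z)\,\|\,p(z)\bigr)
              \;\le\; \log p(x)
```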
