The goal of the Stanford Neurosciences Institute is to understand how the brain gives rise to mental life and behavior, both in health and in disease. Our research community draws from and informs multiple disciplines, including neuroscience, medicine, engineering, psychology, education and law. New discoveries will transform our understanding of the human brain, provide novel treatments for brain disorders, and promote brain health throughout the lifespan. We aim to benefit individuals, families and society.
SNI seminar videos
Scott Linderman
(Columbia University)
Bayesian methods for discovering structure in neural and behavioral data

Date: April 5, 2018
Description:
New recording technologies are transforming neuroscience, allowing us to precisely quantify neural activity, sensory stimuli, and natural behavior. How can we discover simplifying structure in these high-dimensional data and relate these domains to one another? I will present my work on developing Bayesian methods to answer this question. First, I will develop state-space models to study global brain states and recurrent dynamics in the neural activity of C. elegans. In doing so, I will draw on prior knowledge and theory to build interpretable models. When our initial models fall short, I will show how we criticize and revise them by inserting flexible components, like artificial neural networks, at judiciously chosen locations. Next, I will discuss the Bayesian inference algorithms I have developed to fit such models at the scales required by modern neuroscience. The key to efficient inference will be augmentation schemes and approximate methods that exploit the structure of the model. This example illustrates a broader framework for harnessing recent advances in machine learning, statistics, and neuroscience. Prior knowledge and theory provide the starting point for interpretable models, machine learning techniques lend additional flexibility where needed, and new Bayesian inference algorithms provide the means to fit these models and discover structure in neural and behavioral data.
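The state-space ideas above are easiest to grasp with a concrete miniature. Below is a minimal sketch (mine, not the speaker's code) of the simplest member of that model family: a two-state hidden Markov model with Gaussian emissions and hand-picked parameters, decoded with the Viterbi algorithm. In the talk's setting, the parameters would instead be learned, e.g. with the Bayesian inference algorithms described.

```python
# Minimal sketch: segmenting a time series into discrete latent "brain states"
# with a 2-state Gaussian HMM and Viterbi decoding. Parameters are fixed by
# hand here; in practice they would be learned (e.g. by EM or Gibbs sampling).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: activity switches between a low- and a high-mean regime.
true_states = np.repeat([0, 1, 0, 1], 50)
means, sigma = np.array([0.0, 3.0]), 1.0
x = rng.normal(means[true_states], sigma)

pi = np.array([0.5, 0.5])                      # initial state distribution
A = np.array([[0.95, 0.05], [0.05, 0.95]])     # sticky transition matrix

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Viterbi: most likely state sequence given the observations.
T, K = len(x), 2
logp = np.stack([log_gauss(x, means[k], sigma) for k in range(K)], axis=1)
delta = np.zeros((T, K))
psi = np.zeros((T, K), dtype=int)
delta[0] = np.log(pi) + logp[0]
for t in range(1, T):
    scores = delta[t - 1][:, None] + np.log(A)   # scores[i, j]: from i to j
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + logp[t]
states = np.zeros(T, dtype=int)
states[-1] = delta[-1].argmax()
for t in range(T - 2, -1, -1):
    states[t] = psi[t + 1, states[t + 1]]

print("fraction of time steps labeled correctly:", (states == true_states).mean())
```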
Todd Coleman
(UC San Diego)
Enteric neural engineering

Date: March 12, 2018
Description:
Gastrointestinal (GI) problems are the second leading cause of missed work or school in the US and account for roughly 10% of patient visits to physicians. Although structural and biochemical abnormalities are easy to diagnose, more than half of GI disorders involve abnormal functioning of the GI tract. Such “functional” GI disorders, considered the result of abnormal individual or interactive functioning of the enteric and central nervous systems, are typically managed with symptom-based questionnaires or invasive, intermittent procedures in specialized centers. In this talk, we will discuss our advancement of the high-resolution electrogastrogram (HR-EGG), acquired non-invasively from cutaneous multi-electrode arrays. We will discuss how the HR-EGG combined with advanced statistical signal processing methods allows for non-invasive extraction of GI motility parameters (propagation patterns, propagation velocity) that correlate with symptoms (the Gastroparesis Cardinal Symptom Index). We will discuss the implications of this finding, since numerous studies have shown no association between symptom improvement and gastric emptying for various drugs used to treat gastroparesis. With this improved ability to assess, we will next discuss advances in modifying the enteric nervous system, with our development of a red/far-red light switch to control genes optically. Lastly, we will discuss the potential of removing bottlenecks and benefitting large populations with our development of ambulatory monitoring systems and adhesive-integrated flexible electronic systems.
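To give a feel for the signal processing involved, here is a toy sketch (my construction, not the HR-EGG pipeline) that recovers the roughly 3 cycle/minute gastric slow-wave frequency and inter-electrode propagation delays from a simulated cutaneous array; the sampling rate, delays, and noise level are all invented.

```python
# Toy slow-wave analysis: dominant frequency from one channel's spectrum,
# propagation delays from the relative phase of each channel at that frequency.
import numpy as np

rng = np.random.default_rng(10)
fs, dur = 2.0, 600.0                     # 2 Hz sampling, 10 minutes
t = np.arange(0, dur, 1 / fs)
f_slow = 0.05                            # ~3 cycles/min gastric slow wave
delays = np.array([0.0, 2.0, 4.0, 6.0])  # assumed propagation delay per electrode
sig = np.stack([np.sin(2 * np.pi * f_slow * (t - d)) + 0.5 * rng.normal(size=t.size)
                for d in delays])

# Dominant frequency from the spectrum of the first channel.
spec = np.abs(np.fft.rfft(sig[0] - sig[0].mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_hat = freqs[spec.argmax()]
print(f"estimated slow-wave frequency: {f_hat * 60:.1f} cycles/min")

# Phase of each channel at f_hat, converted to a time delay per electrode.
phases = np.angle([np.fft.rfft(ch - ch.mean())[spec.argmax()] for ch in sig])
rel_delay = np.unwrap(phases[0] - phases) / (2 * np.pi * f_hat)
print("estimated inter-electrode delays (s):", np.round(rel_delay, 1))
```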
Further Information:
Todd P. Coleman received B.S. degrees in electrical engineering (summa cum laude) and computer engineering (summa cum laude) from the University of Michigan. He received M.S. and Ph.D. degrees in electrical engineering from MIT and did postdoctoral studies in neuroscience at MIT. He is currently a Professor in the Bioengineering Department at UCSD, where he directs the Neural Interaction Laboratory. Dr. Coleman’s research is highly multidisciplinary, drawing on tools from applied probability, physiology, and bio-electronics. His work has been featured on CNN and the BBC and in the New York Times. Dr. Coleman has been selected as a National Academy of Engineering Gilbreth Lecturer and a TEDMED speaker.
Rishidev Chaudhuri
(University of Texas)
Low- and high-dimensional computations in neural circuits

Date: March 6, 2018
Description:
Computation in the brain is distributed across large populations. Individual neurons are noisy and receive limited information but, by acting collectively, neural populations perform a wide variety of complex computations. In this talk I will discuss two approaches to understanding these collective computations.
First, I will introduce a method to identify and decode unknown variables encoded in the activity of neural populations. While the number of neurons in a population may be large, if the population encodes a low-dimensional variable there will be low-dimensional structure in the collective activity, and the method aims to find and parameterize this low-dimensional structure. In the rodent head direction (HD) system, the method reveals a nonlinear ring manifold and allows encoded head direction and the tuning curves of single cells to be recovered with high accuracy and without prior knowledge of what the neurons were encoding. When applied to sleep, it provides mechanistic insight into the circuit construction of the ring manifold and, during non-REM sleep, reveals a new dynamical regime possibly linked to memory consolidation in the brain.
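As a toy version of this manifold-discovery idea, the sketch below simulates an idealized head-direction population (cosine tuning is my assumption; the talk's method fits the manifold without it) and recovers both the ring and the encoded angle from activity alone.

```python
# Toy ring-manifold recovery: the two leading principal components of a
# head-direction-like code trace out a circle, and the angle on that circle
# decodes the latent variable, up to rotation and reflection.
import numpy as np

rng = np.random.default_rng(1)
n_neurons, T = 60, 2000
pref = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)   # preferred angles
theta = np.cumsum(rng.normal(0, 0.05, T)) % (2 * np.pi)       # drifting true angle

rates = np.exp(2.0 * np.cos(theta[:, None] - pref[None, :]))  # assumed tuning
spikes = rng.poisson(0.05 * rates)                            # noisy counts

X = spikes - spikes.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
proj = X @ Vt[:2].T
decoded = np.arctan2(proj[:, 1], proj[:, 0])

# Compare to truth, allowing an arbitrary rotation/reflection of the ring.
best = np.inf
for sign in (1, -1):
    offset = np.angle(np.mean(np.exp(1j * (sign * decoded - theta))))
    resid = np.angle(np.exp(1j * (sign * decoded - theta - offset)))
    best = min(best, resid.std())
print(f"decoding error std: {best:.2f} rad")
```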
I will then address the problem of understanding genuinely high-dimensional computations in the brain, where low-dimensional structure does not exist. Modern work studying distributed algorithms on large sparse networks may provide a compelling approach to neural computation, and I will use insights from recent work on error correction to construct a novel architecture for high-capacity neural memory. Unlike previous models, which yield either weak (linear) increases in capacity with network size or exhibit poor robustness to noise, this network is able to store a number of states exponential in network size while preserving noise robustness, thus resolving a long-standing theoretical question.
These results demonstrate new approaches for studying neural representations and computation across a variety of scales, both when low-dimensional structure is present and when computations are high-dimensional.
Scott Linderman
(Columbia University)
Discovering structure in neural and behavioral data

Date: February 20, 2018
Description:
New recording technologies are transforming neuroscience, allowing us to precisely quantify neural activity, sensory stimuli, and natural behavior. How can we discover simplifying structure in these high-dimensional data and relate these domains to one another? I will present my work on developing statistical tools and machine learning methods to answer this question. With two examples, I will show how we can leverage prior knowledge and theories to build models that are flexible enough to capture complex data yet interpretable enough to provide new insight. Alongside these examples, I will discuss the Bayesian inference algorithms I have developed to fit such models at the scales required by modern neuroscience. First, I will develop models to study global brain states and recurrent dynamics in the neural activity of C. elegans. Then, I will show how similar ideas apply to data that, on the surface, seem very different: movies of freely behaving larval zebrafish. In both cases, these models reveal how complex patterns may arise by switching between simple states, and how state changes may be influenced by internal and external factors. These examples illustrate a framework for harnessing recent advances in machine learning, statistics, and neuroscience. Prior knowledge and theory serve as the main ingredients for interpretable models, machine learning methods lend additional flexibility for complex data, and new statistical inference algorithms provide the means to fit these models and discover structure in neural and behavioral data.
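The "complex patterns from switching between simple states" idea can also be seen in miniature on the generative side. The sketch below (all parameters invented for illustration) samples from a two-state switching linear dynamical system: each discrete state indexes its own linear dynamics, and the switches produce rich trajectories from simple pieces.

```python
# Generative sketch of a switching linear dynamical system (SLDS):
# a sticky discrete state selects between two simple linear dynamics.
import numpy as np

rng = np.random.default_rng(2)
A = [np.array([[0.99, -0.10], [0.10, 0.99]]),   # state 0: slow rotation
     np.array([[0.90, 0.00], [0.00, 0.90]])]    # state 1: decay to the origin
Pz = np.array([[0.98, 0.02], [0.02, 0.98]])     # sticky state transitions

T = 500
z = np.zeros(T, dtype=int)
x = np.zeros((T, 2))
x[0] = [1.0, 0.0]
for t in range(1, T):
    z[t] = rng.choice(2, p=Pz[z[t - 1]])
    x[t] = A[z[t]] @ x[t - 1] + rng.normal(0, 0.02, 2)

# "Neural activity" is a noisy linear readout of the latent trajectory.
C = rng.normal(0, 1, (20, 2))
y = x @ C.T + rng.normal(0, 0.1, (T, 20))
print("number of state switches:", int((np.diff(z) != 0).sum()))
```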
John Cunningham
(Columbia University)
Computational structure in large-scale neural population recordings

Date: February 15, 2018
Description:
One central challenge in neuroscience is to understand how neural populations represent and produce the remarkable computational abilities of our brains. Indeed, neuroscientists increasingly form scientific hypotheses that can only be studied at the level of the neural population, and exciting new large-scale datasets have followed. Capitalizing on this trend, however, requires two major efforts from applied statistical and machine learning researchers: (i) methods for finding structure in this data, and (ii) methods for statistically validating that structure. First, I will review our work that has used factor modeling and dynamical systems to advance understanding of the computational structure in the motor cortex of primates and rodents. Second, while these and the broader class of such methods are promising, they are also perilous: novel analysis techniques do not always consider the possibility that their results are an expected consequence of some simpler, already-known feature of the data. I will present two works that address this growing problem, the first of which derives a tensor-variate maximum entropy distribution with user-specified moment constraints along each mode. This distribution forms the basis of a statistical hypothesis test, and I will use this test to answer two active debates in the neuroscience community over the triviality of structure in the motor and prefrontal cortices. I will then discuss how to extend this maximum entropy formulation to arbitrary constraints using deep neural network architectures in the style of implicit generative modeling.
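To make the validation point concrete, here is a deliberately simplified stand-in (not the tensor maximum entropy method itself) for a structure-triviality test: per-neuron permutations preserve each neuron's marginal response distribution while destroying shared population structure, and the data's leading-PC variance is compared against that null. The data and statistic are invented.

```python
# Toy surrogate test: is the leading-PC variance of a neurons-by-conditions
# matrix larger than expected from single-neuron marginals alone?
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_conditions = 40, 100

# Synthetic data with genuine shared structure: one latent factor.
latent = rng.normal(0, 1, n_conditions)
loadings = rng.normal(0, 1, n_neurons)
data = np.outer(latent, loadings) + rng.normal(0, 1, (n_conditions, n_neurons))

def top_pc_variance(X):
    Xc = X - X.mean(axis=0)
    s = np.linalg.svd(Xc, compute_uv=False)
    return s[0] ** 2 / (s ** 2).sum()

stat = top_pc_variance(data)
null = []
for _ in range(500):
    # Permute conditions independently per neuron: marginals kept, structure gone.
    surrogate = np.array([rng.permutation(data[:, i]) for i in range(n_neurons)]).T
    null.append(top_pc_variance(surrogate))
pval = (np.sum(np.array(null) >= stat) + 1) / (len(null) + 1)
print(f"top-PC variance fraction: {stat:.2f}, permutation p-value: {pval:.3f}")
```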
Gal Mishne
(Yale University)
Revealing multiscale structures of neuronal networks

Date: February 13, 2018
Description:
Experimental advances in neuroscience enable the acquisition of increasingly large-scale, high-dimensional and high-resolution neuronal and behavioral datasets; however, addressing the full spatiotemporal complexity of these datasets poses significant challenges for data analysis and modeling. I present a new geometric analysis framework, and demonstrate its application to the analysis of calcium imaging from the primary motor cortex in a learning mammal. To extract neuronal regions of interest, we develop Local Selective Spectral Clustering, a new method for identifying high-dimensional overlapping clusters while disregarding noisy clutter. We demonstrate the capability of this method to extract hundreds of detailed somatic and dendritic structures with demixed and denoised time-traces. Next, we propose to represent and analyze the extracted time-traces as a rank-3 tensor of neurons, time-frames and trials. We introduce a nonlinear data-driven method for tensor analysis and organization, which infers the coupled multi-scale structure of the data. In analyzing neuronal activity from the motor cortex we identify, in an unsupervised manner, functional subsets of neurons, activity patterns associated with particular behaviors, and long-term temporal trends. This general framework can be applied to other biomedical datasets, in neuroscience and beyond, such as fMRI, EEG and BMI.
Joint work with Ronen Talmon, Ron Meir, Jackie Schiller, Maria Lavzin, Uri Dubin and Ronald Coifman.
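A minimal sketch of the clustering step, using ordinary spectral clustering (scikit-learn is assumed available) on a synthetic two-source movie, rather than the Local Selective Spectral Clustering of the talk; the footprints, time courses, and noise level are invented.

```python
# Toy ROI extraction: cluster pixels on their temporal correlations to
# separate two overlapping "cells" in a synthetic calcium movie.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(4)
T, n_pix = 500, 100

# Two footprints overlapping in pixels 40-59, each with its own time course.
f1 = np.zeros(n_pix); f1[:60] = 1.0
f2 = np.zeros(n_pix); f2[40:] = 1.0
t1 = np.maximum(rng.normal(0, 1, T), 0)
t2 = np.maximum(rng.normal(0, 1, T), 0)
movie = np.outer(t1, f1) + np.outer(t2, f2) + 0.3 * rng.normal(0, 1, (T, n_pix))

corr = np.corrcoef(movie.T)            # pixel-by-pixel temporal similarity
affinity = np.clip(corr, 0, None)      # keep nonnegative similarities
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print("cluster sizes:", np.bincount(labels))
```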
Ashok Litwin-Kumar
(Columbia University)
Randomness and structure in neural representations for learning

Date: February 6, 2018
Description:
Synaptic connectivity varies widely across neuronal types, from the hundreds of thousands of connections received by cerebellar Purkinje cells to the handful received by granule cells. In this talk, I will discuss recent work that addresses what determines the optimal number of connections for a given neuronal type and what this means for neural computation. The theory I will describe predicts optimal values for the number of inputs to cerebellar granule cells and Kenyon cells of the Drosophila mushroom body, shows that random wiring can be optimal for certain computations, and also provides a functional explanation for why the degrees of connectivity in cerebellum-like and cerebrocortical systems are so different. I will also discuss applications of the theory to an analysis of a complete electron-microscopy reconstruction of the larval Drosophila mushroom body and to recordings of cortical activity in mice performing a decision-making task. These analyses point toward new ways of analyzing neural data and provide a framework for understanding the neural representations that support learned behaviors.
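A back-of-the-envelope simulation of the central quantity: how the number of inputs K per expansion-layer neuron affects the dimensionality of the resulting representation, measured by the participation ratio. The setup below (random weights, ReLU with a sparsity-matched threshold) is my toy construction, not the talk's analysis.

```python
# Dimensionality of a sparse random expansion as a function of in-degree K.
import numpy as np

rng = np.random.default_rng(5)
n_inputs, n_granule, n_patterns = 50, 1000, 500
patterns = rng.normal(0, 1, (n_patterns, n_inputs))

def representation_dim(K):
    # Each granule-like unit samples K random inputs with random weights,
    # then applies a threshold so roughly 20% of responses are active.
    resp = np.zeros((n_patterns, n_granule))
    for j in range(n_granule):
        idx = rng.choice(n_inputs, K, replace=False)
        w = rng.normal(0, 1 / np.sqrt(K), K)
        drive = patterns[:, idx] @ w
        resp[:, j] = np.maximum(drive - np.quantile(drive, 0.8), 0)
    eig = np.linalg.eigvalsh(np.cov(resp.T))
    return eig.sum() ** 2 / (eig ** 2).sum()   # participation ratio

for K in [1, 2, 4, 8, 16, 32]:
    print(f"K={K:2d}  dimensionality ~ {representation_dim(K):.1f}")
```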
Laurence Aitchison
Using modern, deep Bayesian inference to analyse neural data and understand neural systems

Date: January 30, 2018
Description:
I consider how Bayesian inference can address the analytical and theoretical challenges presented by increasingly complex, high-dimensional neuroscience datasets.
With the advent of Bayesian deep neural networks, GPU computing and automatic differentiation, it is becoming increasingly possible to perform large-scale Bayesian analyses of data, simultaneously inferring complex biological phenomena and experimental confounds. I present a proof-of-principle: inferring causal connectivity from an all-optical experiment combining calcium imaging and cell-specific optogenetic stimulation. The model simultaneously infers spikes from fluorescence, models low-rank activity and the extent of off-target optogenetic stimulation, and explicitly gives uncertainty estimates for the inferred connection matrix.
Further, there is considerable evidence that humans and animals use Bayes' theorem to reason optimally about uncertainty. I show that one particular Bayesian inference method — sampling — emerges naturally when combining classical sparse-coding models with a biophysically motivated energetic cost of achieving reliable responses. We understand these results theoretically by noting that the resulting combined objective approximates the objective of a classical Bayesian method: variational inference. Given this strong theoretical underpinning, we are able to extend the model to multi-layered networks modelling MNIST digits, recurrent networks, and fast recurrent networks.
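The "inference by sampling" idea can be illustrated with the most generic sampler imaginable. The sketch below (a plain Metropolis-Hastings chain on a one-dimensional Gaussian model, standing in for the neurally implemented sampler of the talk) recovers a posterior whose exact form is known for comparison.

```python
# Inference by sampling: Metropolis-Hastings on a conjugate Gaussian model.
import numpy as np

rng = np.random.default_rng(6)

# Model: z ~ N(0, 1) (prior), x | z ~ N(z, 0.5^2); observe x = 1.2.
x_obs, sigma_lik = 1.2, 0.5
def log_post(z):
    return -0.5 * z**2 - 0.5 * ((x_obs - z) / sigma_lik) ** 2

z, samples = 0.0, []
for _ in range(20000):
    prop = z + rng.normal(0, 0.5)
    if np.log(rng.uniform()) < log_post(prop) - log_post(z):
        z = prop
    samples.append(z)
samples = np.array(samples[2000:])   # discard burn-in

# Exact conjugate posterior: precision 1 + 1/sigma^2, mean scaled accordingly.
post_var = 1 / (1 + 1 / sigma_lik**2)
print(f"sampled mean {samples.mean():.3f} vs exact {x_obs * post_var / sigma_lik**2:.3f}")
print(f"sampled var  {samples.var():.3f} vs exact {post_var:.3f}")
```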
Tatyana Sharpee
How theory can be used to find principles of neural circuits and behavior

Date: January 17, 2018
Description:
Neural circuits are notorious for the complexity of their organization, and animal behavior is similarly diverse. I will discuss several different ways in which theory and statistics can contribute to organizing this complexity. First, I will discuss two theorems that together provide a framework that, in principle, makes it possible to quantify how useful a given sensory system is to an animal in the context of its motor repertoire. Second, I will describe how metabolic constraints drive specialization and diversification between neuronal cell types. This same theory can also be used to systematize our understanding of diversity within ion channels, neuropeptides, and other signaling molecules. Third, I will show how biological constraints added to large-scale models of neural circuits produce interpretable descriptions of mid-level sensory stimuli. These results reveal new organizing principles for feature selectivity in the secondary visual area V2 and drive new lines of experimental inquiry. Finally, I will discuss a tentative organizing principle for closing the loop between perception and action, and its early tests using C. elegans navigation.
Chenghua Gu
(Harvard University)
Interactions between nervous and vascular systems in the CNS

Date: January 8, 2018
Description:
My laboratory studies the interface of the nervous and vascular systems. Proper brain function depends on two unique features of the brain vasculature. First, brain blood vessels form the blood-brain barrier (BBB), which maintains the brain’s chemical milieu and ensures proper neural function. Second, neural activity rapidly increases local blood flow to meet moment-to-moment changes in regional brain energy demand – a process called neurovascular coupling. I will share with you our current and future work to understand the molecular and cellular mechanisms underlying BBB and neurovascular coupling.
Uri Eden
State-space modeling of neural spiking systems

Date: June 5, 2017
Description:
Although it is well known that brain areas receive, process and transmit information via sequences of sudden, stereotyped electrical impulses, called action potentials or spikes, most analyses of neural data ignore the localized nature of these events. The theory of point processes offers a unified, principled approach to modeling the firing properties of spiking neural systems and assessing goodness-of-fit between a neural model and observed spiking data. We develop a point process modeling framework and state-space estimation algorithms to describe and track the evolution of dynamic representations from individual neurons and neural ensembles. This allows us to derive a toolbox of estimation algorithms and adaptive filters to address questions of static and dynamic encoding and decoding.
These methods will be illustrated through two examples. First, we will model spatially specific spiking activity in the rat hippocampus and use a point process filter to reconstruct the animal’s movement trajectory during a spatial navigation task. Next, we will develop a sequential importance sampling procedure for estimating biophysical parameters of conductance-based neural models using only the resulting spike times. Issues of model identification and misspecification will also be discussed.
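A minimal sketch of the first example's decoding idea: a discretized one-dimensional track, Gaussian place fields, Poisson spiking, and a recursive Bayesian filter with a random-walk prior. All parameter values are mine, for illustration only.

```python
# Toy point-process position decoder: predict with a random-walk kernel,
# update with the Poisson likelihood of the observed spike counts.
import numpy as np

rng = np.random.default_rng(7)
n_bins, n_cells, T, dt = 100, 30, 1000, 0.01
track = np.linspace(0, 1, n_bins)
centers = rng.uniform(0, 1, n_cells)
tuning = 20 * np.exp(-((track[:, None] - centers[None, :]) ** 2) / (2 * 0.05**2)) + 0.5

# Simulate a random walk on the track and the resulting spike counts.
pos = np.clip(np.cumsum(rng.normal(0, 0.005, T)) + 0.5, 0, 1)
pos_idx = np.clip((pos * (n_bins - 1)).round().astype(int), 0, n_bins - 1)
spikes = rng.poisson(tuning[pos_idx] * dt)          # shape (T, n_cells)

kernel = np.exp(-((track[:, None] - track[None, :]) ** 2) / (2 * 0.02**2))
kernel /= kernel.sum(axis=1, keepdims=True)
belief = np.ones(n_bins) / n_bins
decoded = np.zeros(T, dtype=int)
for t in range(T):
    belief = kernel @ belief                                   # predict
    loglik = spikes[t] @ np.log(tuning * dt).T - (tuning * dt).sum(axis=1)
    belief *= np.exp(loglik - loglik.max())                    # update
    belief /= belief.sum()
    decoded[t] = belief.argmax()

print("mean decoding error (track units):", np.abs(track[decoded] - pos).mean())
```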
Terry Sejnowski
(UCSD)
Making Waves in the Brain

Date: May 22, 2017
Description:
Traveling waves of electrical activity have been observed in the hippocampus and cortex, but their origin and function are unknown. I will present two recent studies: 10-14 Hz circular traveling waves in human cortex during sleep spindles, and 4-10 Hz cortical traveling waves in awake, behaving monkeys. Together they point toward a new class of globally organized activity in the brain.
Nicolas Brunel
(The University of Chicago)
Attractor dynamics in networks with learning rules inferred from data

Date: April 17, 2017
Description:
The attractor neural network (ANN) scenario is a popular framework for memory storage in association cortex, but there is still a large gap between these models and experimental data. In particular, the distributions of the learned patterns and the learning rules are typically not constrained by data. In primate IT cortex, the distribution of neuronal responses is close to lognormal, at odds with the bimodal distributions of firing rates used in the vast majority of theoretical studies. Furthermore, we recently showed that differences between the statistics of responses to novel and familiar stimuli are consistent with a Hebbian learning rule whose dependence on post-synaptic firing rate is non-linear and dominated by depression. We investigated the dynamics of a network model in which both the distributions of the learned patterns and the learning rules are inferred from data. Using both mean-field theory and simulations, we show that this network exhibits attractor dynamics. Furthermore, we show that the storage capacity of networks with learning rules inferred from data is close to the optimal capacity in the space of unsupervised Hebbian rules. These networks lead to unimodal distributions of firing rates during the delay period, consistent with data from delayed-match-to-sample experiments. Finally, we show there is a transition to a chaotic phase at strong coupling strength, with an extensive number of chaotic attractor states correlated with the stored patterns.
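For readers who want the classical baseline in code: the sketch below is a textbook Hopfield network with binary patterns and a symmetric Hebbian rule, exactly the bimodal setup the talk replaces with data-inferred patterns and learning rules.

```python
# Classical Hopfield recall: store binary patterns with a Hebbian rule,
# then recover one from a corrupted cue via asynchronous updates.
import numpy as np

rng = np.random.default_rng(8)
N, P = 500, 20                       # well below the ~0.14*N capacity limit
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

# Corrupt 20% of pattern 0, then let the dynamics clean it up.
s = patterns[0].copy()
flip = rng.choice(N, N // 5, replace=False)
s[flip] *= -1
for _ in range(5):
    for i in rng.permutation(N):
        s[i] = 1 if W[i] @ s >= 0 else -1

print("overlap with stored pattern:", (s @ patterns[0]) / N)
```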
Todd Coleman
(UCSD)
Neuro-Gastroenterologic Engineering

Date: February 21, 2017
Description:
The discoordination between the central and autonomic nervous systems is increasingly being identified as playing a key role in neurological, psychiatric, and gastroenterologic problems; the causal role that the enteric nervous system may play in Parkinson’s disease serves as an example. Traditionally, however, the brain and the GI system have been studied scientifically and treated clinically as separate systems, and there is a dearth of engineering approaches to better measure, characterize, and modulate their inter-relationships. In this talk, we will discuss our recent contributions to address this unmet need. Specifically, we will discuss our recent development of the high-resolution electrogastrogram, an approach to non-invasively measure and statistically characterize the propagation velocity and propagation patterns consistent with gastric serosal slow wave myoelectric activity, which had not been accomplished until now. We will also highlight our recent development and use of directed information graphs, a new class of probabilistic graphical models that provides minimal descriptions of causal relationships in multiple time series. To enable the recording of multiple physiologic time series simultaneously and unobtrusively, we will lastly discuss our development of multi-electrode arrays embedded within skin-mounted adhesives for ambulatory monitoring. We will highlight how all of these methods and technologies are being used within the context of neuro-gastroenterologic engineering. We will conclude with a vision of how advancing this field with principles of applied mathematics, engineering, and synthetic biology has the potential to improve health, reduce healthcare costs, and advance science.
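As a much simpler stand-in for directed information graphs, the sketch below runs a lag-regression (Granger-style) test on two coupled synthetic time series, asking whether one series helps predict the other beyond its own past; the coupling and coefficients are invented.

```python
# Granger-style directionality check on two coupled AR(1) series (x drives y).
import numpy as np

rng = np.random.default_rng(9)
T = 2000
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def residual_var(target, predictors):
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    return np.var(target - predictors @ beta)

own = np.column_stack([y[:-1], np.ones(T - 1)])
full = np.column_stack([y[:-1], x[:-1], np.ones(T - 1)])
gc_xy = np.log(residual_var(y[1:], own) / residual_var(y[1:], full))

own = np.column_stack([x[:-1], np.ones(T - 1)])
full = np.column_stack([x[:-1], y[:-1], np.ones(T - 1)])
gc_yx = np.log(residual_var(x[1:], own) / residual_var(x[1:], full))

print(f"x -> y: {gc_xy:.3f}   y -> x: {gc_yx:.3f}")  # expect x->y >> y->x
```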
Xiao-Jing Wang
(New York University)
Working memory and decision-making: From microcircuits to the global brain

Date: January 30, 2017
Description:
In this talk, I will first discuss “cognitive-type” microcircuits capable of working memory and decision-making computation. In particular, I will briefly summarize a model characterized by slow (NMDA-receptor dependent) recurrent attractor dynamics, its experimental tests, and its applications to shed light on deficits associated with psychiatric disorders. This line of research has led us to study multi-region brain systems based on mesoscopic connectomes and physiological experiments. We have developed large-scale cortical models of the macaque monkey and mouse. By taking into account cortical heterogeneity, the model naturally gives rise to a hierarchy of timescales and, endowed with the laminar structure of the cortex, it captures frequency-dependent interactions between bottom-up and top-down processes. Moreover, in a complex brain system, the routing of information between areas must be flexibly gated according to behavioral demands. We proposed such a gating mechanism with a disinhibitory circuit motif implemented by three subtypes of (PV+, SOM+ and VIP+) inhibitory neurons, and I will report a recent finding that the relative distribution of these three interneuron classes varies markedly across the whole mouse cortex. Circuit modeling across levels, combined with training multi-module recurrent networks, represents a promising approach to elucidating the high-dimensional dynamics and functions of the global brain.
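The slow-recurrence mechanism is easy to demonstrate in a one-population rate model (parameters mine, far simpler than the spiking circuits of the talk): below a critical recurrent weight, a brief cue decays away; above it, the network holds a persistent "memory" state after the cue is gone.

```python
# Persistent activity from recurrent excitation in a minimal rate model.
import numpy as np

def simulate(w_rec, tau=0.1, dt=0.001, T=3.0):
    steps = int(T / dt)
    r = np.zeros(steps)
    for t in range(1, steps):
        stim = 1.0 if 0.5 < t * dt < 0.7 else 0.0   # brief cue at 0.5-0.7 s
        drive = w_rec * r[t - 1] + stim
        f = np.tanh(np.maximum(drive, 0))            # saturating f-I curve
        r[t] = r[t - 1] + dt / tau * (-r[t - 1] + f)
    return r

for w in [0.8, 1.5]:   # weak vs strong recurrence
    r = simulate(w)
    print(f"w_rec={w}: rate at trial end = {r[-1]:.2f}")
```

With w_rec = 0.8 the only stable state is quiescence, so activity returns to baseline; with w_rec = 1.5 a second stable fixed point of r = tanh(w_rec r) appears and the cue-evoked activity self-sustains.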
John W. Krakauer
(Johns Hopkins)
Rethinking neurorehabilitation of stroke

Date: May 20, 2016
Description:
Dr. Krakauer received his bachelor’s and master’s degrees from Cambridge University, and his medical degree from Columbia University College of Physicians and Surgeons, where he was elected to the Alpha Omega Alpha Medical Honor Society. After completing an internship in Internal Medicine at The Johns Hopkins Hospital, he returned to Columbia University for his residency in Neurology at the Neurological Institute of New York. He subsequently completed a research fellowship in motor control in the Center of Neurobiology and Behavior at Columbia and a clinical fellowship in stroke at the Neurological Institute at Columbia University Medical Center. He is currently Professor of Neurology and Neurological Sciences at Johns Hopkins University School of Medicine, where he directs the Brain, Learning, Animation, and Movement Lab (BLAM). He is a neurologist who sees patients with stroke and other cerebrovascular diseases. His research investigates motor control and learning in people, stroke recovery, and neuro-rehabilitation.
This event is sponsored by the Stanford Neurosciences Institute Big Ideas Brain Machine Interface and SCAN.
James Fitzgerald
(Harvard University)
Towards the principles of neural circuit mechanisms of motion-guided behavior

Date: April 25, 2016
Description:
Understanding general principles that underlie the functional organization of biological systems is a central aim of biophysics. Here I will discuss how lessons from biophysics can help us understand the neural basis of the optomotor response, a behavior whereby animals orient and translate themselves in response to moving sensory environments. I’ll first emphasize how optimality principles can help us understand the computation and structure of peripheral visual circuits for motion processing in flies, zebrafish, and humans. I’ll then proceed to central brain circuits in zebrafish to discuss how model reduction techniques can reveal interpretable models for the sensorimotor transformations underlying specific motion-guided behaviors. Each of these topics is generally relevant for understanding how brains flexibly generate a broad range of ethological behaviors.
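For context on the peripheral motion circuits discussed, here is the textbook Hassenstein-Reichardt correlator in a few lines (a standard model, not the talk's own analysis): two spatially offset inputs, delayed copies of each, multiplied and subtracted to yield a direction-selective response.

```python
# Hassenstein-Reichardt correlator: opposite-signed outputs for opposite
# directions of a drifting grating.
import numpy as np

def reichardt(stimulus, delay=5):
    # stimulus: (time, space) luminance; compare adjacent spatial samples.
    left, right = stimulus[:, :-1], stimulus[:, 1:]
    d_left = np.roll(left, delay, axis=0)    # delayed copies
    d_right = np.roll(right, delay, axis=0)
    return (d_left * right - left * d_right).mean()

t = np.arange(200)[:, None]
x = np.arange(40)[None, :]
rightward = np.sin(0.3 * x - 0.1 * t)
leftward = np.sin(0.3 * x + 0.1 * t)
print("rightward grating ->", reichardt(rightward))
print("leftward grating  ->", reichardt(leftward))
```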
Wei Zhang
(UCSF)
Mechanosensation: From ion channels to animal behaviors

Date: March 7, 2016
Description:
Mechanosensation, the sensing of mechanical force, employs mechanosensitive ion channels to detect touch, pain, body movement, sound and other cues. In this talk, I will describe our work using Drosophila sensory neurons as a model to elucidate the molecular and cellular basis of mechanosensory behaviors. I will show the neural structure for gentle touch sensation in fly larvae, and the mechanosensitive channel that mediates its extraordinary sensitivity. I will then describe the intriguing properties of this channel, and how its unique features enable mechanical-to-electrical transduction. I will also discuss how this research strategy could be applied to other, highly diverse mechanosensory behaviors.
Robert Carrillo
(Caltech)
The Fly “Dpr-ome” in nervous system development and synaptic connectivity

Date: February 17, 2016
Description:
The development of neural circuits requires a highly regulated series of steps culminating in the precise connectivity seen in the mature circuit. One hypothesis, proposed by Roger Sperry and others, is that synaptic partners express complementary sets of cell surface proteins (CSPs) that allow for their specific interaction. We have defined a network of interacting Drosophila CSPs, the “Dpr-ome”, in which a 21-member IgSF subfamily, the Dprs, binds to a 9-member subfamily, the DIPs (in collaboration with Christopher Garcia’s lab at Stanford and Hugo Bellen’s group at Baylor). Preliminary studies of 18 of the 30 genes in the Dpr-DIP subfamilies show that they are expressed in small and unique subsets of neurons in the Drosophila nervous system, supporting the idea that neuronal surface labels control connectivity. We focused on one of these interacting pairs: Dpr11 and DIP-γ. In the larval neuromuscular system, Dpr11 and DIP-γ are expressed pre- and postsynaptically and are required for proper development of the presynaptic terminal. dpr11 and DIP-γ mutants also show an impairment in spontaneous neurotransmission, and genetic interaction experiments reveal a role in modulating BMP signaling. In the visual system, dpr11 is selectively expressed by a subtype of R7 photoreceptors. Their primary synaptic targets, Dm8 amacrine neurons, express DIP-γ. In dpr11 or DIP-γ mutants, the terminals of these R7 photoreceptors extend beyond their normal termination zones in the medulla. In addition, survival of Dm8 neurons is dependent on DIP-γ. Recent data focused on a different hub of the Dpr-ome suggest that other members function in motor neuron target selection. Our findings suggest that Dpr-DIP interactions are important determinants of synaptic development and circuit formation.
Reza Vafabakhsh
(University of California, Berkeley)
Quantitative single molecule analysis of conformational dynamics in a GPCR signaling system

Date: February 9, 2016
Description:
Metabotropic glutamate receptors (mGluRs) are members of the class C family of GPCRs and are responsible for the regulation of neuronal excitability and synaptic transmission across the CNS. In this seminar, I will present our work on developing quantitative methods to probe the conformation of mammalian mGluRs at the single-protein level, and I will describe a general model for the activation of mGluRs. I will then discuss the quantitative analysis of the complexity of the mGluR activation process among structurally similar subtypes. Next, I will present our results on quantifying the conformational dynamics of a potassium channel, a model downstream effector of GPCR signaling. Finally, I will discuss how this experimental strategy should be widely applicable to quantitatively studying conformational dynamics in GPCRs and other membrane protein complexes.
Alexander Pollen
(UCSF School of Medicine)
Development and evolution of the human cerebral cortex

Date: February 11, 2016
Description:
What are the genetic and developmental changes underlying the expansion of the human brain over the last six million years? I will discuss our recent identification of a human-specific mutation that increases brain size and improves learning behavior. Next, I will highlight how we have used single cell genomics to characterize the neural stem cell populations that contribute to the expansion of the cerebral cortex during development. Finally, I will describe ongoing work establishing a new model system for studying human-specific biology.
Julia Kaltschmidt
(Sloan Kettering Institute)
Wiring the nervous system: Interneuron circuitry in the mouse spinal cord

Date: February 4, 2016
Description:
The regulation of information flow by local inhibitory microcircuits has a fundamental role in shaping animal behavior. In the mammalian spinal cord GABAergic inhibitory interneurons serve key functions in sensory-motor transformation. One class of GABAergic interneurons, termed GABApre neurons, forms axo-axonic synapses with the terminals of proprioceptive sensory afferents and exerts an inhibitory constraint on sensory processing. I am using the GABApre interneuron circuitry to understand (i) how distinct neuronal populations are generated, (ii) how these distinct neuronal populations recognize and choose their correct synaptic partners from among different available targets, and (iii) how postsynaptic signals induce the differentiation of presynaptic terminals in service of balanced microcircuit function.
In the future, I expect to maintain a focus on multiple aspects of neuronal circuit biology, including: the intrinsic and extrinsic mechanisms directing synaptic connectivity; the nature and mechanisms of circuit adaptation in health and disease; and the functional organization of GABAergic interneurons in spinal locomotor circuits, sexual response circuitry and the enteric nervous system.
Naiara Akizu
(The Scripps Research Institute)
Protein homeostasis and cell-type-specific vulnerability in childhood neurodegeneration

Date: February 8, 2016
Description:
Childhood neurodegenerative disorders are heterogeneous and individually rare conditions. Yet collectively they represent one of the most common clinical problems in pediatric neurology, given that a large proportion of them are of unknown cause and treatment options are mostly non-existent. To uncover novel disease mechanisms, we focused on children with inherited cerebellar degeneration and ataxia. Exome sequencing on a cohort of 111 consanguineous families revealed several novel causative genes, AMPD2 and SNX14 being the most recurrently mutated. We found that mutations in SNX14 lead to lysosome-autophagosome dysfunction, to which cerebellar cells are particularly sensitive. Mutations in AMPD2 lead to a reduction of energy for protein synthesis and a potentially preventable loss of brainstem and cerebellar structures. Although both genes are widely expressed, inactivation of AMPD2 or SNX14 mostly perturbs specific neuronal populations, and both have an impact on protein homeostasis. Thus, our goal is to better understand how the regulation of protein homeostasis may determine selective neuronal vulnerability. Indeed, specific neuronal vulnerability to ubiquitous stimuli is common in many neurological diseases. This emphasizes the need to better characterize the particular features that distinguish each neuronal population in health and disease, which will ultimately lead to the discovery of specific therapeutic targets.
Erich Jarvis
(Duke University Medical Center)
Dissecting the molecular mechanisms of vocal learning and spoken language

Date: October 12, 2015
Description:
My long-term goal is to decipher the molecular mechanisms that construct, modify, and maintain neural circuits for complex behavioral traits. One such trait is vocal learning, which is critical for song in song-learning birds and spoken language in humans. Remarkably, although these lineages are only distantly related, we found that song-learning birds (songbirds, parrots, and hummingbirds) and humans have convergent forebrain pathways that control the acquisition and production of learned sounds. This convergent anatomy and behavior is associated with convergent changes in multiple genes that control neural connectivity and brain development, some of which, when mutated, are associated with speech deficits. Non-human primates and vocal non-learning birds have limited or no such forebrain vocal pathways, yet possess forebrain pathways for learning and production of other motor behaviors. To explain these findings, I propose a motor theory of vocal learning origin, in which brain pathways for vocal learning evolved by duplication of an ancestral motor learning pathway. Once established, a vocal learning circuit functions similarly to the adjacent motor learning circuits, but with some divergences in neural connectivity. To test this hypothesis, we are attempting to genetically engineer brain circuits for vocal learning. These experiments should prove useful in elucidating basic mechanisms of speech and other complex behaviors, as well as their pathologies and repair.
Further Information:
Pfenning AR, Hara E, Whitney O, Rivas MR, Wang R, et al., & Jarvis ED. Convergent transcriptional specializations in the brains of humans and song learning birds. (2014) Science 346 (6215): 1333 & online 1256846-1 to -13.
Whitney O, Pfenning AR, Howard JT, Blatti CA, et al., West AE, & Jarvis ED. Core and region enriched gene expression networks of behaviorally-regulated genes and the singing genome. (2014) Science 346 (6215): 1334 & online 1256780-1 to -11.
Jarvis ED, Mirarab S, Aberer AJ, Li B, et al., Gilbert MTP, & Zhang G. Whole-genome analyses resolve early branches in the tree of life of modern birds. (2014) Science 346 (6215): 1320-1331.
Zhang G*, Li C, Li Q, et al., Jarvis ED*, Gilbert MTP*, & Wang J* (*co-corresponding authors). Comparative genomics reveals insights into avian genome evolution and adaptation. (2014) Science 346 (6215): 1311-1320.
Zhang G, Jarvis ED, & Gilbert MTP. A flock of genomes. (2014) Science 346 (6215): 1308.
Petkov CI & Jarvis ED. Birds, primates, and spoken language origins: behavioral phenotypes and neurobiological substrates. (2012) Front. Evol. Neurosci. 4(12):1-24.
Andrew Leifer
(Princeton University)
Whole-brain neural dynamics and behavior in freely moving nematodes

Date: February 17, 2015
Description:
How does a nervous system control an animal’s behavior? We are investigating this question by manipulating and monitoring neural activity of populations of neurons in the brain of a simple transparent organism and correlating the observed neural dynamics with behavior. I will present a suite of optical tools to control and record activity in the nematode C. elegans as it moves, including the first instrument to perform whole-brain calcium imaging with cellular resolution in an awake and unrestrained behaving animal. We have used these techniques to gain insight into the underlying neural mechanisms behind mechanosensation, forward locomotion, and the C. elegans escape response. These measurements are a critical first step for investigating neural coding of behavior, decision-making and the time evolution of internal brain states.