The hidden players in human reinforcement learning

Anne Collins

(Brown University)

Date: January 21, 2015


Classic models of reinforcement learning use a single computational principle to describe how we learn values and decision policies for well-defined states and actions. They account for a host of behavioral and neural data from the basal ganglia-dopamine system. However, additional mechanisms make human reinforcement learning efficient and flexible: they help us organize states and actions, and control how, what, and when to learn. I study the neuro-cognitive processes that contribute to human learning, and their interactions. In this talk, I will present one theme from this research program.
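The single computational principle referred to here is standardly the reward-prediction-error (delta-rule) update associated with dopaminergic signaling. A minimal sketch of that principle (an illustration of the textbook algorithm, not the speaker's code; the payoff probability and learning rate below are arbitrary choices):

```python
# Minimal sketch of classic reinforcement learning: a value estimate is
# nudged toward each received reward by a reward prediction error.
import random

def update_value(value, reward, alpha=0.1):
    """Delta-rule update with learning rate alpha."""
    prediction_error = reward - value  # the "teaching signal"
    return value + alpha * prediction_error

# Learn the value of one action that pays off 80% of the time (arbitrary example).
random.seed(0)
v = 0.0
for _ in range(1000):
    reward = 1.0 if random.random() < 0.8 else 0.0
    v = update_value(v, reward)
# v hovers near the true expected reward of 0.8
```

The same update, applied per state-action pair, yields the flat value tables that the abstract contrasts with the structured policies described next.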

In a series of behavioral experiments, I show that when learning stimulus-action mappings through reinforcement, adult human subjects tend to structure their policy into abstract rules, even when this does not afford any immediate advantage. Algorithmic and neural-network-level computational models predict that creating such abstract rule structure enables later generalization of new information and transfer of known rules to new contexts. Results from behavioral and EEG experiments support model predictions, with EEG evidence for hidden structure predicting individual differences in transfer.
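Why abstract rule structure enables transfer can be illustrated with a toy example (hypothetical, not the speaker's model): if a learner stores context-to-rule links rather than flat context-specific stimulus-action values, then linking one new context to a known rule transfers the rule's entire stimulus-action mapping at once.

```python
# Toy illustration of rule-based transfer (hypothetical names and mappings).
# Two abstract rules, each a complete stimulus -> action mapping.
rules = {
    "rule_A": {"circle": "left", "square": "right"},
    "rule_B": {"circle": "right", "square": "left"},
}

# The learner stores which rule each context uses, not per-context values.
context_to_rule = {"context_1": "rule_A", "context_2": "rule_B"}

# Transfer: a brand-new context is linked to an existing rule, and every
# stimulus in that context is then handled correctly with no further learning.
context_to_rule["context_3"] = "rule_A"
action = rules[context_to_rule["context_3"]]["square"]  # "right", for free
```

A flat learner would instead have to relearn each stimulus-action pair in context_3 separately, which is the generalization advantage the models predict.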

Throughout my research, computational models play a crucial role to identify the latent processes that jointly contribute to learning behavior, and to relate them to their neural substrates. I will show examples of combining computational modeling with genetic studies, functional imaging, and patient and developmental studies to understand interactive processes in learning.
