Deep [meta] learning robots: Building unsupervised, versatile agents


Chelsea Finn

(Google Brain)


Date: November 19, 2018


Machine learning excels primarily in settings where an engineer can first reduce the problem to a particular function and then collect a substantial number of labeled input-output pairs for that function. In stark contrast, humans can learn a range of versatile behaviors from streams of raw sensory data with minimal external instruction. How can we develop machines that learn more like the latter? In this talk, I will discuss recent work on enabling robots to learn versatile behaviors from raw sensory observations with minimal human supervision. In particular, I will show how we can use meta-learning to infer goals and intentions from humans with only a few positive examples, how robots can leverage large amounts of unlabeled experience to build and plan with visual predictive models of the world, and how we can combine elements of meta-learning and unsupervised learning to develop agents that propose their own goals and learn to achieve them.

Further Information:

Curriculum Vitae

[1] Finn, C., Abbeel, P., & Levine, S. (2017). Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. arXiv:1703.03400.
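Reference [1] introduces model-agnostic meta-learning (MAML): learn an initialization such that one (or a few) gradient steps on a new task yields a good task-specific model, by differentiating the post-adaptation loss through the inner gradient step. A minimal sketch on a toy scalar regression family follows; the task distribution, the closed-form meta-gradient, and all constants are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(w, x, y):
    # squared-error loss and its gradient for the scalar model f_w(x) = w * x
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(x * err)

def sample_task():
    # hypothetical task family: regress y = a * x for a randomly drawn slope a
    a = rng.uniform(-2.0, 2.0)
    x = rng.uniform(-1.0, 1.0, size=20)
    return x, a * x

w = 0.0                  # meta-parameter: the shared initialization
alpha, beta = 0.1, 0.01  # inner (adaptation) and outer (meta) learning rates

for it in range(2000):
    x_tr, y_tr = sample_task()
    _, g_tr = loss_grad(w, x_tr, y_tr)
    w_adapted = w - alpha * g_tr  # inner step: adapt to this task
    _, g_te = loss_grad(w_adapted, x_tr, y_tr)
    # meta-gradient through the inner step; for this quadratic loss,
    # d(w_adapted)/dw = 1 - alpha * 2 * mean(x^2) in closed form
    meta_grad = g_te * (1.0 - alpha * 2.0 * np.mean(x_tr ** 2))
    w -= beta * meta_grad  # outer step: improve the initialization

# after meta-training, adapt to a fresh task with a single gradient step
x_new, y_new = sample_task()
loss_before, g_new = loss_grad(w, x_new, y_new)
loss_after, _ = loss_grad(w - alpha * g_new, x_new, y_new)
```

In a real instance the scalar `w` is the full weight vector of a deep network and the meta-gradient is computed by automatic differentiation rather than the hand-derived factor used here.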

[2] Finn, C., & Levine, S. (2017). Deep Visual Foresight for Planning Robot Motion. arXiv:1610.00696.
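Reference [2] plans robot motion by rolling candidate action sequences through a learned action-conditioned video-prediction model and executing the sequence whose predicted outcome best matches the goal. A hedged sketch of that planning loop, with a toy point-mass `predict` standing in for the learned visual model (the dynamics, cost, and constants are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(state, action):
    # stand-in for the learned predictive model; in the real system this
    # would be an action-conditioned video-prediction network over images
    return state + 0.1 * action

def plan(state, goal, horizon=5, n_samples=256):
    # random-shooting model-predictive control: sample action sequences,
    # roll each through the model, return the first action of the best one
    seqs = rng.uniform(-1.0, 1.0, size=(n_samples, horizon, 2))
    costs = np.zeros(n_samples)
    for i, seq in enumerate(seqs):
        s = state
        for a in seq:
            s = predict(s, a)
            costs[i] += np.sum((s - goal) ** 2)  # predicted distance to goal
    return seqs[np.argmin(costs)][0]

# MPC loop: execute only the first planned action, then replan
state, goal = np.zeros(2), np.array([0.5, -0.3])
for _ in range(30):
    state = predict(state, plan(state, goal))
dist2 = float(np.sum((state - goal) ** 2))
```

Because only the first action of each plan is executed before replanning, model errors do not compound over the full horizon, which is what makes this scheme workable with imperfect learned models.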

Event Sponsor:
Stanford Center for Mind, Brain, Computation and Technology