AlphaGo, Hamiltonian descent, and the computational challenges of machine learning

Christopher Maddison

(Oxford University)


Date: January 22, 2019


Many computational challenges in machine learning involve three problems: optimization, integration, and fixed-point computation. These three can often be reduced to one another, so they may also provide distinct vantages on a single problem. In this talk, I present a small part of this picture through a discussion of my work on AlphaGo and on Hamiltonian descent methods. AlphaGo is the first computer program to defeat a world champion, Lee Sedol, at the board game of Go. My work laid the groundwork for the neural network components of AlphaGo and culminated in our Nature publication describing AlphaGo’s algorithm, at whose core lie these three problems. The work introducing Hamiltonian descent methods presents a family of gradient-based optimization algorithms inspired by the Monte Carlo literature and by recent work on reducing the problem of optimization to that of integration. These methods expand the class of convex functions on which fast linear convergence is achievable by using a nonstandard kinetic energy to condition the optimization.
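To make the last idea concrete, the following is a minimal, hypothetical sketch of an explicit Hamiltonian-descent-style update with a nonstandard (relativistic) kinetic energy k(p) = sqrt(||p||² + 1) − 1. The function names, step size, and damping constant are illustrative assumptions, not details taken from the talk; this is one simple instance of coupling a position update through the gradient of a chosen kinetic energy with a damped momentum update.

```python
import math

def grad_k(p):
    """Gradient of the relativistic kinetic energy k(p) = sqrt(||p||^2 + 1) - 1,
    i.e. p / sqrt(||p||^2 + 1). This bounds the step length, conditioning the
    dynamics on badly scaled objectives."""
    s = math.sqrt(sum(pi * pi for pi in p) + 1.0)
    return [pi / s for pi in p]

def hamiltonian_descent(grad_f, x0, eps=0.1, gamma=1.0, iters=1000):
    """Illustrative explicit discretization of damped Hamiltonian dynamics:
    momentum is updated with the objective gradient and damped, then the
    position moves along the gradient of the kinetic energy."""
    x = list(x0)
    p = [0.0] * len(x)
    delta = 1.0 / (1.0 + eps * gamma)  # damping factor from the friction term
    for _ in range(iters):
        g = grad_f(x)
        p = [delta * (pi - eps * gi) for pi, gi in zip(p, g)]
        dk = grad_k(p)
        x = [xi + eps * di for xi, di in zip(x, dk)]
    return x

# Usage: minimize the convex quadratic f(x) = 0.5 * ||x||^2 (gradient is x).
x_star = hamiltonian_descent(lambda x: list(x), [3.0, -2.0])
```

On this toy quadratic the iterates contract toward the minimizer at the origin; the point of the nonstandard kinetic energy is that an appropriate choice of k can recover fast linear convergence on convex functions where standard gradient descent cannot.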