How to Learn a Camera

Jon Barron



Date: December 4, 2019


Traditionally, the image processing pipelines of consumer cameras have been carefully designed, hand-engineered systems. But treating an imaging pipeline as something to be learned instead of something to be engineered has the potential benefits of being faster, more accurate, and easier to tune. Relying on learning in this fashion presents a number of challenges, such as fidelity, fairness, and data collection, which can be addressed through careful consideration of neural network architectures as they relate to the physics of image formation. In this talk I'll present recent work from Google's computational photography research team on using machine learning to replace traditional building blocks of a camera pipeline. I will present learning-based solutions for the classic tasks of denoising, white balance, and tone mapping, each of which uses a bespoke ML architecture designed around the specific constraints and demands of that task. By designing learning-based solutions around the structure provided by optics and camera hardware, we are able to produce state-of-the-art solutions to these three tasks in terms of both accuracy and speed.
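As a point of reference, the kind of hand-engineered building block that such learned methods aim to replace can be sketched in a few lines. Below is the classical gray-world white-balance heuristic (an illustrative baseline only, not the method presented in the talk), which assumes the average scene color is neutral gray and rescales each channel accordingly:

```python
import numpy as np

def gray_world_white_balance(img):
    """Classical gray-world white balance: assume the average scene
    color is gray, and scale each channel so its mean matches the
    overall mean intensity."""
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means (R, G, B)
    gains = means.mean() / means              # gains that equalize the means
    return np.clip(img * gains, 0.0, 1.0)

# A synthetic image with a reddish color cast (illustrative data).
rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(64, 64, 3)) * np.array([1.3, 1.0, 0.8])
img = np.clip(img, 0.0, 1.0)
balanced = gray_world_white_balance(img)
```

A learned white-balance module replaces this fixed gray-world assumption with a model fit to data, which is what allows it to handle scenes where the average color genuinely is not gray.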
