Deep Learning for Practical and Robust View Synthesis

Ben Mildenhall
(UC Berkeley)
Date: February 26, 2020
Description:
I will present recent work (“Local Light Field Fusion”) on a practical and robust deep learning solution for capturing and rendering novel views of complex real-world scenes for virtual exploration. Our view synthesis algorithm operates on an irregular grid of sampled views, first expanding each sampled view into a local light field via a multiplane image (MPI) scene representation, then rendering novel views by blending adjacent local light fields. We extend traditional plenoptic sampling theory to derive a bound that specifies precisely how densely users should sample views of a given scene when using our algorithm. In practice, we can apply this bound to capture and render views of real-world scenes that achieve the perceptual quality of Nyquist-rate view sampling while using up to 4000x fewer views.
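Since the abstract describes the pipeline only at a high level, here is a minimal sketch of the central MPI rendering step: reprojecting each depth plane into the novel camera and alpha-compositing the planes back to front. This is illustrative only, not the authors' implementation; the function name `render_mpi_view`, the RGBA plane format, and the translation-only, integer-pixel-shift warp (standing in for the full homography reprojection used for general camera motion) are all assumptions.

```python
import numpy as np

def render_mpi_view(planes, disparities, dx):
    """Render a novel view from a multiplane image (MPI).

    planes:      list of RGBA layers, each (H, W, 4) with values in [0, 1],
                 ordered back to front (hypothetical format)
    disparities: per-plane disparity (inverse depth), one value per plane
    dx:          horizontal camera translation, in pixels at unit disparity
    """
    out = np.zeros_like(planes[0][..., :3], dtype=np.float64)
    for rgba, d in zip(planes, disparities):
        # Shift each plane in proportion to its disparity to approximate
        # its reprojection into the novel camera. np.roll wraps at the
        # image border; a real implementation would pad or warp properly.
        shift = int(round(d * dx))
        warped = np.roll(rgba, shift, axis=1)
        rgb, alpha = warped[..., :3], warped[..., 3:4]
        # Standard back-to-front "over" compositing.
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Toy usage with random content: 3 planes, far (low disparity) to near.
H, W, D = 4, 6, 3
planes = [np.random.rand(H, W, 4) for _ in range(D)]
disparities = np.linspace(0.0, 1.0, D)
novel_view = render_mpi_view(planes, disparities, dx=2.0)
```

Back-to-front "over" compositing handles occlusion implicitly: wherever a nearer plane is opaque, it overwrites whatever the farther planes contributed at that pixel.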
Further Information:
Ben Mildenhall is a PhD student at UC Berkeley. He is advised by Professor Ren Ng and supported by a Hertz Foundation Fellowship. He received his bachelor’s degree in computer science and math from Stanford University and has previously worked at Pixar, Google, and Fyusion. His current research focuses on applying deep learning to 3D reconstruction, view synthesis, and other inverse graphics problems.