3D Asset Generation using Neural Radiance Fields

Matthew Tancik

(UC Berkeley)


Date: April 13, 2022


Neural Radiance Fields (NeRFs) enable novel view synthesis of complex scenes by optimizing an underlying continuous volumetric scene function from a sparse set of input views. In the past two years these representations have received significant interest from the community due to their simplicity of implementation and their high-quality results. In this talk I will discuss the core concepts behind NeRF and dive into the details of one specific technique that enables the networks to represent high-frequency signals. Finally, I will discuss a recent project in which we scale up NeRFs to represent large-scale scenes. Specifically, we utilize data captured from autonomous vehicles to reconstruct a neighborhood in San Francisco.
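The high-frequency technique the abstract alludes to is presumably the positional encoding used in NeRF, which maps input coordinates through sinusoids of exponentially increasing frequency before they reach the MLP. A minimal NumPy sketch of that mapping (function name and the choice of six frequency bands are illustrative, not from the talk):

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """NeRF-style positional encoding:
    gamma(p) = (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^(L-1) pi p)).

    x: array of shape (..., d) of input coordinates.
    Returns an array of shape (..., 2 * num_freqs * d).
    """
    freqs = 2.0 ** np.arange(num_freqs) * np.pi            # 2^k * pi for k = 0..L-1
    scaled = x[..., None] * freqs                          # shape (..., d, L)
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)  # (..., d, 2L)
    return enc.reshape(*x.shape[:-1], -1)                  # flatten to (..., 2*L*d)

pts = np.random.rand(4, 3)                # four 3D sample points in [0, 1)
print(positional_encoding(pts).shape)     # (4, 36): 2 sinusoids * 6 freqs * 3 dims
```

Without such an encoding, a plain coordinate-input MLP tends to produce blurry reconstructions because it fits only the low-frequency components of the scene.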

Further Information:

Matt Tancik is a PhD student at UC Berkeley advised by Ren Ng and Angjoo Kanazawa, and is supported by the NSF Graduate Research Fellowship Program. He received his bachelor's degree in CS and physics from MIT, and a master's degree in CS, also at MIT, working on non-line-of-sight imaging advised by Ramesh Raskar. His current research lies at the intersection of machine learning and graphics.
