Emphasizing Depth and Motion

Hendrik Lensch

(Tübingen University)



Description:

Monocular displays are rather poor at conveying relative distances and velocities between objects to the observer, as some of the binocular cues are missing. In our framework we use a stereo camera to first observe depth, relative distances and velocity, and then modify the captured images in different ways to convey the lost information. Depth, for example, can be emphasized even on a monocular display using depth-of-field rendering, local intensity or color contrast enhancement, or unsharp masking of the depth buffer. Linear motion, on the other hand, can be emphasized by motion blur, streaks, rendered bursts, or simply color-coding the remaining distances between vehicles. These are a few ways of modifying pictures of the real world to actively steer the user's attention while introducing only rather subtle modifications. We will present a real-time framework based on edge-optimized wavelets that optimizes depth estimation and emphasizes depth or motion.
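To illustrate one of the cues mentioned above, the following is a minimal sketch of unsharp masking of the depth buffer in the spirit of the classic technique: the high-frequency residual of a smoothed depth map is used to locally brighten or darken the image, which increases apparent depth contrast on a monocular display. It is not the talk's edge-optimized wavelet implementation; the function name, parameters and the assumption of a normalized depth map in [0, 1] are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask_depth(image, depth, sigma=8.0, strength=0.5):
    """Emphasize depth discontinuities by modulating image intensity
    with the high-frequency part of the depth buffer (sketch only).

    image    : float array (H, W) or (H, W, 3), values in [0, 1]
    depth    : float array (H, W), normalized depth in [0, 1] (larger = farther)
    sigma    : spatial scale of the low-pass filter
    strength : how strongly the depth residual modulates intensity
    """
    # Low-pass the depth buffer and keep only its high-frequency residual.
    depth_blur = gaussian_filter(depth, sigma=sigma)
    # Positive where a pixel is nearer than its local surroundings.
    depth_detail = depth_blur - depth

    # Brighten locally nearer pixels and darken locally farther ones,
    # increasing the perceived local depth contrast.
    if image.ndim == 3:
        depth_detail = depth_detail[..., None]
    enhanced = image + strength * depth_detail
    return np.clip(enhanced, 0.0, 1.0)
```

The same modulation idea carries over to the motion cues: instead of a depth residual, a per-pixel velocity estimate can drive streak length or a color code for the remaining distance between vehicles.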

Further Information:

Hendrik P. A. Lensch holds the chair for computer graphics at Tübingen University. He received his diploma in computer science from the University of Erlangen in 1999. He worked as a research associate in the computer graphics group at the Max-Planck-Institut für Informatik in Saarbrücken, Germany, and received his PhD from Saarland University in 2003. Hendrik Lensch spent two years (2004-2006) as a visiting assistant professor at Stanford University, USA, followed by a stay at the MPI Informatik as the head of an independent research group. From 2009 to 2011 he was a full professor at the Institute for Media Informatics at Ulm University, Germany. In his career, he received the Eurographics Young Researcher Award in 2005, was awarded an Emmy Noether Fellowship by the German Research Foundation (DFG) in 2007, and received an NVIDIA Professor Partnership Award in 2010. His research interests include 3D appearance acquisition, computational photography, global illumination and image-based rendering, and massively parallel programming.

Created: Thursday, February 13th, 2014