Event-based Vision Sensors: Challenges and Opportunities

Kwabena Boahen

(Stanford University)


Date: February 2, 2022


Event-based vision (EBV) sensors preprocess their photodetectors’ signals to produce spatiotemporally sparse “events”, read out not frame-by-frame but event-by-event. For example, a Dynamic Vision Sensor (DVS) reports temporal contrast—changes in log-luminance. Event-based readout leverages events’ sparsity to realize a higher (effective) sampling rate and shorter latency than frame-based readout does. I will discuss two key challenges EBV cameras present and propose solutions. First, coherent optical flow triggers incoherent events (temporally dispersed) that disappear and reappear at different speeds, depending on local spatial contrast. This makes it excruciatingly difficult to interpret a cluttered scene filmed by a DVS camera mounted on a moving platform (e.g., a drone). Second, when more than ~6 Meps (events per second) occur, latency and jitter (its standard deviation) shoot up 200-fold (from 0.2 to 40 μs). That severely limits throughput, the usable fraction of the maximum readout rate (~1 Geps). These challenges can be tackled by bonding a Back-side Illuminated (BI) CMOS Image Sensor (CIS) wafer directly to a deep-submicron, mixed-signal CMOS wafer that receives photodetector signals via pixel-wise Cu-Cu bonds. This stacked-wafer process accommodates dense mixed-signal preprocessing and performant network-on-a-chip (NoC) routing without sacrificing fill-factor or image resolution.
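To make the DVS operating principle concrete, here is a minimal sketch of temporal-contrast event generation from a sequence of intensity frames: each pixel emits an ON or OFF event whenever its log-luminance drifts past a threshold relative to that pixel's last event. The function name, threshold value, and event tuple layout are illustrative assumptions, not the actual sensor interface (a real DVS does this asynchronously in analog circuitry, not on frames).

```python
import numpy as np

def dvs_events(frames, times, threshold=0.2):
    """Simulate DVS-style temporal-contrast events (illustrative sketch).

    frames: list of 2-D intensity arrays; times: matching timestamps.
    Returns a list of (t, x, y, polarity) tuples, polarity +1 (ON) / -1 (OFF).
    """
    log_ref = np.log(frames[0] + 1e-6)  # per-pixel reference log-luminance
    events = []
    for frame, t in zip(frames[1:], times[1:]):
        log_i = np.log(frame + 1e-6)
        diff = log_i - log_ref
        for mask, pol in ((diff >= threshold, +1), (diff <= -threshold, -1)):
            for y, x in zip(*np.nonzero(mask)):  # nonzero yields (rows, cols)
                events.append((t, x, y, pol))
            log_ref[mask] = log_i[mask]  # reset reference only where events fired
    return events
```

Note how a static pixel emits nothing at all between events; this per-pixel thresholding is what yields the spatiotemporal sparsity, and the log-domain comparison is what makes the response a contrast (ratio) measure rather than an absolute-intensity one.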

Further Information:

Kwabena Boahen (M’89, SM’13, F’16) received the B.S. and M.S.E. degrees in electrical and computer engineering from the Johns Hopkins University, Baltimore, MD, both in 1989, and the Ph.D. degree in computation and neural systems from the California Institute of Technology, Pasadena, in 1997. He was on the bioengineering faculty of the University of Pennsylvania from 1997 to 2005, where he held the first Skirkanich Term Junior Chair. He is presently Professor of Bioengineering and Electrical Engineering at Stanford University, with a courtesy appointment in Computer Science. He is also an investigator in Stanford’s Bio-X Institute and Wu Tsai Neurosciences Institute. He founded and directs Stanford’s Brains in Silicon lab, which develops silicon integrated circuits that emulate the way neurons compute and computational models that link neuronal biophysics to cognitive behavior. This interdisciplinary research bridges neurobiology and medicine with electronics and computer science, bringing together these seemingly disparate fields. His scholarship is widely recognized, with over a hundred publications, including a cover story in Scientific American featuring his lab’s work on a silicon retina and a silicon tectum that “wire together” automatically (May 2005). He has been invited to give over a hundred seminar, plenary, and keynote talks, including a 2007 TED talk, “A computer that works like the brain”, with over seven hundred thousand views. He has received several distinguished honors, including a Packard Fellowship for Science and Engineering (1999) and a National Institutes of Health Director’s Pioneer Award (2006). 
He was elected a fellow of the American Institute for Medical and Biological Engineering (2016) and of the Institute of Electrical and Electronics Engineers (2016) in recognition of his lab’s work on Neurogrid, an iPad-size platform that emulates the cerebral cortex in biophysical detail and at functional scale, a combination that hitherto required a supercomputer. In his lab’s most recent research effort, the Brainstorm Project, he led a multi-university, multi-investigator team to co-design hardware and software that makes neuromorphic computing easier to apply. A spin-out from his Stanford lab, Femtosense Inc (2018), is commercializing this breakthrough.


Created: Saturday, February 5th, 2022