Designing a Mixed-Signal ConvNet Vision Sensor for Continuous Mobile Vision

Robert LiKamWa

(Rice University)


Date: October 7, 2015

Description:

Continuously providing our computers with a view of what we see will enable novel services that assist our limited memory and attention. In this talk, we show that today’s system software and imaging hardware, highly optimized for photography, are ill-suited for this task. We present our early ideas toward a fundamental rethinking of the vision pipeline, centered around a novel vision sensor architecture, which we call RedEye. Targeting object recognition, we shift early convolutional processing into RedEye’s analog domain, reducing the workload of the analog readout and of the computational system. To ease analog design complexity, we adopt a modular column-parallel architecture that promotes physical circuitry reuse and algorithmic cyclic reuse. RedEye also includes programmable mechanisms that admit noise in exchange for lower energy, further increasing the sensor’s efficiency. Compared to conventional systems, RedEye achieves an 85% reduction in sensor energy and a 45% reduction in computational energy.
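As a rough illustration of the noise-for-energy tradeoff described above, the Python/NumPy sketch below convolves an image in a simulated "analog" stage that injects Gaussian noise, paired with a toy energy model in which suppressing noise costs energy. The function names, noise levels, and the energy model are illustrative assumptions, not details from the talk or the RedEye design.

    import numpy as np

    rng = np.random.default_rng(0)

    def analog_conv2d(image, kernel, noise_sigma):
        """Valid-mode 2-D convolution with additive Gaussian noise that
        stands in for a noisy analog compute stage."""
        kh, kw = kernel.shape
        oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
        out = np.empty((oh, ow))
        for i in range(oh):
            for j in range(ow):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
        # "Admitting" noise: a noisier (larger sigma) analog stage is
        # assumed to spend less energy on signal fidelity.
        return out + rng.normal(0.0, noise_sigma, out.shape)

    def readout_energy(noise_sigma, e_floor=1.0, k=4.0):
        """Hypothetical energy model: driving noise down costs energy."""
        return e_floor + k / (noise_sigma + 1e-3)

    image = rng.random((32, 32))
    kernel = rng.standard_normal((3, 3))

    for sigma in (0.01, 0.1, 0.5):
        fmap = analog_conv2d(image, kernel, sigma)
        print(f"sigma={sigma:<5}: energy ~ {readout_energy(sigma):6.1f}, "
              f"feature-map std = {fmap.std():.3f}")

Running the sketch shows the qualitative tradeoff: admitting more noise lowers the modeled readout energy while degrading the feature map that downstream recognition layers would consume.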

Further Information:

Robert LiKamWa is a final-year Ph.D. student at Rice University. His research focuses on efficient support for continuous mobile vision. To supplement his research, he has interned and collaborated with Microsoft Research and the Samsung Mobile Processor Innovation Lab on various projects related to vision systems. Robert received best paper awards at ACM MobiSys 2013 and PhoneSense 2011.
