TensorFlow Overview and Future Directions

Jeff Dean

(Google)

Play Video (Stanford)

Play Video (SystemX Members)

❏ Lecture Slides (Stanford)

❏ Lecture Slides (SystemX Members)

Date: January 21, 2016

Description:

Over the past few years, we have built two large-scale computer systems for training neural networks, and then applied these systems to a wide variety of problems that have traditionally been very difficult for computers. We have made significant improvements in the state of the art in many of these areas, and our software systems and algorithms have been used by dozens of different groups at Google to train state-of-the-art models for speech recognition, image recognition, various visual detection tasks, language modeling, language translation, and many other tasks. Our second-generation system, TensorFlow, has been designed and implemented based on what we learned from building and using DistBelief, our first-generation system. The TensorFlow API and an initial implementation were released as an open-source project in November 2015 (see tensorflow.org). In this talk, I’ll discuss the design and implementation of TensorFlow, and discuss some future directions for improving the system. This talk describes joint work with a large number of people at Google.
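The design discussed in the talk centers on TensorFlow's dataflow-graph model: a computation is first described as a graph of operations, and only later executed (the runtime evaluates just the subgraph a requested output depends on). The sketch below illustrates that separation of graph construction from execution in plain Python; the `Node`, `const`, `add`, `mul`, and `run` names are illustrative inventions for this sketch, not the TensorFlow API.

```python
# Illustrative sketch of a dataflow graph with deferred execution
# (plain Python, NOT the TensorFlow API).

class Node:
    """One operation in the graph; holds its op name and upstream inputs."""
    def __init__(self, op, inputs=(), value=None):
        self.op = op          # e.g. "const", "add", "mul"
        self.inputs = inputs  # upstream Node objects
        self.value = value    # only set for constants

def const(v):
    return Node("const", value=v)

def add(a, b):
    return Node("add", (a, b))

def mul(a, b):
    return Node("mul", (a, b))

def run(node):
    """Evaluate a node by recursively evaluating its inputs --
    analogous to executing only the subgraph a fetched output needs."""
    if node.op == "const":
        return node.value
    vals = [run(n) for n in node.inputs]
    if node.op == "add":
        return vals[0] + vals[1]
    if node.op == "mul":
        return vals[0] * vals[1]
    raise ValueError("unknown op: " + node.op)

# Graph construction is a separate phase from execution:
x = const(3.0)
y = mul(add(x, const(2.0)), const(4.0))   # describes (3 + 2) * 4
print(run(y))  # 20.0
```

Deferring execution this way is what lets a system like TensorFlow optimize, partition, and distribute the graph across devices before any computation runs.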

Further Information:

Jeff joined Google in 1999 and is currently a Google Senior Fellow. He works in Google’s Research division, where he co-founded and leads Google’s deep learning research team in Mountain View. He has co-designed and implemented multiple generations of Google’s crawling, indexing, and query serving systems, as well as major pieces of Google’s initial advertising and AdSense for Content systems. He is also a co-designer and co-implementer of Google’s distributed computing infrastructure, including the MapReduce, BigTable, Spanner, DistBelief, and TensorFlow systems, protocol buffers, LevelDB, systems infrastructure for statistical machine translation, and a variety of internal and external libraries and developer tools. He received a Ph.D. in Computer Science from the University of Washington in 1996, working with Craig Chambers on compiler techniques for object-oriented languages. He is a Fellow of the ACM, a Fellow of the AAAS, a member of the U.S. National Academy of Engineering, and a recipient of the Mark Weiser Award and the ACM-Infosys Foundation Award in the Computing Sciences.




Created: Friday, January 22nd, 2016