Flexibility, Interpretability, and Scalability in Time Series Modeling

Emily Fox

(University of Washington)


Date: March 12, 2019


We are increasingly faced with the need to analyze complex data streams; for example, sensor measurements from wearable devices have the potential to transform healthcare. Machine learning, and deep learning in particular, has brought many recent success stories to the analysis of complex sequential data sources, including speech, text, and video. However, these success stories involve a clear prediction goal combined with a massive (benchmark) training dataset. Unfortunately, many real-world tasks go beyond simple predictions, especially when models are used as part of a human decision-making process or medical intervention. Such complex scenarios necessitate notions of interpretability and measures of uncertainty. Furthermore, while the datasets might be large in aggregate, we might have limited data about any given individual, requiring parsimonious modeling approaches. In this talk, we first discuss how sparsity-inducing penalties can be deployed on the weights of deep neural networks to enable interpretable structure learning, in addition to yielding more parsimonious models that better handle limited-data scenarios. We then turn to Bayesian dynamical modeling of individually sparse data streams, flexibly sharing information and accounting for uncertainty. Finally, we discuss our recent body of work on scaling Markov chain Monte Carlo methods to massive time series. Throughout the talk, we provide analyses of activity, neuroimaging, genomic, housing, and homelessness data sources.
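The talk does not specify its exact formulation, but the idea of a sparsity-inducing penalty on network weights for structure learning can be illustrated with a minimal sketch: a group-lasso penalty that groups first-layer weights by input feature, so that zeroing an entire column removes that input from the model. The function names, the column-wise grouping, and the proximal (soft-threshold) optimization step below are illustrative assumptions, not the speaker's actual method.

```python
import numpy as np

def group_lasso_penalty(W, lam):
    """Group-lasso penalty: lam times the sum of the L2 norms of W's
    columns, with one group per input feature. Illustrative sketch only."""
    return lam * np.sum(np.linalg.norm(W, axis=0))

def prox_group_lasso(W, step, lam):
    """Proximal operator for the column-wise group lasso: shrink each
    column's norm by step*lam, zeroing columns whose norm falls below
    the threshold. A zeroed column corresponds to a pruned input."""
    norms = np.linalg.norm(W, axis=0)
    scale = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12))
    return W * scale  # broadcasting rescales each column

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))   # first-layer weights: 8 hidden units, 5 inputs
W[:, 2] *= 0.01               # a weakly relevant input: tiny column norm
W_new = prox_group_lasso(W, step=1.0, lam=0.5)
# The weak input's column is driven exactly to zero, while the other
# columns are only mildly shrunk -- this all-or-nothing behavior is what
# makes the learned structure interpretable.
```

In practice such a penalty would be combined with a prediction loss and applied during training; the sketch isolates only the shrinkage mechanism that produces exact zeros.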
