Speaker: David Duvenaud, Harvard University
Title: Composing differentiable procedures for modeling, optimization, and inference
Many useful methods in machine learning are based on optimizing simple feedforward procedures, such as neural networks, using gradients. Surprisingly, many complex procedures such as message passing, filtering, inference, and even optimization itself can be meaningfully differentiated through as well. Composing such procedures lets us build sophisticated models that generalize existing methods while retaining their good properties.
I'll show three applications of this idea. First, by optimizing the computation of graph features, we achieve state-of-the-art predictive performance on material design tasks and enable the automatic design of chemical compounds. Second, by differentiating through entire learning procedures, we address the difficult problem of hyperparameter tuning, moving from optimizing tens of hyperparameters to optimizing tens of thousands. Third, by using gradient-based curvature estimates to track entropy loss during training, we can sometimes remove the need for validation sets to tune models.
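To make the second idea concrete, here is a minimal sketch (not the talk's actual code or tools) of differentiating through an entire learning procedure: a short gradient-descent training loop is unrolled, and automatic differentiation then gives the gradient of the validation loss with respect to a hyperparameter (a log regularization strength). The example uses JAX, and all names and the toy data are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def train_loss(w, x, y, log_reg):
    # Ridge-regularized squared error; log_reg is the hyperparameter.
    pred = x @ w
    return jnp.mean((pred - y) ** 2) + jnp.exp(log_reg) * jnp.sum(w ** 2)

def val_loss_after_training(log_reg, w0, x_tr, y_tr, x_val, y_val,
                            lr=0.1, steps=100):
    # Unroll plain gradient descent. JAX traces through every step,
    # so the returned validation loss is differentiable w.r.t. log_reg.
    w = w0
    grad_fn = jax.grad(train_loss)
    for _ in range(steps):
        w = w - lr * grad_fn(w, x_tr, y_tr, log_reg)
    return jnp.mean((x_val @ w - y_val) ** 2)

# Toy data, purely for illustration.
key = jax.random.PRNGKey(0)
x_tr = jax.random.normal(key, (50, 5))
y_tr = x_tr @ jnp.ones(5) + 0.1 * jax.random.normal(key, (50,))
x_val = jax.random.normal(jax.random.PRNGKey(1), (20, 5))
y_val = x_val @ jnp.ones(5)
w0 = jnp.zeros(5)

# Hypergradient: d(validation loss) / d(log regularization strength).
hypergrad = jax.grad(val_loss_after_training)(0.0, w0, x_tr, y_tr, x_val, y_val)
print(hypergrad)
```

The same unrolling trick extends to many hyperparameters at once, which is what makes gradient-based tuning of tens of thousands of them feasible in principle.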
David Duvenaud is a postdoc in the Harvard Intelligent Probabilistic Systems group, working with Ryan Adams on model-based optimization, synthetic chemistry, and neural networks. He did his Ph.D. at the University of Cambridge with Carl Rasmussen and Zoubin Ghahramani. Prior to that, he worked on machine vision with Kevin Murphy at the University of British Columbia, and later at Google Research. David also co-founded Invenia, an energy forecasting and trading firm which now has 20 full-time employees.