Speaker: Vlad Mnih, DeepMind
Efficient Multi-Task Deep Reinforcement Learning
Abstract: Deep reinforcement learning methods have recently mastered a wide variety of domains, such as Atari games and Go. While the improvements in performance on these tasks have been dramatic, the progress has been primarily in single-task performance, where an agent is trained on each task, game, or level separately. I will discuss some of the challenges involved in training an agent on many tasks at once and present a new architecture for distributed training of agents in multi-task reinforcement learning environments. I will show results on a new multi-task reinforcement learning benchmark based on the 3D DeepMind Lab environment.
Biography: Volodymyr Mnih is a Research Scientist at DeepMind. He completed an MSc at the University of Alberta under the supervision of Csaba Szepesvari and a PhD at the University of Toronto under the supervision of Geoffrey Hinton. His PhD work focused on applying deep neural networks to the analysis of satellite imagery. Since joining DeepMind, he has been working at the intersection of deep learning and reinforcement learning, co-developing Deep Q-Networks (DQN), the asynchronous advantage actor-critic (A3C), and reinforcement learning-based hard attention mechanisms.