Professor Ng will also be speaking on online learning in a bonus lecture in the afternoon; for details, visit http://web.cs.toronto.edu/news/events/DLS_Bonus_Lecture_Feb28.htm

Andrew Ng
Associate Professor of Computer Science and Director of the Stanford AI Lab
Computer Science Department, Stanford University
Talk title: Machine Learning and AI via Large-Scale Brain Simulations
By building large-scale simulations of cortical (brain) computations, can we enable revolutionary progress in AI and machine learning? Machine learning often works very well, but applying it can take a lot of work because it requires spending a long time engineering the input representation (or "features") for each specific problem. This is true for machine learning applications in vision, audio, text/NLP, and other problems. To address this, researchers have recently developed "unsupervised feature learning" and "deep learning" algorithms that can automatically learn feature representations from unlabeled data, thus bypassing much of this time-consuming engineering. Many of these algorithms are developed using simple simulations of cortical (brain) computations, and build on such ideas as sparse coding and deep belief networks. By doing so, they exploit large amounts of unlabeled data (which is cheap and easy to obtain) to learn a good feature representation. These methods have also surpassed the previous state of the art on a number of problems in vision, audio, and text. In this talk, I describe some of the key ideas behind unsupervised feature learning and deep learning, and present a few algorithms. I also speculate on how large-scale brain simulations may enable us to make significant progress in machine learning and AI, especially perception. This talk will be broadly accessible, and will not assume a machine learning background.
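To make the abstract's central idea concrete, here is a toy NumPy sketch of unsupervised feature learning: a tiny autoencoder trained on unlabeled data, whose hidden activations then serve as a learned feature representation. This is purely illustrative and is not one of the speaker's systems; the data, dimensions, and learning rate are arbitrary assumptions, and real methods (sparse coding, deep belief networks) operate at vastly larger scale.

```python
import numpy as np

# Toy unsupervised feature learning: a one-hidden-layer autoencoder.
# No labels are used anywhere -- the network learns features purely
# by trying to reconstruct its (unlabeled) input.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # 200 unlabeled 8-dim examples

n_hidden = 4                             # learned feature dimension
W1 = rng.normal(scale=0.1, size=(8, n_hidden))   # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, 8))   # decoder weights
lr = 0.01

def forward(X, W1, W2):
    H = np.tanh(X @ W1)                  # hidden activations = features
    return H, H @ W2                     # reconstruction of the input

H, R = forward(X, W1, W2)
loss_before = np.mean((R - X) ** 2)      # reconstruction error at init

for _ in range(500):
    H = np.tanh(X @ W1)
    R = H @ W2
    err = R - X                          # reconstruction error
    gW2 = H.T @ err / len(X)             # gradient w.r.t. decoder
    gH = err @ W2.T * (1 - H ** 2)       # backprop through tanh
    gW1 = X.T @ gH / len(X)              # gradient w.r.t. encoder
    W1 -= lr * gW1
    W2 -= lr * gW2

H, R = forward(X, W1, W2)
loss_after = np.mean((R - X) ** 2)
print(loss_before, loss_after)           # error shrinks with training
```

After training, `H` is the automatically learned feature representation the abstract describes: no hand-engineering of input features, only unlabeled data and a reconstruction objective.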
Andrew Ng received his PhD from Berkeley, and is now an Associate Professor of Computer Science at Stanford University, where he works on machine learning and AI. He is also Director of the Stanford AI Lab, which is home to about 12 professors and 150 PhD students and postdocs. His previous work includes autonomous helicopters, the Stanford AI Robot (STAIR) project, and ROS (probably the most widely used open-source robotics software platform today). His current work focuses on neuroscience-informed deep learning and unsupervised feature learning algorithms. His group has won best paper/best student paper awards at ICML, ACL, CEAS, and 3DRR. He is a recipient of the Alfred P. Sloan Fellowship and the 2009 IJCAI Computers and Thought Award. He also works on free online education, and recently taught a machine learning class to over 100,000 students.
This lecture is open to the public. Space is limited and there is no registration; arriving early is strongly recommended. For more information, contact the department or call 416-978-3619.