Distinguished Lecture Series
2022-2023 Speakers
Human-Centered Explainable AI: From Algorithms to User Experiences
Monday, November 21, 2022
Abstract:
Artificial Intelligence technologies are increasingly used to aid human decisions and perform autonomous tasks in critical domains. The need to understand AI systems in order to improve them, contest their decisions, develop appropriate trust, and interact with them effectively has spurred great academic and public interest in Explainable AI (XAI). The technical field of XAI has produced a vast collection of algorithms in recent years. However, explainability is an inherently human-centric property, and the field is starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are increasingly important, especially as practitioners begin to leverage XAI algorithms to build XAI applications. In this talk, I will draw on my own research and broader HCI work to highlight the central role that human-centered approaches should play in shaping XAI technologies, including driving technical choices by understanding users’ explainability needs, uncovering pitfalls of existing XAI methods, and providing conceptual frameworks for human-compatible XAI.
Bio:
Q. Vera Liao is a Principal Researcher at Microsoft Research Montréal, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI. Prior to joining MSR, she worked at IBM Research and studied at the University of Illinois at Urbana-Champaign and Tsinghua University. Her research has received multiple paper awards at ACM and AAAI venues. She currently serves as Co-Editor-in-Chief of the Springer HCI Book Series, on the editors team for the ACM CSCW conferences, and on the Editorial Board of ACM Transactions on Interactive Intelligent Systems (TiiS).
The Multi-Stakeholder Nature of Today’s Systems and their Security Challenges
Thursday, April 6, 2023
Abstract:
Today's systems consist of many components owned by different stakeholders that need to operate together to provide service. This multi-stakeholder nature often raises significant challenges to making the entire system secure. For example, a system's security might require different stakeholders to share information they consider confidential. Another example is one in which the hand-off between different system components cannot be made secure without significant engineering costs. Our final example is one where stakeholders' ownership of components changes over time, invalidating the original security assumptions. In this talk, we will present three examples of such security challenges in different domains: DRAM security, OS security, and confidential computing. In two cases, we describe solutions to these challenges and their trade-offs. The last case is more open-ended and is posed as a challenge to the security research community.
Bio:
Stefan Saroiu is a researcher at Microsoft, now in the Office of the CTO, Azure for Operators, and until 2020 at Microsoft Research. Stefan's research interests span many aspects of systems and networks, although his most recent work focuses on systems security. Stefan's work has been published at top conferences in security, systems, networking, and mobile computing. Stefan takes his work beyond publishing results. With his colleagues at Microsoft, (1) he is helping the DRAM industry address the threat of Rowhammer attacks once and for all, (2) he designed a methodology for testing cloud servers for susceptibility to Rowhammer attacks, (3) he designed, deployed, and operated Microsoft Embedded Social, a cloud service aimed at user engagement in mobile apps that had 20 million users, (4) he designed the reference implementation of a software-based Trusted Platform Module (TPM) used in millions of smartphones and tablets, and (5) he designed and operated Zero-Effort Payments (ZEP), one of the first face recognition-based payment systems in the world. Before joining Microsoft in 2008, Stefan spent three years as an Assistant Professor at the University of Toronto, and four months at Amazon.com as a visiting researcher, where he worked on the early designs of their new shopping cart system (aka Dynamo). Stefan is an ACM Distinguished Member.
Toward Foundational Robot Manipulation Skills
Tuesday, April 18, 2023
Abstract:
Recent years have seen astonishing progress in the capabilities of generative AI techniques, particularly in the areas of language modeling and image generation. Key to the success of these techniques is the availability of very large sets of images and text, along with models that are able to digest such large datasets. Unfortunately, we have not been able to replicate the success of generative AI models in the context of robotics. An important problem is the lack of data suitable for training powerful, general models for robot decision making and control.
In this talk, I will discuss our ongoing efforts toward developing the models and generating the kind of data that might lead to foundational manipulation skills for robotics. To generate large amounts of data, we sample many object rearrangement tasks in physically realistic simulation environments and apply task and motion planning to generate high quality solutions for them. We will then train manipulation skills so that they can be used across a broad range of object rearrangement tasks in unknown, real-world environments. We believe that such skills could provide the glue between generative AI reasoning and robust execution in the real world.
Bio:
Dieter Fox is Senior Director of Robotics Research at NVIDIA and Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter obtained his Ph.D. from the University of Bonn, Germany. His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as robot manipulation, mapping, and object detection and tracking. He has published more than 200 technical papers and is the co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE, AAAI, and ACM, and recipient of the 2020 Pioneer in Robotics and Automation Award. Dieter also received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.
Negative probabilities: What they are and what they are for
Friday, April 28, 2023
Abstract:
Richard Feynman, the pioneer of quantum computing, wrote in his 1982 paper “Simulating Physics with Computers”: “The only difference between a probabilistic classical world and the equations of the quantum world is that somehow or other it appears as if the probabilities would have to go negative.” Negative probabilities make no sense. Yet they are tolerated in quantum tomography and elsewhere. So what reality, or at least intuition, is behind negative probabilities? A related question is what negative probabilities are good for. We address these and related questions. The talk does not presume quantum expertise, though having such expertise would of course be helpful.
Bio:
Yuri Gurevich is Professor Emeritus at the University of Michigan. He spent the last 20 years of his career at Microsoft Research as a Principal Researcher. He is a Fellow of AAAS, ACM, EATCS, and the Guggenheim Foundation, a foreign member of Academia Europaea, and Doctor Honoris Causa of a Belgian and a Russian university.
Latent Dynamics Discovery
Tuesday, August 8, 2023
Abstract:
When even small animals enter an environment, they can understand it well enough to orient themselves, plan a traversal, and execute that plan effectively. Although some engineered systems (e.g. SLAM for self-driving cars) have exhibited the same capability, these approaches tend not to be robust to sensor damage, augmentation, or adaptation. This leads to a natural question: Can we systematically and robustly learn to develop the capability to orient, plan, and execute in an environment? The answer turns out to be “yes”.
Bio:
John Langford studied Physics and Computer Science at the California Institute of Technology, earning a double bachelor’s degree in 1997, and received his Ph.D. from Carnegie Mellon University in 2002. Since then, he has worked at Yahoo!, the Toyota Technological Institute, and IBM's Watson Research Center. He is also the primary author of the popular machine learning weblog hunch.net and the principal developer of Vowpal Wabbit. Previous research projects include Isomap, Captcha, Learning Reductions, Cover Trees, and Contextual Bandit learning.