Distinguished Lecture Series
2025-2026 Speakers
David Duvenaud
Associate Professor, Department of Computer Science
University of Toronto
Austin Roorda
Professor of Optometry and Vision Science
University of Waterloo
Jonathan Lazar
Professor,
College of Information
University of Maryland
Dale Schuurmans
Research Director, Google DeepMind
Professor of Computing Science, University of Alberta
Sarita Adve
Professor,
Department of Computer Science
University of Illinois at Urbana-Champaign
Anind Dey
Dean and Professor,
Information School
University of Washington
Leo Porter
Professor,
Computer Science and Engineering Department
UC San Diego
The Big Picture of LLM Dangerous Capability Evals
Wednesday, October 15, 2025, 12:30 p.m.
Schwartz Reisman Innovation Campus, Room W240
108 College Street, Toronto, ON M5G 0C6
We gratefully acknowledge the support of the Webster Family Charitable Giving Foundation for this event.
Abstract:
How can we avoid AI disasters? The plan so far is mostly to check the extent to which AIs could cause catastrophic harms based on tests in controlled conditions. However, there are obvious problems with this approach, both technical and arising from its limited scope. I'll give an overview of the work my team at Anthropic did to evaluate risks from models feigning incompetence, colluding, or sabotaging human decision-making. I'll also discuss the idea of “control” techniques, which use AIs to monitor other AIs and set traps to catch bad behavior. Finally, I'll outline the main problems beyond the scope of these approaches, in particular that of robustly aligning our institutions to human interests.
Bio:
David Duvenaud is an associate professor in the Departments of Computer Science and Statistical Sciences at the University of Toronto, where he holds a Schwartz Reisman Chair in Technology and Society. A leading voice in AI safety and artificial general intelligence (AGI) governance, Duvenaud currently focuses on evaluating dangerous capabilities in advanced AI systems, mitigating catastrophic risks from future models, and developing institutional designs for post-AGI futures. He is a Canada CIFAR AI Chair and a founding faculty member at the Vector Institute, a member of Innovation, Science and Economic Development Canada’s Safe and Secure AI Advisory Group, and recently completed an extended sabbatical with the Alignment Science team at Anthropic.
Duvenaud’s early work helped shape the field of probabilistic deep learning, with contributions including neural ordinary differential equations, gradient-based hyperparameter optimization, and generative models for molecular design. He has received numerous honors, including the Sloan Research Fellowship, the Ontario Early Researcher Award, and best paper awards at NeurIPS, ICML, and ICFP. Before joining the University of Toronto, Duvenaud was a postdoctoral fellow in the Harvard Intelligent Probabilistic Systems group and completed his PhD at the University of Cambridge under Carl Rasmussen and Zoubin Ghahramani.
Oz Vision: A New Principle for Visual Display
Tuesday, October 28, 2025, 11 a.m.
Bahen Centre for Information Technology, BA 3200
Abstract:
Humans have exquisite spatial vision, colour vision, and motion detection despite what appear to be serious limits imposed by a seemingly suboptimal photoreceptor sensor array, an optical system fraught with aberrations, and an inability to hold the eye still, even during steady fixation. The Oz vision display makes it possible to investigate the effects of these limits and to overcome them. This is accomplished through a combination of adaptive optics, scanning light imaging and projection, and high-speed eye tracking, technologies that together enable control of the visual sensory input at the level of individual photoreceptors. I will describe two experiments. The first is a paradoxical finding whereby the detectability of relative motion is disrupted, which sheds light on the processes underlying our ability to perceive the world as stable despite constant eye motion. The second concerns colour vision: I will show how we can directly manipulate sensory input at the cone level to elicit colour experiences, such as ‘olo’, that lie outside the human gamut. I will finish with a broader discussion of ongoing and future experiments enabled by the Oz display.
Bio:
Austin Roorda received a joint Ph.D. in Vision Science and Physics from the University of Waterloo in 1996. He has pioneered multiple applications of adaptive optics for the eye, including mapping the human trichromatic cone mosaic at the University of Rochester (1997-1998), inventing the adaptive optics scanning laser ophthalmoscope (AOSLO) at the University of Houston (1998-2004), and tracking and targeting light delivery to individual cones in the human eye at UC Berkeley (2005-2025), where he was a member of the Vision Science, Bioengineering, and Neuroscience programs. He started a new position at the University of Waterloo in July 2025. He is a Fellow of Optica and of the Association for Research in Vision and Ophthalmology. Notable awards include the Distinguished Alumni Award from Waterloo, the Glenn Fry Award from the American Academy of Optometry, a Guggenheim Fellowship, a Leverhulme Visiting Professorship (University of Oxford), and the Rank Prize in Optoelectronics.
Methods and Tools for Born-Accessible Design
Thursday, November 20, 2025, 11 a.m.
Bahen Centre for Information Technology, BA 3200
Abstract:
Digital technologies, software applications, websites, and documents are often created without considering accessibility for people with disabilities. The inaccessible technology or content is then remediated for accessibility, remediated only after a complaint from a person with a disability, or never remediated at all. Remediating technologies after the fact is not cost-effective, and the delay between when digital technologies and content are built and released and when they are made accessible can itself be a form of societal discrimination. For years, disability rights groups have demanded born-accessible design, and some government policies are starting to require it, yet the research literature in human-computer interaction and user experience does not yet define born-accessible design or offer methods for achieving it. This presentation will describe our work on born-accessible design in two areas: tools and methods. We have been collaborating with Adobe on software tools that help content creators add accessibility markup during their workflow, leading to born-accessible content that needs no remediation. On a broader level, we have been working with disability rights groups, technology companies, and policymakers to build a methodological framework for implementing born-accessible design.
Bio:
Jonathan Lazar is a Professor in the College of Information at the University of Maryland, where he is the founding director of the Maryland Initiative for Digital Accessibility (MIDA) and a faculty member in the Human-Computer Interaction Lab (HCIL). He is currently on sabbatical leave from UMD and is a visiting professor at the University of Toronto. He has authored or edited 18 books and published over 200 refereed articles in journals, conference proceedings, edited books, and magazines related to human-computer interaction, user-centered design, accessibility, policy, and law. He has received research funding from the U.S. National Science Foundation, the U.S. National Institute on Disability, Independent Living, and Rehabilitation Research (NIDILRR), Google, and Adobe. He is the recipient of the 2024 IAAP Accessibility Initiatives Award, the 2020 ACM SIGACCESS Award for Outstanding Contributions to Computing and Accessibility, and the 2016 ACM SIGCHI Social Impact Award; he is a member of the ACM SIGCHI Academy and served as general chair of the 2021 ACM ASSETS conference.
Large Language Models and Computation
Thursday, November 27, 2025, 11 a.m.
Bahen Centre for Information Technology, BA 3200
Abstract:
The ability of large generative models to respond naturally to text, image, and audio inputs has created significant excitement. Particularly interesting is the ability of these models to generate outputs that resemble coherent reasoning and computational sequences. I will discuss the inherent computational capability of large language models and show that autoregressive decoding supports universal computation, even without pre-training. The co-existence of informal and formal computational systems in the same model does not change what is computable, but it does provide new means for eliciting desired behaviour. I will then discuss how post-training, which aims to make a model more directable, faces severe computational limits on what can be achieved, and how accounting for these limits can improve outcomes.
Bio:
Dale Schuurmans is a Research Director at Google DeepMind, Professor of Computing Science at the University of Alberta, Canada CIFAR AI Chair, and Fellow of AAAI. He has served as an Associate Editor in Chief for IEEE TPAMI, an Associate Editor for JMLR, AIJ, JAIR, and MLJ, and a Program Co-chair for AAAI-2016, NeurIPS-2008, and ICML-2004. He has published over 250 papers in machine learning and artificial intelligence, and received paper awards at ICLR, NeurIPS, ICML, IJCAI, and AAAI.
Talk title:
Coming soon
Thursday, December 4, 2025, 11 a.m.
Bahen Centre for Information Technology, BA 3200
Abstract:
Coming soon
Bio:
Coming soon
Talk title:
Coming soon
Tuesday, December 9, 2025, 11 a.m.
Bahen Centre for Information Technology, BA 3200
Abstract:
Coming soon
Bio:
Coming soon
Effects of GenAI on Computing Education
Thursday, January 22, 2026, 11 a.m.
Bahen Centre for Information Technology, BA 3200
Abstract:
The advent of GenAI necessitates changes to the CS curriculum and our courses. But what exactly should our learning outcomes and assessments look like now? Some impacts of GenAI are reasonably well understood, such as its capacity to solve programming problems and its role as a tutor. We know much less about generative AI’s impact on student learning and how learning outcomes should change. This talk will begin with a brief summary of the main areas of ongoing research related to generative AI and early findings from incorporating GenAI into the introductory programming course at UC San Diego. Moving beyond introductory programming, I will then discuss how the CS curriculum as a whole should change. I’ll finish by describing how the newly founded GenAI in CS Education Consortium supports faculty integrating GenAI into the CS curriculum and their courses.
Bio:
Leo Porter is a Professor in the Computer Science and Engineering Department at UC San Diego. He is best known for his research on the impact of Peer Instruction in computing courses, the development of the Basic Data Structures Concept Inventory, and the integration of GenAI into the CS curriculum. With Daniel Zingaro, he co-wrote the first book on integrating LLMs into the teaching of programming. He has received seven Best Paper Awards, an ICER Lasting Impact Award, the SIGCSE 50th Anniversary Top Ten Symposium Papers of All Time Award, and the Academic Senate Distinguished Teaching Award at UC San Diego. He co-directs the GenAI in CS Education Consortium, which helps faculty and institutions integrate GenAI into the CS curriculum.