Speaker:
Inioluwa Deborah Raji
Talk Title:
Beyond the Benchmarking Paradigm: Audits & Evaluation in the Age of Artificial Intelligence
Date and Location:
Tuesday, March 17, 2026
Bahen Centre for Information Technology, BA 3200
This lecture is open to the public. No registration is required, but space is limited.
The grad roundtable that follows the talk is open only to current University of Toronto Department of Computer Science graduate students.
Abstract:
Despite AI's great potential, there is a growing gap between what these systems promise and what they deliver, with real human costs.
AI auditing is the practice of independently evaluating deployed AI systems to determine how they behave, what risks they pose, and whether they meet their intended objectives. This interdisciplinary endeavor requires both a technical expansion of our current AI evaluation paradigm and a framework for ensuring that audit investigations are sufficiently material for downstream legal actions and normative debates. At the intersection of law and public policy, applied economics and computer science, we can advance AI auditing policy & practice in material ways — by anchoring notions of engineering responsibility in AI development, expanding our vocabulary of AI evaluation methods, and pushing to connect AI audit outcomes to organizational and legal consequences. Through case studies of AI use in healthcare and government, we demonstrate how novel evaluation methods such as incident reporting, workflow simulations and pilot experiments can supplement standard practices like data benchmarking to more adequately inform AI governance, shaping a range of outcomes from documentation and procurement to regulatory enforcement and product safety compliance. As auditing makes its way into key policy proposals as a primary mechanism for AI accountability, we must think critically about the necessary technical and institutional infrastructure required for this form of oversight to successfully enable safe widespread AI adoption.
About Inioluwa Deborah Raji:
Inioluwa Deborah Raji is a researcher interested in algorithmic auditing. She has worked closely with industry, civil society and academia to push forward projects that operationalize ethical considerations in machine learning practice and to advance benchmarking and model evaluation norms in the field. In particular, she studies how model engineering choices (from evaluation to data choices) impact consumer protection, product liability, procurement, anti-discrimination practice and other forms of legal and institutional accountability related to functional harms. She serves on the advisory boards of the Center for Democracy and Technology AI Governance Lab, the Health AI Partnership, TeachAI, REALML and the Center for Civil Rights and Technology. For her efforts, she has been named to Forbes 30 Under 30, MIT Technology Review's 35 Innovators Under 35 and the TIME 100 Most Influential in AI. She is the recipient of the 2024 Tech For Humanity Prize and the 2024 Mozilla Rise 25 award, as well as the co-recipient of the EFF Pioneer Barlow Award with Joy Buolamwini and Timnit Gebru. She received her Bachelor of Applied Science in Engineering Science from the University of Toronto and is currently completing her PhD in computer science at the University of California, Berkeley.
