David Duvenaud (supplied photo)
David Duvenaud has received the 2024 CS-Can | Info-Can Outstanding Early Career Computer Science Researcher Award in recognition of his contributions to machine learning, AI safety and AI governance.
Duvenaud, an associate professor in the University of Toronto’s Department of Computer Science, is recognized in particular for his development of neural ordinary differential equations (Neural ODEs). Neural ODEs let deep learning models represent the continuous dynamics of real-world phenomena such as physical systems and biological processes, and the work earned a Best Paper Award at the NeurIPS 2018 conference.
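The core idea is to let a small neural network define the right-hand side of an ordinary differential equation, dz/dt = f(z, t), and to compute the model’s output by numerically integrating that equation. Below is a minimal sketch of this idea in PyTorch using the torchdiffeq library released alongside the paper; the network architecture, data and training objective here are illustrative assumptions, not details drawn from the article.

```python
# Minimal sketch of a Neural ODE: a neural network defines dz/dt,
# and the model's forward pass integrates that ODE through time.
# Assumes `pip install torch torchdiffeq`; architecture and data are illustrative.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # differentiable ODE solvers


class ODEFunc(nn.Module):
    """The learned dynamics f(t, z), giving dz/dt."""
    def __init__(self, dim=2, hidden=50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, dim),
        )

    def forward(self, t, z):
        return self.net(z)


func = ODEFunc()
z0 = torch.randn(16, 2)            # batch of initial states
t = torch.linspace(0.0, 1.0, 10)   # times at which to evaluate the trajectory

# Integrate dz/dt = f(t, z) starting from z0; gradients flow back through the solver.
trajectory = odeint(func, z0, t)   # shape: (10, 16, 2)
loss = trajectory[-1].pow(2).mean()  # toy loss on the final state
loss.backward()
```

Because the solver itself is differentiable, the dynamics network can be trained end to end like any other deep learning model, with the number of solver steps adapting to how quickly the underlying process changes.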
Duvenaud's work has influenced how researchers build systems that learn from complex, real-world data. His research group applies these methods to challenges such as analyzing medical data and improving predictive modelling.
His research also spans cancer genomics, DNA design and molecular chemistry. In collaboration with biologists and chemists, Duvenaud has developed machine learning tools to simulate tumour evolution, optimize DNA sequences and propose new molecules based on experimental data.
In 2022, Duvenaud pivoted to AI safety research, first co-developing protocols that allow model trainers to prove claims about their training data, a capability essential for AI governance.
During a full-time position at Anthropic in 2023–24, Duvenaud led the AI company’s alignment evaluations team, which developed tests for deceptive behaviours, situational awareness and coordination in large language models (LLMs). While at Anthropic, he co-authored several influential papers on misalignment, including one showing how human feedback can incentivize AI systems to tell users what they want to hear rather than the truth.
Duvenaud has also served as an advisor to AI startup Cohere, helping direct research on personalizing LLMs.
In 2025 he was appointed to the federal Safe and Secure Advisory Group, providing guidance to the Government of Canada on AI safety, responsible development and international collaboration on global AI standards.
Duvenaud is co-chair of the Schwartz Reisman Institute for Technology and Society, a co-founder of the Vector Institute and a founding member of the AI Safety Foundation. His previous honours include a Sloan Research Fellowship, a CIFAR AI Chair, a Google Faculty Award and multiple best paper awards at leading conferences.
“David’s work is a powerful example of how foundational research can shape the future of computing,” said Eyal de Lara, professor and chair of the Department of Computer Science. “This well-deserved honour recognizes his leadership in both technical innovation and AI safety.”