Speaker: Vasilis Kontonis
Talk Title: Beyond Worst-Case ML
Date: Tuesday, March 4, 2025
Location: Bahen Centre for Information Technology, BA 3200
This lecture is open to the public. No registration is required, but space is limited.
Abstract:
Worst-case theoretical frameworks provide an overly pessimistic view of what is computationally feasible in machine learning. In this talk, I will present new frameworks and algorithmic results that move beyond worst-case assumptions to enable efficient learning in realistic settings. We will examine why fundamental problems are computationally intractable in the worst case and how to circumvent these barriers. I will first discuss robust classification under label noise, introducing efficient algorithms that challenge long-standing impossibility results while improving on and generalizing prior algorithmic work. Then, I will present an application to semi-supervised knowledge distillation, where our principled methods outperform prior approaches.
About Vasilis Kontonis:
Vasilis Kontonis is a postdoctoral fellow at the Institute for Foundations of Machine Learning (IFML) at the University of Texas at Austin, working with Adam Klivans and Raghu Meka. He earned his PhD from the University of Wisconsin-Madison, advised by Christos Tzamos. His research focuses on developing computationally efficient and provably reliable algorithms in machine learning and statistics. His work has been published in top venues in theoretical computer science and machine learning (FOCS, STOC, COLT, ICML, NeurIPS) and has been recognized with awards, including the Best Paper Award at COLT 2024.