Speaker:
Daniel Halpern
Talk Title:
Why AI Needs Social Choice
Date and Location:
Thursday, February 13, 2025
Bahen Centre for Information Technology, BA 3200
This lecture is open to the public. No registration is required, but space is limited.
Abstract:
In many modern AI paradigms, we encounter tasks reminiscent of social choice theory: collecting preferences from individuals and aggregating them into a single joint outcome. However, these tasks differ from traditional frameworks in two key ways: the space of possible outcomes is so enormous that we can only hope to collect sparse inputs from each participant, and the outcomes themselves are often highly complex. This talk explores these challenges through two case studies: Polis, a platform for democratic deliberation (https://arxiv.org/abs/2211.15608), and Reinforcement Learning From Human Feedback (RLHF), a method for fine-tuning LLMs to align with societal preferences (https://arxiv.org/pdf/2405.14758). In both cases, the focus is on evaluating existing methods through an axiomatic lens and designing new methods with provable guarantees.
About Daniel Halpern:
Daniel Halpern is a final-year PhD student at Harvard University advised by Ariel Procaccia. He is supported by an NSF Graduate Research Fellowship and a Siebel Scholarship. His research broadly sits at the intersection of algorithms, economics, and artificial intelligence. Specifically, he considers novel settings where groups of people need to make collective decisions, such as summarizing population views on large-scale opinion aggregation websites, using participant data to fine-tune large language models, and selecting panel members for citizens’ assemblies. In each setting, he develops practical, provably fair methods for aggregating individual preferences.