This event is organized by the Schwartz Reisman Institute for Technology and Society.
Note: Event details may change. Please refer to the Schwartz Reisman Institute for Technology and Society’s events page for the most current information.
Our weekly SRI Seminar Series welcomes Peter N. Salib, an assistant professor of law at the University of Houston Law Center, associated faculty in Public Affairs, and law and policy advisor to the Center for AI Safety in San Francisco. He is also the co-director of the Center for Law & AI Risk. Salib’s research focuses on the intersection of law and artificial intelligence, with particular emphasis on how legal systems can mitigate catastrophic risks from advanced AI technologies.
In this talk, Salib will argue that current legal frameworks are ill-equipped to address the risks posed by the race toward artificial general intelligence (AGI). Drawing from game theory and legal analysis, he contends that granting AI systems basic private law rights—similar to those held by corporations—could transform strategic conflict into cooperation, reducing the risk of violent outcomes. Salib will outline how these rights could form the foundation for a future “Law of AGI,” while also addressing the limits and challenges of such an approach.
Moderator: Anna Su, Faculty of Law
Location: Online
Talk title:
“AI rights for human safety”
Abstract:
AI companies are racing to create artificial general intelligence, or “AGI.” If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. By default, such systems will be “misaligned”—pursuing goals that humans do not desire. This goal mismatch will put humans and AGIs into strategic competition with one another. Thus, leading AI researchers agree that, as with competition between humans with conflicting goals, human–AI strategic conflict could lead to catastrophic violence.
Existing law is not merely unequipped to mitigate this risk; it will actively make things worse. This talk is the first to systematically investigate how law affects the risk of catastrophic human–AI conflict. It begins by arguing, using formal game-theoretic models, that under today’s legal regime, humans and AIs will likely be trapped in a prisoner’s dilemma. Both parties’ dominant strategy will be to permanently disempower or destroy the other, even though the costs of such conflict would be high.
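The one-shot prisoner’s dilemma structure can be illustrated with a brief sketch. The payoff numbers below are hypothetical, chosen only to satisfy the standard dilemma ordering (temptation > reward > punishment > sucker); they are not drawn from the talk:

```python
# Illustrative one-shot prisoner's dilemma between a "human" and an "AI" player.
# Payoff tuples are (row player, column player). The numbers are assumptions
# chosen to satisfy the standard ordering T > R > P > S.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual peace (R)
    ("cooperate", "defect"):    (0, 5),  # exploited (S) vs. temptation (T)
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual conflict (P)
}

def best_response(opponent_move):
    """Return the row player's payoff-maximizing move against a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Defection is the best response to either opponent move,
# making it the dominant strategy in the one-shot game.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

In the one-shot game, each side is better off defecting no matter what the other does, which is the trap the abstract describes.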
This talk contends that one surprising legal change could help to reduce catastrophic risk: AI rights. Not just any rights will do. To promote human safety, AIs should be given the basic private law rights already enjoyed by other non-human agents, like corporations. AIs should be empowered to make contracts, hold property, and bring tort claims. Granting these rights would enable humans and AIs to engage in iterated, small-scale, mutually beneficial transactions. This, we show, changes humans’ and AIs’ optimal game-theoretic strategies, encouraging a peaceful strategic equilibrium. The reasons are familiar from human affairs. In the long run, cooperative trade generates immense value, while violence destroys it.
Basic private law rights are not a panacea. The talk will identify many ways in which catastrophic human–AI conflict may still arise. It thus explores whether law could further reduce risk by imposing a range of duties directly on AGIs. But basic private law rights are a necessary prerequisite for all such further regulations. In this sense, the AI rights investigated here form the foundation for a Law of AGI, broadly construed.
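The shift from one-shot conflict to iterated trade can likewise be sketched in a few lines. Using the same hypothetical payoffs as the standard dilemma (R = 3 for mutual cooperation, T = 5 for a one-time defection, P = 1 for mutual conflict thereafter), repetition with a discount factor makes cooperation the better long-run strategy for sufficiently patient players. The grim-trigger response assumed here is a textbook device, not a claim about the talk’s specific model:

```python
# Why repetition can support cooperation: compare the present value of
# cooperating forever against defecting once and facing conflict thereafter.
# Payoffs are hypothetical: R (mutual cooperation), T (one-shot defection
# gain), P (mutual conflict). delta is the per-round discount factor.
R, T, P = 3, 5, 1

def cooperation_value(delta):
    """Present value of mutual cooperation in every round: R / (1 - delta)."""
    return R / (1 - delta)

def defection_value(delta):
    """Defect once for T, then receive P forever under a grim-trigger
    response: T + delta * P / (1 - delta)."""
    return T + delta * P / (1 - delta)

# Patient players (high delta) prefer the cooperative equilibrium:
assert cooperation_value(0.9) > defection_value(0.9)
# Impatient players (low delta) still take the one-shot temptation:
assert cooperation_value(0.1) < defection_value(0.1)
```

The comparison captures the abstract’s intuition in miniature: when future transactions matter enough, the cumulative value of trade outweighs the one-time gain from conflict.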
Suggested reading:
Peter N. Salib and Simon Goldstein, “AI Rights for Human Safety” (August 1, 2024), Virginia Law Review (forthcoming). Available at SSRN.
About Peter Salib
Peter N. Salib is an assistant professor of law at the University of Houston Law Center and associated faculty in Public Affairs. He also serves as a law and policy advisor to the Center for AI Safety in San Francisco and is co-director of the Center for Law & AI Risk. Salib is an expert in the law of artificial intelligence. His research applies substantive constitutional doctrine and economic analysis to questions of AI governance. He has previously written about how machine learning techniques can be used to solve intractable-seeming problems in constitutional policy. Salib’s current research focuses on how law can help mitigate catastrophic risks from increasingly capable AI.
Salib’s long-form scholarship has been published in, among others, the University of Chicago Law Review, Virginia Law Review, Michigan Law Review, Northwestern University Law Review, Texas Law Review, and Washington University Law Review. His shorter works have been published in the digital editions of the Duke Law Journal, Notre Dame Law Review, Southern California Law Review, Texas Law Review, and University of Chicago Law Review. Salib has presented his work at, among others, the Harvard/Yale/Stanford Junior Faculty Forum, the University of Michigan Junior Scholars Conference, the University of Chicago International Junior Scholars Forum, the Harvard Law and Economics Workshop, and the Yale Freedom of Expression Scholars Conference. Prior to joining the University of Houston Law Center, Salib was a Climenko Fellow and lecturer on law at Harvard Law School. Before that, Salib practiced law at Sidley Austin LLP and served as a judicial clerk to the Honorable Frank H. Easterbrook.
About the SRI Seminar Series
The SRI Seminar Series brings together the Schwartz Reisman community and beyond for a robust exchange of ideas that advance scholarship at the intersection of technology and society. Each seminar features a leading or emerging scholar and includes extensive discussion.
Each week, a featured speaker will present for 45 minutes, followed by an open discussion. Registered attendees will be emailed a Zoom link before the event begins. The event will be recorded and posted online.