Our weekly SRI Seminar Series welcomes Sven Nyholm, Professor of the Ethics of Artificial Intelligence at the Ludwig Maximilian University of Munich. Nyholm’s research focuses on applied ethics and the philosophy of technology, including topics such as human-robot interaction, self-driving cars, autonomous weapons, human enhancement, and self-tracking technologies.
In this session, Nyholm will discuss “responsibility gaps” and asymmetries regarding praise and blame for outcomes produced by artificial intelligence (AI) technologies. Using contemporary examples such as text produced by large language models, accidents caused by self-driving cars, and AI-generated medical diagnoses and treatment recommendations, Nyholm will demonstrate how praise for good outcomes produced by AI is typically harder to deserve than blame for bad outcomes.
Abstract:
In my presentation, I will discuss what I think are some interesting asymmetries with respect to praise and blame for good and bad outcomes produced by AI technologies. I will suggest that if we apply widely agreed-upon criteria for when people deserve praise, on the one hand, and when they deserve blame, on the other, it might be harder to be praiseworthy for good outcomes produced when we hand over tasks to AI technologies than it is to deserve blame for bad outcomes those technologies might produce.
The topic of who is responsible for outcomes produced by AI technologies is usually called the topic of “responsibility gaps.” That is, there might be unclarity or gaps with respect to who is responsible for what AI technologies do or the outcomes they produce. This problem is usually discussed in relation to bad outcomes caused by AI technologies (e.g., when a self-driving car hits and harms a human being). I suggest it is also important to discuss possible gaps in responsibility related to good outcomes that might be produced with the help of AI technologies. This can be important, for example, in workplaces where people want to get recognition for the work they do, but where more and more tasks are being handed over to AI. In general, the very idea of AI is to create technologies that can take over tasks from us human beings that we need our natural intelligence to perform. If tasks can be performed without any need for our intelligence—or perhaps without need for much effort, or any particular talents of ours—there will be less justification for us to claim credit for the performance of these tasks (e.g., work that we used to perform but that has been handed over to AI technologies). In contrast, if we hand over tasks we used to perform to AI, and we allow these technologies to sometimes cause harm, then we might not be off the hook but might deserve blame.
Using various examples and theories from the history of philosophy and contemporary ethics research, I will try to illustrate that praise for good outcomes produced by AI technologies may be harder to deserve than blame for bad outcomes produced by those technologies. As I discuss this asymmetry between praise and blame for good and bad outcomes caused by AI technologies, I will consider examples such as text produced by large language models (such as ChatGPT), accidents caused by self-driving cars, diagnoses or treatment recommendations made by medical AI technologies, AI used in military contexts, and more.