
New research on training decision-making AI reveals insights into normative judgments

Artificial intelligence (AI) systems are increasingly called on to apply normative judgments to real-world facts on behalf of human decision-makers. Already today, AI systems perform content moderation, offer sentencing recommendations, and assess the creditworthiness of prospective debtors. Before our very eyes, the issue of aligning AI’s behaviour with human values has leapt from the pages of science fiction into our lives.

It’s no wonder then that concern for the calibration of machine behaviour to human norms is pervasive.

How can we train machines to make normative decisions? By “normative,” we mean judgments about what we should or shouldn’t do. Machines already make factual decisions, but to be effective and fair decision-makers, they also need to make normative decisions in much the way human beings do.

New research from Aparna Balagopalan, a graduate of the University of Toronto’s Master of Science in Applied Computing program, provides empirical evidence of the relationship between the methods used to label the data that trains machine learning (ML) models and how those models perform when applying norms.

The results of the paper, “Judging facts, judging norms: Training machine learning models to judge humans requires a modified approach to labeling data,” recently published in Science Advances, challenge conventional wisdom about human-computer interaction and about reducing bias in AI.

Co-authors include graduate student David Madras of U of T’s Department of Computer Science; Schwartz Reisman Institute (SRI) Director and Chair Gillian Hadfield; SRI Faculty Affiliate and Assistant Professor of Computer Science (status-only) Marzyeh Ghassemi; David H. Yang; and Dylan Hadfield-Menell.

The new research presented by Balagopalan and her co-authors suggests that labels explicitly reflecting value judgments, rather than the facts used to reach those judgments, might yield ML models that assess rule adherence and rule violation in a manner that we humans would deem acceptable.

On the other hand, using factual data to teach AI about norms could produce AI agents that apply norms more harshly than humans do. The researchers say this presents important considerations for the application of ML to normative settings. For example, they point to courts, which use models that make factual predictions about the likelihood that a defendant will reoffend in order to reach normative judgments about bail, sentencing, and probation.
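To make the distinction concrete, the sketch below (in Python, using scikit-learn) contrasts the two labeling strategies on invented data. It is not the authors’ experiment or code: the features, the annotators’ “leniency” behaviour, and all numbers are assumptions made purely for illustration. One pipeline trains a model on descriptive labels (is the factual attribute present?) and then applies the rule mechanically; the other trains directly on normative labels (should the rule be considered violated?).

```python
# Hypothetical sketch: "descriptive" vs. "normative" labeling strategies.
# The dataset, features, and label-generation process are invented for
# illustration only; they are not the paper's data or code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))  # item features

# Descriptive labels: annotators only judge whether a factual attribute
# is present (e.g., "does this post contain aggressive language?").
fact_present = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)

# Normative labels: annotators judge rule violation directly
# ("should this post be taken down?"), forgiving some borderline cases.
leniency = rng.random(n) < 0.3
norm_violation = (fact_present & ~leniency).astype(int)

X_tr, X_te, f_tr, f_te, v_tr, v_te = train_test_split(
    X, fact_present, norm_violation, random_state=0)

# Pipeline A: predict the fact, then apply the rule mechanically.
fact_model = LogisticRegression().fit(X_tr, f_tr)
flagged_by_fact_model = fact_model.predict(X_te)

# Pipeline B: predict the normative judgment directly.
norm_model = LogisticRegression().fit(X_tr, v_tr)
flagged_by_norm_model = norm_model.predict(X_te)

print("flagged via descriptive labels:", flagged_by_fact_model.mean())
print("flagged via normative labels:  ", flagged_by_norm_model.mean())
print("human normative judgments:     ", v_te.mean())
```

Under these assumptions, the descriptive pipeline flags more items than human normative judgments would, echoing the paper’s finding that models trained on factual labels tend to apply norms more harshly.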

Read more on the Schwartz Reisman Institute’s website.