Noise: A Flaw in Algorithmic Judgement?
Human judgment is flawed and limited. Our reasoning degrades when we are tired or hungry; we are capable of thinking only in low dimensions; we process many forms of information slowly; and so on. Algorithmic judgment promises to correct our flaws and exceed our limits. Algorithms do not get hungry or tired; they can “think” in high dimensions; they process many forms of information at breakneck speed; and so on. This paper is concerned with a particular flaw in human judgment—noise, understood in Kahneman et al.’s (2021) sense, as unwanted variability in judgment. A judge is noisy, for example, if she sometimes hands down harsh sentences and sometimes lenient sentences—with no particular rhyme or reason—to defendants who ought to receive the same sentences. Her judgment exhibits unwanted variability. We ask: are algorithmic systems susceptible to noise? At first glance, the answer is no—and indeed, Kahneman et al. argue that it is no—since many algorithmic systems compute the same function every time, and so by definition are free from a certain kind of variability. This first glance, we argue, is misleading. The kind of variability that algorithms are free from can, and often does, come apart from the kind of variability that is unwanted in cases of noise. Algorithms are susceptible to noise, just like we are.
Date: 24 May 2023, 12:30 (Wednesday, 5th week, Trinity 2023)
Venue: Please register to receive venue details
Speakers: Dr Carina Prunkl (Philosophy, Oxford), Dr Milo Phillips-Brown (Philosophy, Oxford)
Organiser contact email address: aiethics@philosophy.ox.ac.uk
Host: Professor John Tasioulas (University of Oxford)
Part of: Ethics in AI Lunchtime Seminars
Booking required?: Yes
Booking url: https://www.oxford-aiethics.ox.ac.uk/ethics-ai-lunchtime-seminars-noise-flaw-algorithmic-judgement
Booking email: aiethics@philosophy.ox.ac.uk
Cost: Free
Audience: Public
Editors: Marie Watson, Lauren Czerniawska