A widely accepted requirement of democratic government is that decisions must be explained to those who are bound by them. What counts as a good explanation depends on the context and on the needs of those receiving it. The use of complex machine learning algorithms in governmental decision-making, however, raises a challenge for these relational and contextual aspects of explanation. What kinds of explanations should we demand when government decisions rely on algorithmic predictions? Which explanations are necessary, which sufficient, and which simply bad? The computer science literature has so far focused on the technical challenges of designing explainable AI, treating the kind of explanation to be provided as a matter of feasibility and seeking an optimal tradeoff between performance and explainability. The legal literature, meanwhile, has focused on how to make AI explainable in ways that meet existing legal requirements of due process and anti-discrimination. Neither has explored how different kinds of explanations fare with respect to the broader range of social and political values we might seek to realize in a democratic society.
This talk will answer these questions by analyzing the relationship between different kinds of explanation and the normative values they serve or hinder. I evaluate the types of explanations proposed in the literature by focusing on what they empower citizens to do, individually and collectively, and on how they change the power dynamics among citizens, AI experts, and government officials. The broader theoretical aim of the paper is to examine the link between explanation and power. I argue that good explanations enhance the recipient’s power-to and that some explanations create relationships of power-over, and I explore the possibility that the two may not be mutually exclusive.