Start Using Justifications When Explaining AI Systems to Decision Subjects

Abstract

Every AI system that makes decisions about people has stakeholders who are affected by its outcomes. These stakeholders, whom we call decision subjects, have a right to understand how their outcome was produced and to challenge it. Explanations should support this process by making the algorithmic system transparent and creating an understanding of its inner workings. However, we argue that while current explanation approaches focus on descriptive explanations, decision subjects also require normative explanations or justifications. In this position paper, we advocate for justifications as a key component in explanation approaches for decision subjects and make three claims to this end, namely that justifications i) fulfill decision subjects' information needs, ii) shape their intent to accept or contest decisions, and iii) encourage accountability considerations throughout the system's lifecycle. We propose four guiding principles for the design of justifications, provide two design examples, and close with directions for future work. With this paper, we aim to provoke thoughts on the role, value, and design of normative information in explainable AI for decision subjects.

Authors
  • Kolářová, Klára
  • Schmude, Timothée
Editors
  • Hagedorn, Ludger
  • Schmid, Ute
  • Winter, Susan
  • Woltran, Stefan
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title
Digital Humanism
Divisions
Data Mining and Machine Learning
Subjects
Artificial Intelligence
Event Location
Vienna
Event Type
Conference
Event Dates
20-21 November 2025
Series Name
Digital Humanism. DIGHUM 2025.
Publisher
Springer Nature Switzerland
Page Range
pp. 190-202
Date
12 November 2025