Explainability and Contestability for the Responsible Use of Public Sector AI

Abstract

Public institutions have begun to use AI systems in areas that directly impact people's lives, including labor, law, health, and migration. Explainability ensures that these systems are understandable to the stakeholders involved, while its emerging counterpart, contestability, enables them to challenge AI decisions. Both principles support the responsible use of AI systems, but their implementation must account for the needs of people without a technical background, referred to as AI novices. I conduct interviews and workshops to explore how explainable AI can be made suitable for AI novices, how explanations can support their agency by allowing them to contest decisions, and how the intersection of these two principles is conceptualized. My research aims to inform policy and public institutions on how to implement responsible AI by designing for explainability and contestability. The Remote Doctoral Consortium would allow me to discuss with peers how these principles can be realized and how human factors can be accounted for in their design.

Authors
  • Schmude, Timothée
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title
CHI EA '25: Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems
Divisions
Data Mining and Machine Learning
Subjects
Artificial Intelligence
Event Location
Yokohama, Japan
Event Type
Conference
Event Dates
April 2025
Series Name
CHI EA '25
Publisher
Association for Computing Machinery
Date
25 April 2025
Official URL
https://doi.org/10.1145/3706599.3721096