Compact Example-Based Explanations for Language Models

Abstract

Training data influence estimation methods quantify the contribution of training documents to a model's output, making them a promising source of information for example-based explanations. As humans cannot interpret thousands of documents, only a small subset of the training data can be presented as an explanation. Although the choice of which documents to include directly affects explanation quality, previous evaluations of such systems have largely ignored selection strategies. To address this, we propose a novel selection relevance score, a retraining-free metric that quantifies how useful a set of examples is for explaining a model's output. We validate this score through fine-tuning experiments, confirming that it can predict whether a set of examples supports or undermines the model's predictions. Using this metric, we further show that common selection strategies often underperform random selection. Motivated by this finding, we propose a strategy that balances influence and representativeness, enabling better use of selection budgets than naively selecting the highest-ranking examples.

Authors
  • Schoenegger, Loris
  • Roth, Benjamin
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title
Findings of the Association for Computational Linguistics: ACL 2026
Divisions
Data Mining and Machine Learning
Event Location
San Diego, USA
Event Type
Conference
Event Dates
02-07 Jul 2026
Date
2026