From Calculation to Adjudication: Examining LLM Judges on Mathematical Reasoning Tasks

Abstract

To reduce the need for human annotations, large language models (LLMs) have been proposed as judges of the quality of other candidate models. The performance of LLM judges is typically evaluated by measuring the correlation with human judgments on generative tasks such as summarization or machine translation. In contrast, we study LLM judges on mathematical reasoning tasks. These tasks require multi-step reasoning, and the correctness of their solutions is verifiable, enabling a more objective evaluation. We perform a detailed performance analysis and find that samples that are easy for the candidate models are also easy to judge, while difficult samples are difficult to judge. Our analysis uncovers a strong correlation between judgment performance and candidate model task performance, indicating that judges tend to favor the higher-quality model even when its answer is incorrect. As a consequence, we test whether the judgments of LLM judges can be predicted from simple features such as part-of-speech tags and find that 70%-75% of judgments can be predicted correctly. We conclude the study by analyzing practical use cases, showing that LLM judges consistently detect the on-average better model but largely fail when used to improve task performance.
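
As an illustration of the prediction experiment mentioned in the abstract, the following is a minimal sketch (not the authors' code) of predicting a judge's binary verdict from bag-of-part-of-speech-tag features of a candidate solution. The solutions, verdict labels, and feature choices below are invented placeholders; only the general idea of using POS tags as simple predictive features comes from the abstract.

    import nltk
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tokenizer and tagger resources (resource names differ across NLTK versions).
    for resource in ("punkt", "punkt_tab",
                     "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
        nltk.download(resource, quiet=True)

    def pos_tag_string(text: str) -> str:
        """Represent a candidate solution as a space-separated sequence of POS tags."""
        tokens = nltk.word_tokenize(text)
        return " ".join(tag for _, tag in nltk.pos_tag(tokens))

    # Hypothetical candidate solutions and judge verdicts (1 = accepted, 0 = rejected).
    solutions = [
        "First we add 3 and 4 to get 7, then multiply by 2 to obtain 14.",
        "The answer is 9 because 3 times 3 equals 9.",
        "We guess the result is 42 without showing any steps.",
        "Subtracting 5 from 12 leaves 7, so the final answer is 7.",
    ]
    verdicts = [1, 1, 0, 1]

    # Bag of POS-tag unigrams and bigrams as features.
    vectorizer = CountVectorizer(ngram_range=(1, 2))
    X = vectorizer.fit_transform([pos_tag_string(s) for s in solutions])

    # Simple linear classifier trained to predict the judge's verdict.
    clf = LogisticRegression(max_iter=1000).fit(X, verdicts)

    new_solution = "Adding 10 and 5 gives 15, which is the final answer."
    X_new = vectorizer.transform([pos_tag_string(new_solution)])
    print("Predicted judge verdict:", clf.predict(X_new)[0])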

Authors
  • Stephan, Andreas
  • Zhu, Dawei
  • Aßenmacher, Matthias
  • Shen, Xiaoyu
  • Roth, Benjamin
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Poster)
Event Title
The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025)
Divisions
Data Mining and Machine Learning
Subjects
Artificial Intelligence
Event Location
Vienna, Austria
Event Type
Workshop
Event Dates
28 July 2025
Date
27 July 2025