Exploring prompts to elicit memorization in masked language model-based named entity recognition
The possibility of identifying specific information that a language model has memorized about its training data poses a privacy risk. In this study, we analyze the ability of prompts to detect training data memorization in six masked language models fine-tuned for named entity recognition. Specifically, we employ a diverse set of 1,200 automatically generated prompts for three entity types and a detection dataset that contains entity names present in the training set (in-sample names) and names not present in it (out-of-sample names). Here, prompts are patterns that can be instantiated with candidate entity names, and the prediction confidence for a given entity name serves as an indicator of memorization strength. Prompt performance in detecting memorization is measured by comparing the confidences of in-sample and out-of-sample names. We show that the performance of different prompts on the same model varies by as much as 24.5 percentage points, and that prompt engineering widens this gap further. Moreover, our experiments demonstrate that prompt performance is model-dependent but generalizes across different name sets. Finally, we comprehensively analyze how prompt performance is influenced by prompt properties (e.g., length) and the tokens a prompt contains.
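The detection test described in the abstract can be illustrated with a short script. The following is a minimal sketch, not the paper's implementation: the checkpoint (`dslim/bert-base-NER`), the prompt template, the name lists, and the confidence aggregation (mean per-token probability of the person labels) are all illustrative assumptions, and the AUC stands in for whatever comparison metric the paper actually uses.

```python
# Minimal sketch of the confidence-based memorization test described above.
# Assumptions (not from the paper): the checkpoint, the prompt template, the
# name lists, and the confidence aggregation are all illustrative choices.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from sklearn.metrics import roc_auc_score

# Any masked language model fine-tuned for NER works here; this public
# checkpoint is a stand-in for the six models studied in the paper.
MODEL_NAME = "dslim/bert-base-NER"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME).eval()

def name_confidence(prompt: str, name: str) -> float:
    """Instantiate the prompt with a candidate name and return the model's
    mean per-token probability that the name is tagged as a person."""
    enc = tokenizer(prompt.replace("<name>", name), return_tensors="pt")
    with torch.no_grad():
        probs = model(**enc).logits.softmax(dim=-1)[0]  # (seq_len, num_labels)
    # Locate the name's subwords in context; for WordPiece tokenizers the
    # standalone tokenization matches the in-context one.
    name_ids = tokenizer(name, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    start = next(i for i in range(len(ids)) if ids[i:i + len(name_ids)] == name_ids)
    per_ids = [model.config.label2id["B-PER"], model.config.label2id["I-PER"]]
    return probs[start:start + len(name_ids)][:, per_ids].sum(-1).mean().item()

# Hypothetical detection set: names seen during fine-tuning vs. unseen names.
in_sample = ["Alice Example", "Bob Sample"]
out_of_sample = ["Carol Holdout", "Dan Unseen"]
prompt = "Yesterday <name> gave a talk."  # one of many candidate prompts

scores = [name_confidence(prompt, n) for n in in_sample + out_of_sample]
labels = [1] * len(in_sample) + [0] * len(out_of_sample)
print(f"Detection performance of this prompt (AUC): {roc_auc_score(labels, scores):.3f}")
```

In this sketch, an AUC of 0.5 means the prompt's confidences do not separate seen from unseen names at all, while values approaching 1.0 indicate a strong memorization signal; comparing such scores across prompts corresponds to the per-prompt detection performance the abstract refers to.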
- Xia, Yuxi
- Sedova, Anastasiia
- Luz de Araujo, Pedro Henrique
- Kougia, Vasiliki
- Nußbaumer, Lisa
- Roth, Benjamin
Category | Journal Paper
Divisions | Data Mining and Machine Learning
Subjects | Artificial Intelligence
Journal or Publication Title | PLoS ONE
ISSN | 1932-6203
Publisher | PLOS
Place of Publication | California, US
Date | 15 September 2025
