LLMs that Understand Processes: Instruction-tuning for Semantics-Aware Process Mining
Process mining increasingly uses textual information associated with events to tackle tasks such as anomaly detection and process discovery. Such semantics-aware process mining focuses on what behavior should be possible in a process (i.e., expectations), thus providing an important complement to traditional, frequency-based techniques that focus on recorded behavior (i.e., reality). Large Language Models (LLMs) provide a powerful means for tackling semantics-aware tasks. However, the best performance has so far been achieved through task-specific fine-tuning, which is computationally intensive and yields models that can only handle one specific task. To overcome this lack of generalization, in this paper we investigate the potential of instruction-tuning for semantics-aware process mining. The idea of instruction-tuning is to expose an LLM to prompt-answer pairs for different tasks, e.g., anomaly detection and next-activity prediction, making it more familiar with process mining and thus allowing it to perform better also on unseen tasks, such as process discovery. Our findings demonstrate a varied impact of instruction-tuning: performance considerably improves on process discovery and prediction tasks, but varies across models on anomaly detection tasks, highlighting that the selection of tasks for instruction-tuning is critical to achieving the desired outcomes.
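As a rough illustration of the instruction-tuning setup sketched in the abstract, the snippet below builds prompt-answer pairs for two process mining tasks (anomaly detection and next-activity prediction) from toy traces and writes them to a JSONL file. The traces, prompt templates, and output format are illustrative assumptions, not the authors' actual data or pipeline.

```python
import json

# Toy event-log traces: each trace is an ordered list of activity labels.
# These traces and the prompt templates below are illustrative assumptions,
# not the dataset or templates used in the paper.
traces = [
    ["create order", "approve order", "ship goods", "send invoice"],
    ["create order", "ship goods", "approve order", "send invoice"],
]

def anomaly_detection_pair(trace, is_anomalous):
    """Prompt-answer pair asking whether a trace is semantically plausible."""
    prompt = (
        "Given the process trace below, decide whether the order of "
        "activities is plausible.\n"
        f"Trace: {' -> '.join(trace)}\n"
        "Answer with 'normal' or 'anomalous'."
    )
    return {"prompt": prompt, "answer": "anomalous" if is_anomalous else "normal"}

def next_activity_pair(trace, prefix_len):
    """Prompt-answer pair asking for the activity that follows a trace prefix."""
    prefix = trace[:prefix_len]
    prompt = (
        "Given the partial process trace below, predict the next activity.\n"
        f"Prefix: {' -> '.join(prefix)}"
    )
    return {"prompt": prompt, "answer": trace[prefix_len]}

# Mix pairs from both tasks into one instruction-tuning dataset, so a single
# tuned model is exposed to several semantics-aware process mining tasks.
pairs = [
    anomaly_detection_pair(traces[0], is_anomalous=False),
    anomaly_detection_pair(traces[1], is_anomalous=True),
    next_activity_pair(traces[0], prefix_len=2),
]

with open("instruction_pairs.jsonl", "w", encoding="utf-8") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```

Such a mixed-task file could then be fed to any standard supervised fine-tuning routine; the point of the mixture is that the tuned model should transfer to process mining tasks not seen during tuning.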
- Pyrih, Vira
- Rebmann, Adrian
- van der Aa, Han
Category: Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title: 7th International Conference on Process Mining 2025
Divisions: Workflow Systems and Technology
Subjects: Computer Science (General); Artificial Intelligence
Event Location: Montevideo, Uruguay
Event Type: Conference
Event Dates: 20 Oct - 24 Oct 2025
Date: October 2025
