Rule-Guided Reinforcement Learning Policy Evaluation and Improvement

Abstract

We consider the challenging problem of using domain knowledge to improve deep reinforcement learning policies. To this end, we propose LEGIBLE, a novel approach that follows a multi-step process. It starts by mining rules from a deep RL policy, which constitute a partially symbolic representation: these rules describe which decisions the RL policy makes and which it avoids. In the second step, we generalize the mined rules using domain knowledge expressed as metamorphic relations, which we adapt from software testing to RL to specify expected changes of actions in response to changes in observations. In the third step, we evaluate the generalized rules to determine which generalizations improve performance when enforced. Such improvements expose weaknesses of the policy, i.e., situations in which it has not learned the general rules and can therefore benefit from rule guidance. LEGIBLE, supported by metamorphic relations, provides a principled way of expressing and enforcing domain knowledge about RL environments. We demonstrate the efficacy of our approach by showing that it effectively finds weaknesses, accompanied by explanations of these weaknesses, in eleven RL environments, and that guiding policy execution with rules improves performance with respect to gained reward.
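Since the abstract describes a concrete three-step pipeline, a small illustration may help. The following Python sketch shows, under stated assumptions, how mined rules, metamorphic relations, and rule-guided execution could fit together. The `Rule` and `MetamorphicRelation` types, the involution assumption on observation transforms, and the first-match override strategy are hypothetical simplifications for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

import numpy as np


@dataclass
class Rule:
    """A mined rule: if `condition` holds on the observation, take `action`.

    Illustrative only; the paper's rule representation may differ.
    """
    condition: Callable[[np.ndarray], bool]
    action: int


@dataclass
class MetamorphicRelation:
    """Expected change of action in response to a change of observation,
    e.g., mirroring the observation should swap 'left' and 'right'."""
    transform: Callable[[np.ndarray], np.ndarray]
    remap_action: Callable[[int], int]


def generalize(rule: Rule, relation: MetamorphicRelation) -> Rule:
    """Derive a new rule from a mined one via a metamorphic relation.

    Assumes `transform` is an involution (its own inverse), e.g., a mirror
    flip, so applying it maps the new observation back to the rule's domain.
    """
    return Rule(
        condition=lambda obs: rule.condition(relation.transform(obs)),
        action=relation.remap_action(rule.action),
    )


def rule_guided_action(policy: Callable[[np.ndarray], int],
                       rules: list[Rule],
                       obs: np.ndarray) -> int:
    """Follow the first matching rule; otherwise defer to the RL policy."""
    for rule in rules:
        if rule.condition(obs):
            return rule.action
    return policy(obs)


if __name__ == "__main__":
    LEFT, RIGHT = 0, 1

    # Hypothetical mined rule: if the goal lies to the right
    # (positive x-offset in the observation), move right.
    mined = Rule(condition=lambda obs: obs[0] > 0, action=RIGHT)

    # Mirroring the x-axis (negation, an involution) swaps left and right.
    mirror = MetamorphicRelation(
        transform=lambda obs: -obs,
        remap_action=lambda a: LEFT if a == RIGHT else RIGHT,
    )

    generalized = generalize(mined, mirror)
    policy = lambda obs: LEFT  # stand-in for a trained policy
    obs = np.array([-0.5])     # goal lies to the left
    print(rule_guided_action(policy, [generalized], obs))  # -> 0 (LEFT)
```

In this sketch, enforcement means overriding the policy whenever a generalized rule fires; comparing episode returns with and without such overrides would correspond to the evaluation step the abstract describes.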

Authors
  • Tappler, Martin
  • Tschiatschek, Sebastian
  • Lopez-Miguel, Ignacio D.
  • Bartocci, Ezio
Shortfacts
Category
Paper in Conference Proceedings or in Workshop Proceedings (Paper)
Event Title
IJCAI
Divisions
Data Mining and Machine Learning
Subjects
Artificial Intelligence
Event Location
Montreal, Canada
Event Type
Conference
Event Dates
16-22 August 2025
Page Range
pp. 6254-6262
Date
12 March 2025