Abstract
In safety-critical applications, it is crucial to verify and certify the decisions made by AI-driven Autonomous Systems (ASs). However, the black-box nature of the neural networks used in these systems often makes this challenging. Making these systems explainable can support the verification and certification process and thereby speed up their deployment in safety-critical applications. This study investigates the explainability of AI-driven air combat agents via semantically grouped reward decomposition. The paper presents two use cases demonstrating how this approach helps both AI and non-AI experts evaluate and debug the behavior of RL agents.
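To illustrate the idea behind semantically grouped reward decomposition, here is a minimal Python sketch. The component names, groupings, and the `decompose`/`explain` helpers are illustrative assumptions, not the paper's actual implementation: the reward signal is split into named components, which are then aggregated into semantic groups so that a human can see which group of objectives drove the agent's reward at a given step.

```python
# Hypothetical sketch of semantically grouped reward decomposition.
# Component names and groupings are illustrative, not from the paper.

# Per-step reward components an air-combat environment might emit.
REWARD_COMPONENTS = ["track_angle", "closure_rate", "altitude_safety", "fuel_use"]

# Semantic groups that aggregate related components for explanation.
SEMANTIC_GROUPS = {
    "offense": ["track_angle", "closure_rate"],
    "safety": ["altitude_safety"],
    "efficiency": ["fuel_use"],
}

def decompose(step_rewards: dict) -> dict:
    """Aggregate per-component rewards into semantic groups."""
    return {
        group: sum(step_rewards.get(c, 0.0) for c in components)
        for group, components in SEMANTIC_GROUPS.items()
    }

def explain(step_rewards: dict) -> str:
    """Summarize which semantic group dominated the step's reward."""
    grouped = decompose(step_rewards)
    total = sum(grouped.values())
    dominant = max(grouped, key=grouped.get)
    return f"total={total:.2f}, dominant group='{dominant}' ({grouped[dominant]:.2f})"

# Example step: strong offensive reward, small safety and fuel penalties.
rewards = {"track_angle": 0.8, "closure_rate": 0.3,
           "altitude_safety": -0.2, "fuel_use": -0.1}
print(explain(rewards))
```

In practice a decomposed value function would learn one value head per component or group and sum them to recover the total return; the grouping step shown here is what lets a non-AI expert reason at the level of "offense vs. safety" rather than individual reward terms.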
Original language | English |
---|---|
Title of host publication | Proceedings - 2023 IEEE Conference on Artificial Intelligence, CAI 2023 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 85-86 |
Number of pages | 2 |
ISBN (Electronic) | 9798350339840 |
DOIs | |
Publication status | Published - 2023 |
Externally published | Yes |
Event | 2023 IEEE Conference on Artificial Intelligence, CAI 2023 - Santa Clara, United States. Duration: 5 Jun 2023 → 6 Jun 2023 |
Publication series
Name | Proceedings - 2023 IEEE Conference on Artificial Intelligence, CAI 2023 |
---|---|
Conference
Conference | 2023 IEEE Conference on Artificial Intelligence, CAI 2023 |
---|---|
Country/Territory | United States |
City | Santa Clara |
Period | 5/06/23 → 6/06/23 |
Bibliographical note
Publisher Copyright: © 2023 IEEE.
Funding
This work is funded by BAE Systems.
Funders | Funder number |
---|---|
BAE Systems |
Keywords
- air combat
- explainable
- reinforcement learning
- reward decomposition