Explainability of AI-Driven Air Combat Agent

Emre Saldiran*, Mehmet Hasanzade, Gokhan Inalhan, Antonios Tsourdos

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Citation (Scopus)

Abstract

In safety-critical applications, it is crucial to verify and certify the decisions made by AI-driven Autonomous Systems (ASs). However, the black-box nature of the neural networks used in these systems often makes this challenging. Explainability can support the verification and certification process and thereby speed up the deployment of these systems in safety-critical applications. This study investigates the explainability of AI-driven air combat agents via semantically grouped reward decomposition. The paper presents two use cases demonstrating how this approach helps AI and non-AI experts evaluate and debug the behavior of reinforcement learning (RL) agents.
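The abstract only names the technique, and the two-page paper's code is not part of this record. As a purely illustrative aid, the sketch below shows one common way to realize semantically grouped reward decomposition with tabular Q-learning: each semantic reward channel gets its own Q-table, the policy acts greedily on their sum, and the per-channel Q-values serve as the explanation artifact. The component names, action set, and hyperparameters are hypothetical assumptions, not the authors' implementation.

    # Illustrative sketch only: reward decomposition for an RL agent.
    # All names below (components, actions, hyperparameters) are assumptions.
    from collections import defaultdict

    COMPONENTS = ["closure", "aim", "safety"]  # hypothetical semantic reward groups
    ACTIONS = [0, 1, 2]                        # e.g. turn left / straight / turn right
    ALPHA, GAMMA = 0.1, 0.95

    # One Q-table per reward component, so Q(s, a) = sum over c of Q_c(s, a).
    q = {c: defaultdict(lambda: [0.0] * len(ACTIONS)) for c in COMPONENTS}

    def total_q(state, action):
        # The policy acts on the sum of the component Q-values.
        return sum(q[c][state][action] for c in COMPONENTS)

    def update(state, action, rewards, next_state):
        # Each component bootstraps from the action that is greedy w.r.t. the total Q.
        best_next = max(ACTIONS, key=lambda a: total_q(next_state, a))
        for c in COMPONENTS:
            td = rewards[c] + GAMMA * q[c][next_state][best_next] - q[c][state][action]
            q[c][state][action] += ALPHA * td

    def explain(state):
        # Per-component Q-values for each action: the explanation artifact.
        for a in ACTIONS:
            parts = {c: round(q[c][state][a], 3) for c in COMPONENTS}
            print(f"action {a}: total={round(total_q(state, a), 3)} {parts}")

Calling explain(state) prints, for each candidate action, the total Q-value alongside its per-component breakdown, so a reviewer can see which semantic objective (here, hypothetically, closure, aim, or safety) dominates the agent's choice in that state.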

Original language: English
Title of host publication: Proceedings - 2023 IEEE Conference on Artificial Intelligence, CAI 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 85-86
Number of pages: 2
ISBN (Electronic): 9798350339840
DOIs
Publication status: Published - 2023
Externally published: Yes
Event: 2023 IEEE Conference on Artificial Intelligence, CAI 2023 - Santa Clara, United States
Duration: 5 Jun 2023 - 6 Jun 2023

Publication series

Name: Proceedings - 2023 IEEE Conference on Artificial Intelligence, CAI 2023

Conference

Conference: 2023 IEEE Conference on Artificial Intelligence, CAI 2023
Country/Territory: United States
City: Santa Clara
Period: 5/06/23 - 6/06/23

Bibliographical note

Publisher Copyright:
© 2023 IEEE.

Funding

This work is funded by BAE Systems.

Funders: BAE Systems

Keywords

• air combat
• explainable
• reinforcement learning
• reward decomposition
