Energy-Efficient Offloading Decision for Beyond-5G Multi-Access Edge Computing-Enabled UAV Swarms

Homa Maleki, Mehmet Basaran, Lutfiye Durak-Ata

Research output: Contribution to journal › Article › peer-review

Abstract

Recent developments in unmanned aerial vehicle (UAV) technology have given UAVs more processing and storage resources, paving the way for edge computing-enabled UAV networks. In this paper, we propose a cooperative multi-agent reinforcement learning-based computation offloading framework for a UAV swarm. UAVs flying on missions can offload part of their tasks to neighboring UAVs or to fixed edge servers at terrestrial base stations, reducing the total energy consumption of all devices during a core mission. Our framework helps UAVs form stable sequences of offloading decisions under uncertainty in a dynamic environment. This study demonstrates the superiority of the proposed deep Q-learning (DQN) algorithm over existing Q-learning, heuristic, and random decision-making algorithms.
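
The abstract does not give implementation details, so the following is only a minimal, hypothetical sketch of what a per-UAV DQN offloading agent could look like, assuming a three-way discrete action set (compute locally, offload to a neighboring UAV, offload to a terrestrial edge server) and a hand-crafted state of normalized task size, residual energy, and link quality. All names, dimensions, and the energy-based reward are illustrative assumptions, not the paper's actual model.

# Hypothetical per-UAV DQN offloading agent (illustrative only; not the paper's model).
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 3   # assumed state: normalized task size, residual energy, link quality
N_ACTIONS = 3   # 0: compute locally, 1: offload to neighbor UAV, 2: offload to edge server

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

class OffloadAgent:
    def __init__(self, gamma=0.99, eps=0.1, lr=1e-3, buffer_size=10_000):
        self.q, self.target = QNet(), QNet()
        self.target.load_state_dict(self.q.state_dict())
        self.opt = torch.optim.Adam(self.q.parameters(), lr=lr)
        self.buffer = deque(maxlen=buffer_size)
        self.gamma, self.eps = gamma, eps

    def act(self, state):
        # Epsilon-greedy offloading decision over the discrete action set.
        if random.random() < self.eps:
            return random.randrange(N_ACTIONS)
        with torch.no_grad():
            return int(self.q(torch.tensor(state, dtype=torch.float32)).argmax())

    def remember(self, s, a, r, s_next, done):
        # Reward r could be, e.g., the negative energy consumed in the time slot.
        self.buffer.append((s, a, r, s_next, done))

    def train_step(self, batch_size=64):
        if len(self.buffer) < batch_size:
            return
        batch = random.sample(self.buffer, batch_size)
        s, a, r, s2, d = map(
            lambda x: torch.tensor(x, dtype=torch.float32), zip(*batch)
        )
        a = a.long()
        # Q(s, a) for the taken actions and the bootstrapped TD target.
        q_sa = self.q(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + self.gamma * (1 - d) * self.target(s2).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        # Periodically copy online weights to the target network.
        self.target.load_state_dict(self.q.state_dict())

In a full cooperative training loop, each UAV agent would observe its local state, choose an offloading action, receive an energy-related reward, and periodically synchronize its target network; how the swarm coordinates these agents is specific to the paper and not reproduced here.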

Original language: English
Journal: IEEE Transactions on Vehicular Technology
Publication status: Accepted/In press - 2025

Bibliographical note

Publisher Copyright:
© 2025 IEEE.

Keywords

  • UAV swarm
  • energy efficiency
  • multi-access edge computing
  • multi-agent cooperative reinforcement learning
