Abstract
Recent developments in unmanned aerial vehicle (UAV) technology have equipped UAVs with greater processing and storage resources, paving the way for the concept of edge computing-enabled UAV networks. In this paper, we propose a cooperative multi-agent reinforcement learning-based computation offloading framework for a UAV swarm. Mission-bearing UAVs can offload part of their computational tasks to neighboring UAVs or to fixed edge servers at terrestrial base stations, reducing the total energy consumption of all devices during the core mission. Our framework helps UAVs form stable sequences of offloading decisions under the uncertainties of a dynamic environment. This study demonstrates the superiority of the proposed deep Q-network (DQN) algorithm over existing Q-learning, heuristic, and random decision-making algorithms.
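To make the offloading decision process concrete, the sketch below shows what a per-UAV DQN agent of the kind the abstract describes might look like. It is a minimal illustration, not the paper's actual formulation: the state features (task size, battery level, link qualities), the three-way action set (compute locally, offload to a neighbor UAV, offload to a base-station edge server), and the reward defined as negative energy consumed are all assumptions chosen to match the abstract's energy-minimization objective.

```python
# Minimal sketch of a DQN-based offloading policy for one UAV agent.
# All dimensions, feature choices, and the reward definition are
# illustrative assumptions, not the paper's actual MDP.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 6   # e.g. task size, battery, neighbor/BS link gains (assumed)
N_ACTIONS = 3   # 0: local compute, 1: offload to neighbor UAV, 2: offload to BS


class QNet(nn.Module):
    """Small MLP mapping the local observation to per-action Q-values."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, s):
        return self.net(s)


q, target_q = QNet(), QNet()
target_q.load_state_dict(q.state_dict())  # sync periodically during training
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)             # replay buffer of (s, a, r, s') tuples
gamma, eps = 0.99, 0.1


def act(state):
    """Epsilon-greedy offloading decision from the current observation."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return q(torch.tensor(state, dtype=torch.float32)).argmax().item()


def learn(batch_size=64):
    """One DQN update; reward is assumed to be negative energy consumed."""
    if len(buffer) < batch_size:
        return
    batch = random.sample(buffer, batch_size)
    s, a, r, s2 = (torch.tensor(x, dtype=torch.float32) for x in zip(*batch))
    q_sa = q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        y = r + gamma * target_q(s2).max(1).values  # bootstrapped target
    loss = nn.functional.mse_loss(q_sa, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the cooperative multi-agent setting the abstract describes, each UAV would run an agent of this shape; how agents share experience or rewards (e.g. a common energy-based reward versus fully independent learners) is a design choice of the paper that this sketch does not attempt to reproduce.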
| Original language | English |
|---|---|
| Journal | IEEE Transactions on Vehicular Technology |
| DOIs | |
| Publication status | Accepted/In press - 2025 |
Bibliographical note
Publisher Copyright: © 2025 IEEE.
Keywords
- UAV swarm
- energy efficiency
- multi-access edge computing
- multi-agent cooperative reinforcement learning