Abstract
Recent developments in unmanned aerial vehicle (UAV) technology have given UAVs more processing and storage resources, paving the way for edge computing-enabled UAV networks. In this paper, we propose a cooperative multi-agent reinforcement learning-based computation offloading framework for a UAV swarm. Flying UAVs with missions can offload part of their tasks to neighboring UAVs or to fixed edge servers at terrestrial base stations, reducing the total energy consumption of all devices during a core mission. Our framework helps UAVs form stable sequences of offloading decisions under the uncertainties of a dynamic environment. This study demonstrates the superiority of the proposed deep Q-network (DQN) algorithm over existing Q-learning, heuristic, and random decision-making algorithms.
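The per-UAV offloading decision the abstract describes can be sketched with a minimal DQN-style agent. This is an illustrative toy, not the paper's implementation: the state features, the three actions (compute locally, offload to a neighboring UAV, offload to a terrestrial edge server), and the `toy_energy_cost` model are all assumptions made for this sketch, and the experience replay and target network used by standard DQN are omitted for brevity.

```python
import numpy as np

# Toy DQN-style agent for a single UAV's offloading decision (illustrative only).
# Actions: 0 = compute locally, 1 = offload to neighboring UAV, 2 = offload to edge server.
# State (assumed features): [task size, battery level, neighbor load, channel quality].

rng = np.random.default_rng(0)

N_STATE, N_HIDDEN, N_ACTION = 4, 16, 3
GAMMA, LR, EPS = 0.9, 1e-2, 0.1

# Two-layer Q-network parameters.
W1 = rng.normal(0, 0.1, (N_STATE, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.1, (N_HIDDEN, N_ACTION))
b2 = np.zeros(N_ACTION)

def q_values(s):
    h = np.maximum(0.0, s @ W1 + b1)   # ReLU hidden layer
    return h, h @ W2 + b2              # hidden activations, Q(s, ·)

def act(s):
    # Epsilon-greedy offloading decision.
    if rng.random() < EPS:
        return int(rng.integers(N_ACTION))
    return int(np.argmax(q_values(s)[1]))

def toy_energy_cost(s, a):
    # Hypothetical energy model: offloading pays off for large tasks,
    # but is penalized by neighbor load or a poor channel.
    task, battery, neighbor_load, channel = s
    local = 1.0 * task
    to_uav = 0.3 * task + 0.5 * neighbor_load
    to_edge = 0.2 * task + 0.6 * (1.0 - channel)
    return [local, to_uav, to_edge][a]

def train_step(s, a, r, s_next):
    global W1, b1, W2, b2
    h, q = q_values(s)
    _, q_next = q_values(s_next)
    target = r + GAMMA * np.max(q_next)   # one-step Q-learning target
    td = q[a] - target
    # Manual gradient of 0.5 * td**2 w.r.t. the network parameters.
    gq = np.zeros(N_ACTION); gq[a] = td
    gW2, gb2 = np.outer(h, gq), gq
    gh = (W2 @ gq) * (h > 0)
    gW1, gb1 = np.outer(s, gh), gh
    W2 -= LR * gW2; b2 -= LR * gb2
    W1 -= LR * gW1; b1 -= LR * gb1

# Train on random toy states; reward is negative energy consumption.
for _ in range(2000):
    s = rng.random(N_STATE)
    a = act(s)
    train_step(s, a, -toy_energy_cost(s, a), rng.random(N_STATE))

# Greedy decision for a large task with a good channel.
big_task = np.array([1.0, 0.5, 0.5, 0.9])
print(int(np.argmax(q_values(big_task)[1])))
```

Under this toy energy model, a large task with a good channel has the lowest cost at the edge server, so a trained agent should tend toward action 2 for such states; the paper's actual system model and reward design are, of course, richer than this sketch.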
| Original language | English |
|---|---|
| Pages (from-to) | 5239-5243 |
| Number of pages | 5 |
| Journal | IEEE Transactions on Vehicular Technology |
| Volume | 75 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - Mar 2026 |
Bibliographic note
Publisher Copyright: © 1967-2012 IEEE.
UN SDGs
This output contributes to the following Sustainable Development Goal(s):
- SDG 7 Affordable and Clean Energy
Fingerprint
Dive into the research topics of 'Energy-Efficient Offloading Decision for Beyond-5G Multi-Access Edge Computing-Enabled UAV Swarms'. Together they form a unique fingerprint.