Abstract
Computation offloading is a promising technique that enables devices with limited computing resources to run delay-constrained applications efficiently. Vehicular edge computing brings processing capability into vehicles, which can then provide computing services to other vehicles through computation offloading. Mobility, however, alters the communication environment and poses critical challenges for computation offloading. In this paper, we consider an intelligent task offloading scenario for vehicular environments comprising smart vehicles and roadside units, which can cooperate to share resources. Aiming to minimize the average offloading cost, which accounts for energy consumption as well as delay in the transmission and processing phases, we formulate task offloading as an optimization problem and implement an algorithm based on deep reinforcement learning with Double Q-learning. The algorithm allows user equipment to learn the offloading cost by observing the environment and to make stable sequences of offloading decisions despite environmental uncertainty. In addition, to cope with the high mobility of the environment, we propose a handover-enabled computation offloading strategy that improves quality of service and experience for users in beyond-5G and 6G heterogeneous networks. Simulation results demonstrate that the proposed scheme achieves lower offloading cost than existing offloading decision strategies in the literature.
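The abstract names deep reinforcement learning with Double Q-learning as the decision engine. As a rough orientation only, the following Python sketch implements the tabular Double Q-learning update rule on which the deep (DDQN) variant is built: two estimators are maintained, one selecting and the other evaluating the greedy action, which damps the overestimation bias of plain Q-learning. The state/action spaces, the placeholder environment, and the cost model below are illustrative assumptions, not the paper's formulation; the deep variant replaces the tables with neural-network function approximators.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 10, 3      # illustrative: channel/queue states x {local, RSU, vehicle}
alpha, gamma, eps = 0.1, 0.9, 0.1

Q_a = np.zeros((n_states, n_actions))
Q_b = np.zeros((n_states, n_actions))

def step(state, action):
    # Placeholder environment: reward is the negative offloading cost,
    # standing in for a weighted sum of delay and energy consumption.
    cost = rng.uniform(0.1, 1.0) * (1 + action)
    return int(rng.integers(n_states)), -cost

s = 0
for _ in range(5000):
    # epsilon-greedy action selection over the sum of both estimators
    Q = Q_a + Q_b
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    s_next, r = step(s, a)
    # Double Q-learning: one table picks the greedy next action,
    # the other table evaluates it.
    if rng.random() < 0.5:
        a_star = int(np.argmax(Q_a[s_next]))
        Q_a[s, a] += alpha * (r + gamma * Q_b[s_next, a_star] - Q_a[s, a])
    else:
        b_star = int(np.argmax(Q_b[s_next]))
        Q_b[s, a] += alpha * (r + gamma * Q_a[s_next, b_star] - Q_b[s, a])
    s = s_next
```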
| Original language | English |
| --- | --- |
| Pages (from-to) | 9394-9405 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Vehicular Technology |
| Volume | 72 |
| Issue number | 7 |
| DOIs | |
| Publication status | Published - 1 Jul 2023 |
Bibliographical note
Publisher Copyright: © 1967-2012 IEEE.
Keywords
- Computation offloading
- handover
- intelligent transportation
- reinforcement learning
- vehicular edge computing