Abstract
As the complexity of wireless mobile networks increases, artificial intelligence (AI) and machine learning (ML) have become key enablers for radio resource management and orchestration. In this paper, we propose a multi-agent reinforcement learning (RL) method for allocating radio resources to mobile users under random traffic arrivals, jointly considering Ultra-Reliable Low-Latency Communications (URLLC) and enhanced Mobile Broadband (eMBB) services. The proposed system comprises hierarchically arranged RL agents: a main agent at the upper level performs inter-slice resource allocation between the URLLC and eMBB slices, while URLLC and eMBB sub-agents allocate resources within their own slices, with the objective of maximizing eMBB throughput while satisfying the latency requirements of the URLLC slice. In the RL formulation, the state space captures the queue occupancy and channel quality of the mobile users, and the action space specifies the resource allocation to the users. To make RL training computationally efficient, the state space is significantly reduced by quantizing the queue occupancy and grouping users according to their channel qualities. The numerical results show that the proposed RL-based approach achieves an average URLLC delay below 1 ms in all experiments, while the worst-case eMBB throughput degradation is limited to 4%.
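The state-space reduction described in the abstract (quantizing queue occupancy and grouping users by channel quality) can be illustrated with a minimal sketch. All function names, quantization levels, and SNR thresholds below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def build_state(queue_occupancy, snr_db, num_queue_levels=4, snr_bins=(5.0, 15.0)):
    """Hypothetical sketch of the state-space reduction: quantize each user's
    queue occupancy (given as a fraction of the buffer, in [0, 1]) into a few
    discrete levels, and group users into channel-quality classes by SNR.
    Thresholds and level counts are illustrative assumptions."""
    # Quantize queue occupancy into integer levels 0 .. num_queue_levels-1.
    q_levels = np.minimum(
        (np.asarray(queue_occupancy) * num_queue_levels).astype(int),
        num_queue_levels - 1,
    )
    # Group users by channel quality: 0 = poor, 1 = medium, 2 = good.
    groups = np.digitize(snr_db, snr_bins)
    # Compact state: per channel-quality group, the summed quantized queue level,
    # instead of one state dimension per individual user.
    state = np.zeros(len(snr_bins) + 1, dtype=int)
    for g, q in zip(groups, q_levels):
        state[g] += q
    return state

# Three users: nearly empty / nearly full / half-full queues at low / high / mid SNR.
print(build_state([0.1, 0.9, 0.5], [2.0, 20.0, 10.0]))  # -> [0 2 3]
```

The state dimension then scales with the number of channel-quality groups rather than the number of users, which is what makes the RL training computationally tractable.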
Original language | English |
---|---|
Pages (from-to) | 1 |
Number of pages | 1 |
Journal | IEEE Access |
DOIs | |
Publication status | Accepted/In press - 2024 |
Bibliographical note
Publisher Copyright: Authors
Keywords
- Delays
- eMBB
- Network Slicing
- Radio Access Networks
- Reinforcement Learning
- Resource Allocation
- Resource management
- Signal to noise ratio
- Task analysis
- Throughput
- Ultra reliable low latency communication
- URLLC