Abstract
In the last few decades, dynamic job scheduling problems (DJSPs) have received increasing attention from researchers and practitioners. However, the potential of reinforcement learning (RL) methods has not been adequately exploited for solving DJSPs. In this work, a deep Q-network (DQN) model is applied to train an agent to schedule jobs dynamically while minimizing job delay time. The DQN model is trained in a discrete-event simulation experiment. The trained model is evaluated against two popular dispatching rules, shortest processing time and earliest due date. The obtained results indicate that the DQN model outperforms both dispatching rules.
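The two baseline dispatching rules mentioned in the abstract are standard heuristics: shortest processing time (SPT) selects the queued job with the smallest processing time, while earliest due date (EDD) selects the job whose due date is nearest. A minimal sketch, using hypothetical job data (the tuples and field order below are illustrative, not from the paper):

```python
# Hypothetical job records: (job_id, processing_time, due_date)
jobs = [("J1", 5, 20), ("J2", 3, 25), ("J3", 8, 10)]

def spt(queue):
    """Shortest processing time: dispatch the job with the smallest processing time."""
    return min(queue, key=lambda job: job[1])

def edd(queue):
    """Earliest due date: dispatch the job with the nearest due date."""
    return min(queue, key=lambda job: job[2])

print(spt(jobs)[0])  # J2 (processing time 3)
print(edd(jobs)[0])  # J3 (due date 10)
```

In the paper's setup, the DQN agent learns a dispatching policy from simulated experience rather than following one fixed rule like these.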
Original language | English |
---|---|
Title of host publication | Proceedings of the 2020 Winter Simulation Conference, WSC 2020 |
Editors | K.-H. Bae, B. Feng, S. Kim, S. Lazarova-Molnar, Z. Zheng, T. Roeder, R. Thiesing |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 1551-1559 |
Number of pages | 9 |
ISBN (Electronic) | 9781728194998 |
DOIs | |
Publication status | Published - 14 Dec 2020 |
Event | 2020 Winter Simulation Conference, WSC 2020 - Orlando, United States Duration: 14 Dec 2020 → 18 Dec 2020 |
Publication series
Name | Proceedings - Winter Simulation Conference |
---|---|
Volume | 2020-December |
ISSN (Print) | 0891-7736 |
Conference
Conference | 2020 Winter Simulation Conference, WSC 2020 |
---|---|
Country/Territory | United States |
City | Orlando |
Period | 14/12/20 → 18/12/20 |
Bibliographical note
Publisher Copyright: © 2020 IEEE.