Abstract
We consider a Markov decision process (MDP) model of a real-world food delivery service in which the objective is to maximize the revenue derived from served requests, given a limited number of couriers over a period of time. The model incorporates the courier locations, order origins, and order destinations. Each courier's task is to pick up an assigned order and deliver it to the requested destination. We apply three approaches to this problem. In the first, we simplify the model to a one-courier case and solve it using Q-learning; under the assumption that all couriers are identical, the resulting policy is applied to each courier in the multi-courier model. In the second, we follow the same logic but solve the underlying one-courier model with Double Deep Q-Networks (DDQN). In the third, we consider the extensive model, in which a system state consists of the positions of all couriers and all orders, and solve it with DDQN. Policies generated by these approaches are compared against a benchmark rule-based policy. We observe that the policy obtained by training a single courier with Q-learning accumulates higher rewards than the rule-based policy. Moreover, DDQN for a single courier outperforms both the Q-learning and rule-based approaches, although its performance is highly sensitive to the algorithm's hyper-parameters.
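As a rough illustration of the two learning components named above, the sketch below shows a tabular Q-learning update and a Double DQN target computation for a single-courier grid world. All specifics (grid size, state encoding, rewards, and hyper-parameters) are illustrative assumptions for this sketch, not the paper's actual formulation or settings.

```python
import numpy as np

# Illustrative single-courier grid world: the courier moves on an N x N grid,
# picks up an order at its origin cell, and delivers it to its destination cell.
N = 5                        # grid side length (assumption)
ACTIONS = 4                  # up, down, left, right (assumption)
ALPHA, GAMMA = 0.1, 0.95     # learning rate and discount (assumptions)

# State: (courier cell, order-origin cell, order-destination cell, carrying flag),
# flattened to a single integer index for the tabular case.
n_states = (N * N) ** 3 * 2
Q = np.zeros((n_states, ACTIONS))

def q_learning_update(s, a, r, s_next):
    """One tabular Q-learning step: Q(s,a) += alpha * (TD target - Q(s,a))."""
    td_target = r + GAMMA * Q[s_next].max()
    Q[s, a] += ALPHA * (td_target - Q[s, a])

def ddqn_target(r, s_next, online_q, target_q):
    """Double DQN target: the online network selects the next action,
    the target network evaluates it (this reduces over-estimation bias).
    online_q and target_q are callables mapping a state to an action-value vector."""
    a_star = int(np.argmax(online_q(s_next)))    # action selection (online net)
    return r + GAMMA * target_q(s_next)[a_star]  # action evaluation (target net)
```

Under the first approach described in the abstract, the single learned table (or network) would simply be replicated across all couriers, consistent with the identical-courier assumption.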
Original language | English
---|---
Article number | 107871
Journal | Computers and Industrial Engineering
Volume | 164
DOIs |
Publication status | Published - Feb 2022
Bibliographical note
Publisher Copyright: © 2021 Elsevier Ltd
Keywords
- Courier assignment
- Courier routing
- DDQN
- DQN
- Q-Learning