Abstract
In the future, the load demand due to charging of large numbers of electric vehicles (EVs) may reach levels that existing networks in some regions cannot accommodate. Therefore, radical grid-modernization measures will be required to overcome technical and economic problems, as well as bureaucratic issues. Amendments to electrical energy regulations and new tariff schemes can be considered within this scope. Smart charging of EVs is rarely addressed with reinforcement learning (RL), even though it is one of the most effective methods for solving such decision-making problems. Most studies on this topic focus on defining the state and action spaces and on tuning the penalty coefficients of the RL models they develop. In this paper, we solve the EV charging problem using expected SARSA with a novel rewarding strategy and propose a new approach to determining the state and action spaces. The efficacy of the proposed method is demonstrated on the problem of charging a single EV by comparing it with several alternatives based on Q-learning and constant-charging approaches.
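As a brief illustration of the learning rule the abstract refers to, the sketch below shows a generic tabular expected SARSA update under an epsilon-greedy behaviour policy. It is a minimal sketch only: the state/action discretization, reward design, and all numeric parameters shown here are placeholders and do not reproduce the paper's actual formulation.

```python
import numpy as np

# Hypothetical discretization: the paper's actual state and action spaces
# and its rewarding strategy are not specified here; these are placeholders.
N_STATES = 100    # e.g. discretized (time slot, battery SOC) pairs
N_ACTIONS = 4     # e.g. charging power levels, including "do not charge"

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # illustrative values only

Q = np.zeros((N_STATES, N_ACTIONS))

def epsilon_greedy_probs(q_row, epsilon=EPSILON):
    """Action probabilities of an epsilon-greedy policy over one Q-table row."""
    probs = np.full(q_row.shape, epsilon / q_row.size)
    probs[np.argmax(q_row)] += 1.0 - epsilon
    return probs

def expected_sarsa_update(s, a, r, s_next):
    """One tabular expected SARSA step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * E_pi[Q(s',.)] - Q(s,a))."""
    expected_q_next = np.dot(epsilon_greedy_probs(Q[s_next]), Q[s_next])
    Q[s, a] += ALPHA * (r + GAMMA * expected_q_next - Q[s, a])
```

A Q-learning baseline of the kind the abstract compares against would replace the expectation over the policy with the maximum of `Q[s_next]`; expected SARSA instead averages over the behaviour policy, which typically reduces update variance.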
Original language | English |
---|---|
Pages (from-to) | 3933-3942 |
Number of pages | 10 |
Journal | Electrical Engineering |
Volume | 104 |
Issue number | 6 |
DOIs | |
Publication status | Published - Dec 2022 |
Bibliographical note
Publisher Copyright: © 2022, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
Keywords
- Demand-side management
- Electric vehicles
- Expected SARSA
- Markov decision process
- Q-learning
- Reinforcement learning
- Smart charging