Abstract
Reinforcement learning methods are increasingly applied to control problems in robotics, and these algorithms are well suited to the large, continuous state spaces that arise in this domain. Although policy search methods based on stochastic gradient optimization have become successful candidates for coping with challenging robotics and control problems in recent years, they can become unstable when abrupt variations occur in the gradient computations, and they may converge to a locally optimal solution. To avoid these disadvantages, a Markov chain Monte Carlo (MCMC) algorithm for policy learning in the RL setting is proposed. The policy space is explored in a non-contiguous manner such that higher-reward regions have a higher probability of being visited. The proposed algorithm is applied in a risk-sensitive setting where the reward structure is multiplicative. The method is model-free and gradient-free, and it is suitable for real-world implementation. Its merits are demonstrated through experimental evaluations on a 2-degree-of-freedom robot arm, which show that the algorithm performs a thorough search of the policy space while maintaining adequate control performance and learns a complex trajectory-control task within a small, finite number of iteration steps.
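To make the sampling idea concrete, here is a minimal Python sketch of a Metropolis–Hastings-style policy search in the spirit the abstract describes: policy parameters are perturbed by a random-walk proposal, and moves are accepted with probability tied to the exponentiated reward gain, so higher-reward regions of the policy space are visited more often without any gradient information. The rollout interface, Gaussian proposal, Boltzmann-style acceptance rule, and all hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of gradient-free, model-free MCMC policy search.
# Target distribution over policy parameters: pi(theta) ~ exp(R(theta) / temp),
# sampled with a symmetric Gaussian random-walk Metropolis-Hastings kernel.
import numpy as np

def evaluate_policy(theta, rollout_fn, n_rollouts=5):
    """Model-free return estimate: average reward over a few rollouts."""
    return np.mean([rollout_fn(theta) for _ in range(n_rollouts)])

def mcmc_policy_search(rollout_fn, dim, n_iters=200, step=0.1, temp=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    theta = np.zeros(dim)                     # current policy parameters
    reward = evaluate_policy(theta, rollout_fn)
    best_theta, best_reward = theta.copy(), reward
    for _ in range(n_iters):
        proposal = theta + step * rng.standard_normal(dim)  # random-walk move
        prop_reward = evaluate_policy(proposal, rollout_fn)
        # Accept with probability min(1, exp((R' - R) / temp)): higher-reward
        # regions are visited more often, but occasional downhill moves keep
        # the search non-local, reducing the risk of a locally optimal policy.
        if np.log(rng.uniform()) < (prop_reward - reward) / temp:
            theta, reward = proposal, prop_reward
            if reward > best_reward:
                best_theta, best_reward = theta.copy(), reward
    return best_theta, best_reward

# Toy usage on a synthetic reward landscape (stand-in for robot-arm rollouts):
theta, r = mcmc_policy_search(lambda th: -np.sum((th - 1.0) ** 2), dim=4)
```

Because acceptance depends only on sampled returns, the same scheme applies unchanged to the risk-sensitive, multiplicative-reward setting: the rollout function simply returns the multiplicative return of an episode.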
Original language | English |
---|---|
Pages (from-to) | 580-590 |
Number of pages | 11 |
Journal | ISA Transactions |
Volume | 125 |
DOIs | |
Publication status | Published - Jun 2022 |
Externally published | Yes |
Bibliographical note
Publisher Copyright: © 2021 ISA
Funding
The authors would like to thank Prof. Volkan Patoğlu and Dr. Mustafa Yalçın for their sincere help and efforts during the preparation of this work.
Keywords
- Bayesian learning
- Intelligent control
- Markov chain Monte Carlo
- Policy search
- Reinforcement learning