
A Markov chain Monte Carlo algorithm for Bayesian policy search

  • Vahid Tavakol Aghaei*
  • Ahmet Onat
  • Sinan Yıldırım

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

8 Citations (Scopus)

Abstract

Policy search algorithms have facilitated the application of Reinforcement Learning (RL) to dynamic systems, such as control of robots. Many policy search algorithms are based on the policy gradient, and thus may suffer from slow convergence or local optima complications. In this paper, we take a Bayesian approach to policy search under the RL paradigm, for the problem of controlling a discrete-time Markov decision process with continuous state and action spaces and with a multiplicative reward structure. For this purpose, we assume a prior over the policy parameters and aim for the ‘posterior’ distribution where the ‘likelihood’ is the expected reward. We propound a Markov chain Monte Carlo algorithm as a method of generating samples for the policy parameters from this posterior. The proposed algorithm is compared with certain well-known policy gradient-based RL methods and exhibits better performance in terms of time response and convergence rate, when applied to a nonlinear model of a Cart-Pole benchmark.
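The core construction in the abstract — treating the expected reward as a 'likelihood' and sampling policy parameters from the resulting 'posterior' with MCMC — can be illustrated with a minimal random-walk Metropolis-Hastings sketch. This is not the paper's algorithm: the `expected_reward` function below is a hypothetical one-dimensional surrogate (standing in for a Monte Carlo estimate of the return of a parameterized policy), and the Gaussian prior and step size are illustrative assumptions.

```python
import math
import random

def expected_reward(theta):
    # Hypothetical surrogate: reward peaks at theta = 2.0. In the paper's
    # setting this would be an estimate of the expected return of the
    # policy parameterized by theta on the controlled system.
    return math.exp(-0.5 * (theta - 2.0) ** 2)

def log_prior(theta):
    # Assumed standard normal prior over the policy parameter.
    return -0.5 * theta ** 2

def log_target(theta):
    # 'Posterior' is proportional to prior times expected reward,
    # mirroring the likelihood-as-reward construction in the abstract.
    return log_prior(theta) + math.log(expected_reward(theta))

def metropolis_hastings(n_samples, step=0.5, seed=0):
    # Random-walk Metropolis-Hastings over the policy parameter.
    rng = random.Random(seed)
    theta = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)
        # Accept with probability min(1, target(proposal) / target(theta)).
        if math.log(rng.random()) < log_target(proposal) - log_target(theta):
            theta = proposal
        samples.append(theta)
    return samples
```

With this toy target, the posterior is Gaussian with mean 1.0 (the prior at 0 and the reward peak at 2 pull equally), so the sample mean after burn-in should settle near 1; good policies are those the sampler visits often.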

Original language: English
Pages (from-to): 438-455
Number of pages: 18
Journal: Systems Science and Control Engineering
Volume: 6
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2018
Published externally: Yes

Bibliographic note

Publisher Copyright:
© 2018 The Author(s).
