Guided Soft Actor Critic: A Guided Deep Reinforcement Learning Approach for Partially Observable Markov Decision Processes

Mehmet Haklidir*, Hakan Temeltas

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

11 Citations (Scopus)

Abstract

Most real-world problems are inherently partially observable, and the model of the environment is unknown. There is therefore a significant need for reinforcement learning approaches that can solve such problems, in which the agent perceives the state of the environment only partially and noisily. Guided reinforcement learning methods address this issue by providing additional state knowledge to reinforcement learning algorithms during the learning process, allowing them to solve a partially observable Markov decision process (POMDP) more effectively. However, these guided approaches are relatively rare in the literature, and most existing ones are model-based, meaning that they first require learning an appropriate model of the environment. In this paper, we propose a novel model-free approach that combines the soft actor-critic method with the supervised learning concept to solve real-world problems formulated as POMDPs. In experiments performed on OpenAI Gym, an open-source simulation platform, our guided soft actor-critic approach outperformed other baseline algorithms, gaining 720% more maximum average return on five partially observable tasks constructed from continuous control problems and simulated in MuJoCo.
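To make the combination of soft actor-critic and supervised guidance concrete, the following is a minimal, hypothetical sketch (not the authors' code): a SAC actor that only receives a partial observation is additionally pulled toward the actions of a guide policy with privileged access to the full state, via a supervised imitation term added to the usual SAC actor objective. The class and function names, the critic signature `critic(obs, act) -> Q`, and the weighting coefficients `alpha` and `beta` are all assumptions for illustration.

```python
# Hypothetical sketch of a "guided" SAC actor update under partial observability.
# Assumptions: critic(obs, act) returns Q-values; guide is a policy trained with
# full state access; alpha/beta weightings are illustrative, not from the paper.
import torch
import torch.nn as nn

class GaussianActor(nn.Module):
    """Squashed-Gaussian policy over a (possibly partial) observation."""
    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, act_dim)
        self.log_std = nn.Linear(hidden, act_dim)

    def forward(self, obs):
        h = self.net(obs)
        mu, log_std = self.mu(h), self.log_std(h).clamp(-20, 2)
        dist = torch.distributions.Normal(mu, log_std.exp())
        u = dist.rsample()                 # reparameterised sample
        a = torch.tanh(u)                  # squash action to [-1, 1]
        # log-probability with tanh change-of-variables correction
        logp = dist.log_prob(u).sum(-1) - torch.log(1 - a.pow(2) + 1e-6).sum(-1)
        return a, logp

def guided_actor_loss(actor, critic, guide, partial_obs, full_state,
                      alpha=0.2, beta=1.0):
    """Standard SAC actor loss plus a supervised term that imitates a guide
    policy which observes the full state (the 'guidance' idea)."""
    a, logp = actor(partial_obs)
    q = critic(partial_obs, a)                        # assumed signature
    sac_term = (alpha * logp - q).mean()              # usual SAC actor objective
    with torch.no_grad():
        guide_a, _ = guide(full_state)                # privileged guide action
    guided_term = nn.functional.mse_loss(a, guide_a)  # supervised imitation
    return sac_term + beta * guided_term
```

In this reading, `beta` trades off reward maximization against imitation of the privileged guide; as `beta` goes to zero the update reduces to plain SAC on partial observations.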

Original language: English
Pages (from-to): 159672-159683
Number of pages: 12
Journal: IEEE Access
Volume: 9
Publication status: Published - 2021

Bibliographical note

Publisher Copyright:
© 2013 IEEE.

Keywords

  • Deep reinforcement learning
  • guided policy search
  • POMDP
