Addressing Challenges in Dynamic Modeling of Stewart Platform using Reinforcement Learning-Based Control Approach

Hadi Yadavari*, Vahid TAVAKOL Aghaei, Serhat Ikizoglu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


In this paper, we focus on enhancing the performance of the controller used in the Stewart platform by investigating the platform's dynamics. Dynamic modeling is crucial for control and simulation, yet challenging for parallel robots such as the Stewart platform because of their closed-loop kinematics. We explore classical methods for solving the inverse dynamic model, but conventional approaches face difficulties and often yield simplified, inaccurate models. To overcome this limitation, we propose a novel approach that replaces the classical feedforward inverse-dynamics block with a reinforcement learning (RL) agent, which, to our knowledge, has not yet been attempted in the context of Stewart platform control. Our methodology employs a hybrid control topology that combines RL with existing classical control topologies and inverse kinematic modeling. We leverage three deep reinforcement learning (DRL) algorithms and two model-based RL algorithms to achieve improved control performance, highlighting the versatility of the proposed approach. By incorporating the learned feedforward control topology into the existing PID controller, we demonstrate enhancements in the overall control performance of the Stewart platform. Notably, our approach eliminates the need to explicitly derive and solve the inverse dynamic model, avoiding the drawbacks associated with inaccurate and simplified models. Through several simulations and experiments, we validate the effectiveness of our reinforcement learning-based control approach for the dynamic modeling of the Stewart platform. The results highlight the potential of RL techniques to overcome the challenges associated with dynamic modeling in parallel robot systems, promising improved control performance. This enhances accuracy and reduces the development time of control algorithms in real-world applications. Nonetheless, the approach requires a simulation step before practical implementation.
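The hybrid topology described above, where a learned feedforward term is summed with the output of an existing PID feedback controller, can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the plant is a hypothetical single mass-damper actuator standing in for one Stewart-platform leg, all gains and the `learned_feedforward` function are assumptions (here an idealized inverse-dynamics stand-in for the trained RL agent's policy output).

```python
import math

# Hypothetical 1-DOF plant standing in for one platform leg:
# m * x'' + c * x' = u  (mass-damper driven by actuator force u).
M, C, DT = 2.0, 4.0, 0.01

def plant_step(x, v, u):
    """Advance the toy plant one Euler step under actuator force u."""
    a = (u - C * v) / M
    return x + v * DT, v + a * DT

def learned_feedforward(xd_dot, xd_ddot):
    """Stand-in for the trained RL agent's feedforward action.

    For illustration we assume it has learned the exact inverse
    dynamics of the toy plant; the real agent only approximates this.
    """
    return M * xd_ddot + C * xd_dot

def track(use_feedforward, horizon=2.0):
    """Track a sinusoidal reference; return integrated absolute error."""
    x = v = integ = prev_err = 0.0
    total_abs_err = 0.0
    for k in range(int(horizon / DT)):
        t = k * DT
        # Reference trajectory and its derivatives (assumed known).
        xd, xd_dot, xd_ddot = math.sin(t), math.cos(t), -math.sin(t)
        err = xd - x
        integ += err * DT
        # PID feedback (gains chosen for the toy plant, not the paper's).
        u = 80.0 * err + 20.0 * integ + 5.0 * (err - prev_err) / DT
        if use_feedforward:
            u += learned_feedforward(xd_dot, xd_ddot)  # hybrid topology
        prev_err = err
        x, v = plant_step(x, v, u)
        total_abs_err += abs(err) * DT
    return total_abs_err

# The feedforward-augmented loop should track the reference with a
# smaller integrated error than feedback alone.
print(track(True) < track(False))
```

The point of the sketch is the single line where the agent's action is added to the PID output: the feedback loop remains in place as a safety net, while the learned term supplies the model-dependent effort that the classical inverse dynamic block would otherwise have to provide.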

Original language: English
Pages (from-to): 117-131
Number of pages: 15
Journal: Journal of Robotics and Control (JRC)
Issue number: 1
Publication status: Published - 2024

Bibliographical note

Publisher Copyright:
© 2024 Department of Agribusiness, Universitas Muhammadiyah Yogyakarta. All rights reserved.


  • Control
  • Deep Learning
  • Dynamic Modelling
  • Stewart Platform
  • Reinforcement Learning
