Using Deep Reinforcement Learning for Dynamic Gain Adjustment of a Disturbance Observer
  • Kyunghwan Choi,
  • Hyochan Lee,
  • Wooyong Kim

Increasing estimation accuracy while reducing noise sensitivity is a challenging trade-off in designing disturbance observers (DOBs). The DOB gain tuning process for overcoming this trade-off is not straightforward, nor does it guarantee optimal performance for the resulting DOBs. This paper presents a dynamic gain DOB that intelligently adjusts its gain based on deep reinforcement learning (DRL) to overcome this trade-off. First, a variable gain DOB is designed by modifying the conventional DOB. The variable gain DOB can exponentially estimate a constant disturbance with a varying gain. Then, DRL is used to train a dynamic gain adjuster for the variable gain DOB. A case study demonstrates that the proposed dynamic gain DOB increases its gain only when needed (i.e., when the estimation error is significant) and otherwise decreases the gain to reduce noise. Comparison with the conventional DOB of various constant gains shows that the proposed DOB achieves superior performance.
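As a rough illustration of the idea in the abstract, the sketch below simulates a variable-gain observer estimating a constant disturbance on a first-order plant. The plant model, the specific observer update, and the simple error-magnitude gain rule are all assumptions for illustration; in particular, the threshold-based gain switch is a hypothetical stand-in for the paper's trained DRL adjuster.

```python
def simulate_variable_gain_dob(d_true=2.0, dt=1e-3, steps=5000,
                               gain_lo=5.0, gain_hi=100.0, thresh=0.1):
    """Minimal sketch: variable-gain disturbance observer on a
    first-order plant dx/dt = u + d with constant disturbance d.

    The observer integrates d_hat_dot = L(t) * (dx_meas - u - d_hat),
    so the estimation error e = d - d_hat obeys e_dot = -L(t) * e and
    decays exponentially for any positive gain L(t), matching the
    abstract's claim that a constant disturbance is exponentially
    estimated under a varying gain. The gain rule here is a
    hypothetical heuristic, not the authors' DRL policy: the gain is
    raised only while the innovation (estimation-error proxy) is
    large, and lowered otherwise to attenuate noise.
    """
    u = 0.0       # control input (held at zero for this sketch)
    d_hat = 0.0   # disturbance estimate
    gains = []
    for _ in range(steps):
        dx_meas = u + d_true              # measured plant derivative
        innovation = dx_meas - u - d_hat  # equals d - d_hat
        # high gain only when the estimation error proxy is large
        L = gain_hi if abs(innovation) > thresh else gain_lo
        d_hat += L * innovation * dt      # forward-Euler observer update
        gains.append(L)
    return d_hat, gains

d_hat, gains = simulate_variable_gain_dob()
```

Running this, the gain starts high while the error is large, then drops to the low value once the estimate has converged, mirroring the behavior the case study reports for the DRL-trained adjuster.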
22 Mar 2024: Submitted to TechRxiv
29 Mar 2024: Published in TechRxiv