TY - JOUR
T1 - Enhancing Robustness of Locomotion Policy for Quadrupedal Robot With Deep Disturbance Observer
AU - Muhamad, Fikih
AU - Kusuma, Anak Agung Krisna Ananda
AU - Park, Jae Han
AU - Kim, Jung Su
N1 - Publisher Copyright:
© 2016 IEEE.
PY - 2025
Y1 - 2025
N2 - This letter proposes a control framework that enhances the robustness of a locomotion policy against uncertainties by integrating it with a deep disturbance observer (DOB) network and a deep state estimator network. The deep DOB approximates the inverse model of a quadrupedal robot. The locomotion policy is trained to produce optimal actions, while the deep DOB estimates the robot's overall uncertainties and the deep state estimator estimates the body's linear velocities. All networks are trained under nominal conditions in IsaacGym. The trained networks are then transferred, without additional tuning, to Gazebo and to a real robot running ROS2 to validate their robustness under uncertain conditions. Validation results show that the proposed control framework achieves the best velocity tracking, with the lowest estimation errors, compared to the baseline method. This underscores the effectiveness of the proposed control framework in improving the robustness of the locomotion policy.
AB - This letter proposes a control framework that enhances the robustness of a locomotion policy against uncertainties by integrating it with a deep disturbance observer (DOB) network and a deep state estimator network. The deep DOB approximates the inverse model of a quadrupedal robot. The locomotion policy is trained to produce optimal actions, while the deep DOB estimates the robot's overall uncertainties and the deep state estimator estimates the body's linear velocities. All networks are trained under nominal conditions in IsaacGym. The trained networks are then transferred, without additional tuning, to Gazebo and to a real robot running ROS2 to validate their robustness under uncertain conditions. Validation results show that the proposed control framework achieves the best velocity tracking, with the lowest estimation errors, compared to the baseline method. This underscores the effectiveness of the proposed control framework in improving the robustness of the locomotion policy.
KW - deep learning
KW - disturbance observer (DO)
KW - legged robot
KW - robust locomotion policy
UR - https://www.scopus.com/pages/publications/105012283932
U2 - 10.1109/LRA.2025.3595037
DO - 10.1109/LRA.2025.3595037
M3 - Article
AN - SCOPUS:105012283932
SN - 2377-3766
VL - 10
SP - 9376
EP - 9383
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 9
ER -