Bridging Reinforcement Learning and Iterative Learning Control:
Autonomous Reference Tracking for Unknown, Nonlinear Dynamics
Abstract
This work addresses the problem of reference tracking in autonomously
learning agents with unknown, nonlinear dynamics. Existing solutions
require model information or extensive parameter tuning, and have rarely
been validated in real-world experiments. We propose a learning control
scheme that approximates the unknown dynamics with a Gaussian Process
(GP) and uses this model to optimize and apply a feedforward control
input on each trial. Unlike existing approaches, the proposed method
requires neither knowledge of the system states and their dynamics nor
knowledge of an effective feedback control structure. All algorithm
parameters are chosen automatically, i.e., the learning method works
plug and play. The proposed method is validated in extensive simulations and
real-world experiments. In contrast to most existing work, we study
learning dynamics for more than one motion task, as well as the
robustness of performance across a wide range of learning parameters.
The method's plug-and-play applicability is demonstrated in experiments
with a balancing robot, in which the proposed method rapidly learns to
track the desired output. Owing to its model-agnostic and plug-and-play
properties, the proposed method holds high potential for application to
a large class of reference tracking problems in systems with unknown,
nonlinear dynamics.
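
To illustrate the kind of trial-to-trial learning loop the abstract
describes, the following Python sketch is a deliberately simplified,
hypothetical example and not the paper's algorithm: it assumes a
memoryless nonlinear plant, models the unknown input-output map with
scikit-learn's GaussianProcessRegressor, and replaces the paper's input
optimization with a simple grid search over candidate inputs; the
plant, kernel, and candidate grid are all illustrative choices.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    def plant(u):
        # Hypothetical unknown nonlinear plant (memoryless for brevity);
        # the learner only ever sees input-output data, never this function.
        return np.tanh(u) + 0.1 * u**3 + 0.01 * rng.standard_normal(np.shape(u))

    T = 50                                              # samples per trial
    r = 0.8 * np.sin(np.linspace(0.0, 2.0 * np.pi, T))  # reference trajectory

    u = np.zeros(T)        # feedforward input for the first trial
    U, Y = [], []          # input-output data collected across trials

    for trial in range(8):
        y = plant(u)       # apply the feedforward input, record the output
        U.extend(u)
        Y.extend(y)

        # Fit a GP model of the unknown input-output map from all data so far.
        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                                      normalize_y=True)
        gp.fit(np.asarray(U).reshape(-1, 1), np.asarray(Y))

        # Optimize the next trial's feedforward input: at each sample, pick
        # the candidate whose GP-predicted output is closest to the reference.
        candidates = np.linspace(-2.0, 2.0, 400)
        pred = gp.predict(candidates.reshape(-1, 1))
        u = candidates[np.argmin(np.abs(pred[None, :] - r[:, None]), axis=1)]

        rms = np.sqrt(np.mean((y - r) ** 2))
        print(f"trial {trial}: RMS tracking error = {rms:.4f}")

In this toy setting the RMS tracking error drops sharply after the
first trial, mirroring the rapid trial-to-trial improvement reported in
the abstract; the actual method additionally handles dynamic (stateful)
systems, which this memoryless sketch deliberately omits.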