

Model-Based Safe Reinforcement Learning with Time-Varying State and Control Constraints: An Application to Intelligent Vehicles
  • Xinglong Zhang,
  • Yaoqian Peng,
  • Biao Luo,
  • Wei Pan,
  • Xin Xu,
  • Haibin Xie
Xinglong Zhang
National University of Defense Technology

Corresponding Author: [email protected]


Abstract

Recently, barrier function-based safe reinforcement learning (RL) with an actor-critic structure for continuous control tasks has received increasing attention. However, it remains challenging to learn a near-optimal control policy with safety and convergence guarantees, and few works have addressed safe RL algorithm design under time-varying safety constraints. This paper proposes a model-based safe RL algorithm for the optimal control of nonlinear systems with time-varying state and control constraints. In the proposed approach, we construct a novel barrier-based control policy structure that guarantees control safety. A multi-step policy evaluation mechanism is proposed to predict the policy's safety risk under time-varying safety constraints and to guide the policy to update safely. Theoretical results on stability and robustness are proven, and the convergence of the actor-critic learning algorithm is analyzed. The proposed algorithm outperforms several state-of-the-art RL algorithms in the simulated Safety Gym environment. Furthermore, the approach is applied to an integrated path-following and collision-avoidance problem on two real-world intelligent vehicles: a differential-drive vehicle and an Ackermann-drive vehicle are used to verify the offline deployment performance and the online learning performance, respectively. Our approach shows impressive sim-to-real transfer capability and satisfactory online control performance in the experiments.
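
For illustration only, the sketch below shows one plausible reading of the two ingredients named in the abstract: a barrier-based policy structure that keeps controls inside (possibly time-varying) bounds, and a multi-step rollout that scores a policy's safety risk against a time-varying state constraint. This is a minimal sketch, not the authors' implementation; the function names (barrier_policy, multi_step_safety_risk), the tanh squashing and log-barrier forms, and the single-integrator dynamics are all assumptions introduced here.

```python
import numpy as np

def barrier_policy(u_raw, u_min, u_max):
    """Squash an unconstrained actor output into the (possibly time-varying)
    control bounds [u_min, u_max] with a smooth tanh transformation, so the
    applied control always satisfies the control constraint. (Assumed form.)"""
    u_mid = 0.5 * (u_min + u_max)
    u_half = 0.5 * (u_max - u_min)
    return u_mid + u_half * np.tanh(u_raw)

def log_barrier(h, eps=1e-6):
    """Log-barrier penalty for a state-constraint value h(x, t) >= 0; the
    penalty grows without bound as the constraint boundary is approached."""
    return -np.log(np.maximum(h, eps))

def multi_step_safety_risk(x0, policy, dynamics, constraint, horizon=10, dt=0.1):
    """Roll the current policy forward over a short horizon and accumulate the
    barrier penalty of the time-varying state constraint; a large value flags
    a risky policy update. (Hypothetical stand-in for the paper's mechanism.)"""
    x, risk = np.array(x0, dtype=float), 0.0
    for k in range(horizon):
        t = k * dt
        u = policy(x, t)
        risk += log_barrier(constraint(x, t))
        x = x + dt * dynamics(x, u)  # explicit Euler rollout of the model
    return risk

if __name__ == "__main__":
    # Toy usage: single-integrator dynamics and a time-varying bound on |x|.
    dynamics = lambda x, u: u
    constraint = lambda x, t: 2.0 + 0.5 * np.sin(t) - abs(x[0])
    policy = lambda x, t: barrier_policy(-x, u_min=-1.0, u_max=1.0)
    print("safety risk:", multi_step_safety_risk([1.5], policy, dynamics, constraint))
```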