
Experimental validation of an Actor-Critic Model Predictive Force Controller for robot-environment interaction tasks
  • Alessandro Pozzi,
  • Enrico Ferrentino,
  • Vincenzo Petrone (University of Salerno; Corresponding Author: [email protected]),
  • Luca Puricelli,
  • Pasquale Chiacchio,
  • Francesco Braghin,
  • Loris Roveda

Abstract

In industrial settings, robots are typically employed to accurately track a reference force exerted on the surrounding environment to complete interaction tasks.
Interaction controllers are commonly used to achieve this goal. However, they either require time-consuming manual tuning or an exact model of the environment the robot will interact with, and may therefore fail in the actual application.
A significant advancement in this area would be a high-performance force controller that needs no operator calibration and can be quickly deployed in any scenario.
With this aim, this paper proposes an Actor-Critic Model Predictive Force Controller (ACMPFC), which relies on continuously trained neural networks to output the optimal setpoint to follow in order to guarantee force tracking.
The strategy extends a reinforcement learning-based approach, originally developed in the context of human-robot collaboration, suitably adapting it to robot-environment interaction.
We validate the ACMPFC in a real-world scenario featuring a Franka Emika Panda robot.
Compared with a base force controller and a learning-based approach, the proposed controller reduces the force-tracking mean squared error (MSE) while attaining fast convergence: with respect to the base force controller, the ACMPFC reduces the MSE by a factor of 4.35.
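
The core idea summarized above, namely continuously trained actor and critic networks that output a motion setpoint minimizing the force-tracking error, can be illustrated with a minimal sketch. The elastic-contact environment, network sizes, learning rates, and reward definition below are illustrative assumptions, not the authors' implementation; the "model predictive" component of the actual controller, which optimizes the setpoint over a prediction horizon, is omitted here.

```python
# Minimal, hypothetical sketch of the actor-critic idea from the abstract,
# NOT the authors' code: an actor proposes a setpoint increment and a critic
# scores it, both trained online to minimize the force-tracking error.
import torch
import torch.nn as nn

K_ENV = 500.0  # assumed environment stiffness [N/m]

actor = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))   # (f_ref, f_meas) -> setpoint increment
critic = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))  # (f_ref, f_meas, action) -> value
opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)

f_ref, x = 10.0, 0.0  # reference force [N], current penetration [m]
for _ in range(2000):
    f_meas = K_ENV * x  # simplistic elastic contact: force proportional to penetration
    state = torch.tensor([f_ref, f_meas])

    # Actor proposes a setpoint increment; here the robot tracks it perfectly.
    dx = actor(state)
    f_next = K_ENV * (x + dx)
    reward = -(f_ref - f_next) ** 2  # penalize force-tracking error

    # Critic regression: predict the reward of the (state, action) pair.
    value = critic(torch.cat([state, dx.detach()]))
    critic_loss = ((value - reward.detach()) ** 2).mean()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

    # Actor update: ascend the critic's value (deterministic policy gradient).
    actor_loss = -critic(torch.cat([state, actor(state)])).mean()
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()

    x = float(x + dx.detach())  # apply the commanded setpoint increment
```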
21 Mar 2024: Submitted to TechRxiv
29 Mar 2024: Published in TechRxiv