Shared Autonomy Locomotion Synthesis with Virtual Wearable Robotic Devices
  • Balint Hodossy, Imperial College London (Corresponding Author: [email protected])
  • Dario Farina, Imperial College London

Abstract

Objective: Virtual environments provide a safe and accessible way to test innovative technologies for controlling wearable robotic devices for assisting human movement. However, to apply them to systems that support walking, such as powered prosthetic legs, it is not enough to model the hardware itself. Predictive locomotion synthesizers can generate the movements of a virtual user, with whom the simulated device can be trained or evaluated.
Methods: We implemented a Deep Reinforcement Learning-based motion controller in a physics engine, in which autonomy over the humanoid model is shared between the simulated user and the control policy of an active prosthesis. A data-driven, continuous representation of user intent was used to simulate a Human-Machine Interface controlling a transtibial prosthesis. The system was tested in a complex non-steady-state locomotion task involving turns and stops.
Results: Providing the intent signal to the device control policy did not improve the performance of the human-prosthesis system when both policies were learnt simultaneously. However, when the human walking policy was frozen, the intent-driven prosthesis outperformed its counterpart on non-cyclic gait patterns.
Conclusion: The continuous intent representation was shown to reduce the need for compensatory gait patterns from the virtual user. Co-adaptation was identified as a potential challenge for training prosthesis control policies with a human in the loop.
Significance: The proposed framework outlines a way to explore the complex design space of robot-assisted gait, promoting the transfer of the next generation of intent-driven controllers from the lab to real-life scenarios.
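The shared-autonomy loop described in the Methods can be sketched as follows. This is a minimal illustration, not the authors' implementation: the policy functions, the linear maps inside them, and the shape of the intent vector are all hypothetical stand-ins for the learnt DRL policies; the key idea shown is that the simulated user emits both body-joint actions and a continuous intent signal, and the prosthesis policy conditions its ankle command on that signal.

```python
# Hypothetical sketch of a shared-autonomy control step. All names and the
# linear maps below are illustrative placeholders for the learnt policies.
from dataclasses import dataclass
from typing import List


@dataclass
class UserAction:
    joint_torques: List[float]  # torques for the intact biological joints
    intent: List[float]         # continuous, data-driven intent representation


def user_policy(observation: List[float]) -> UserAction:
    # Stand-in for the virtual user's walking policy: a fixed linear map here.
    torques = [0.5 * o for o in observation]
    # A 1-D intent cue (e.g. a heading/phase feature) derived from the state.
    intent = [sum(observation) / max(len(observation), 1)]
    return UserAction(joint_torques=torques, intent=intent)


def prosthesis_policy(observation: List[float], intent: List[float]) -> float:
    # Intent-driven device controller: the ankle torque depends on both the
    # device-local observation and the shared intent signal.
    return 0.1 * sum(observation) + 0.9 * sum(intent)


def shared_autonomy_step(observation: List[float]) -> dict:
    # One control tick: the user acts, then the device reads the user's intent.
    user = user_policy(observation)
    ankle_torque = prosthesis_policy(observation, user.intent)
    return {"body_torques": user.joint_torques, "ankle_torque": ankle_torque}


out = shared_autonomy_step([0.2, -0.1, 0.4])
```

Dropping the `intent` argument from `prosthesis_policy` recovers the intent-free counterpart that the Results section compares against.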
12 Feb 2024: Submitted to TechRxiv
14 Feb 2024: Published in TechRxiv