Deep learning for enhanced prosthetic control: Real-time motor intent decoding for simultaneous control of artificial limbs
  • Jan Zbinden,
  • Julia Molin,
  • Max Ortiz Catalan
Jan Zbinden
Chalmers University of Technology

Corresponding Author:[email protected]

Abstract

The development of advanced prosthetic devices that can be seamlessly used during an individual’s daily life remains a significant challenge in rehabilitation engineering. This study compares the performance of deep learning architectures with that of shallow networks in decoding motor intent for prosthetic control from electromyography (EMG) signals. Four neural network architectures were evaluated in real-time, human-in-the-loop experiments with able-bodied participants and an individual with an amputation: a feedforward neural network with one hidden layer, a feedforward neural network with multiple hidden layers, a temporal convolutional network, and a convolutional neural network with squeeze-and-excitation operations. Our results demonstrate that deep learning architectures outperform shallow networks in decoding motor intent, with representation learning effectively extracting underlying motor control information from EMG signals. Moreover, the performance improvements observed with deep neural networks were consistent across able-bodied and amputee participants. By employing deep neural networks instead of traditional machine-learning approaches, more reliable and precise prosthetic control can be achieved, which has the potential to significantly enhance prosthetic functionality and improve quality of life for individuals with amputations.
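For readers unfamiliar with the squeeze-and-excitation operation mentioned in the abstract, the sketch below illustrates the idea in NumPy: channel activations are "squeezed" by global pooling, passed through a small bottleneck, and used to gate each channel. All layer sizes, weights, and the channel/time dimensions here are hypothetical illustrations, not parameters from the study.

```python
import numpy as np

def squeeze_and_excitation(x, w1, w2):
    """Recalibrate per-channel feature maps with a squeeze-and-excitation gate.

    x  : (channels, time) array, e.g. features derived from EMG windows
    w1 : (channels // r, channels) bottleneck ("squeeze") weights
    w2 : (channels, channels // r) expansion ("excitation") weights
    """
    # Squeeze: global average pooling over time -> one summary value per channel
    z = x.mean(axis=1)                        # (channels,)
    # Excitation: bottleneck MLP with ReLU, then sigmoid to get gates in (0, 1)
    s = np.maximum(w1 @ z, 0.0)               # (channels // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))    # (channels,)
    # Scale each channel by its learned importance gate
    return x * gate[:, None]

# Example: 8 hypothetical EMG-derived channels, 200 time samples,
# random (untrained) weights with a reduction ratio of 4.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 200))
w1 = rng.standard_normal((2, 8)) * 0.1
w2 = rng.standard_normal((8, 2)) * 0.1
y = squeeze_and_excitation(x, w1, w2)
print(y.shape)  # (8, 200)
```

In a trained network the gates learn to emphasize informative EMG channels and suppress noisy ones, which is one plausible reason such recalibration can help motor-intent decoding.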