DeepBBWAE-Net: A CNN-RNN Based Deep SuperLearner For Estimating Lower
Extremity Sagittal Plane Joint Kinematics Using Shoe-Mounted IMU Sensors
In Daily Living
Abstract
Measurement of human body movement is an essential step in biomechanical
analysis. The current standard for human motion capture uses
infrared cameras to track reflective markers placed on the subject.
While these systems can accurately track joint kinematics, the analyses
are spatially limited to the lab environment. Though Inertial
Measurement Units (IMUs) can eliminate the spatial limitations of
motion capture systems, IMU-based systems are impractical for use in
daily living because they typically require many sensors, one per body segment.
To meet the need for practical and accurate estimation of joint
kinematics, this study uses a reduced number of IMU sensors and
employs machine learning algorithms to map sensor data to joint angles.
The developed algorithm estimates hip, knee, and ankle angles in the
sagittal plane using two shoe-mounted IMU sensors under practical
walking conditions: treadmill, level overground, stair, and slope
walking. Specifically, we propose five deep learning networks that
use combinations of Convolutional Neural Networks (CNNs) and Gated
Recurrent Unit (GRU)-based Recurrent Neural Networks (RNNs) as base
learners for our framework. Using these five base models, we
propose a novel framework, DeepBBWAE-Net, that applies ensemble
techniques such as bagging, boosting, and weighted averaging to improve
kinematic predictions. DeepBBWAE-Net predicts the three joint angles
under all walking conditions with a Root Mean Square Error (RMSE)
6.93-29.0% lower than that of the individual base models.
This is the first study to use a reduced number of IMU sensors to
estimate joint kinematics across multiple walking environments.
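For concreteness, a minimal sketch of one possible CNN-GRU base learner is given below. It assumes PyTorch, a fixed-length window of shoe-mounted IMU channels as input, and per-time-step outputs for the three sagittal-plane joint angles; the class name, layer sizes, and parameter names (e.g., CnnGruBaseLearner, n_channels) are illustrative assumptions, not the exact architecture reported in the paper.

```python
# Hypothetical CNN-GRU base learner sketch (layer sizes and names are assumptions):
# a 1-D CNN extracts features from windows of shoe-mounted IMU channels, a GRU
# models their temporal dependence, and a linear head outputs the three
# sagittal-plane joint angles (hip, knee, ankle) at each time step.
import torch
import torch.nn as nn

class CnnGruBaseLearner(nn.Module):
    def __init__(self, n_channels=12, hidden_size=64, n_joints=3):
        super().__init__()
        # 1-D convolutions over time; input shape (batch, channels, time)
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # GRU over the CNN feature sequence; batch_first expects (batch, time, features)
        self.gru = nn.GRU(input_size=64, hidden_size=hidden_size, batch_first=True)
        # One angle estimate per joint at each time step
        self.head = nn.Linear(hidden_size, n_joints)

    def forward(self, x):               # x: (batch, n_channels, window_len)
        feats = self.cnn(x)             # (batch, 64, window_len)
        feats = feats.permute(0, 2, 1)  # (batch, window_len, 64)
        out, _ = self.gru(feats)        # (batch, window_len, hidden_size)
        return self.head(out)           # (batch, window_len, n_joints)
```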
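Likewise, the weighted-averaging step and the RMSE metric mentioned above could be sketched as follows; the weighting scheme and helper names are assumptions, and the bagging and boosting components of DeepBBWAE-Net are not shown.

```python
# Hypothetical illustration of weighted averaging over base-learner predictions and
# the RMSE metric used to compare estimated and reference joint angles.
import torch

def weighted_average(predictions, weights):
    """Combine base-learner predictions (list of equally shaped tensors) with normalized weights."""
    w = torch.tensor(weights, dtype=torch.float32)
    w = w / w.sum()
    stacked = torch.stack(predictions)            # (n_models, batch, time, n_joints)
    return (w.view(-1, 1, 1, 1) * stacked).sum(dim=0)

def rmse(pred, target):
    """Root Mean Square Error between estimated and reference joint angles."""
    return torch.sqrt(torch.mean((pred - target) ** 2))
```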