Wearable Motion Capture: Reconstructing and Predicting 3D Human Poses from Wearable Sensors
  • Md Moniruzzaman,
  • Zhaozheng Yin,
  • Md Sanzid Bin Hossain,
  • Zhishan Guo,
  • Hwan Choi
Md Moniruzzaman
Stony Brook University

Corresponding Author: [email protected]

Abstract

Identifying 3D human walking poses in unconstrained environments has many applications, such as enabling prosthetists and clinicians to assess amputees’ walking function outside clinics and helping amputees achieve an optimal walking condition with predictive control. Thus, we propose the wearable motion capture problem of reconstructing and predicting 3D human poses from wearable IMU sensors and wearable cameras. To solve this challenging problem, we introduce a novel Attention-Oriented Recurrent Neural Network (AttRNet) that contains a sensor-wise attention-oriented recurrent encoder, a reconstruction module, and a dynamic temporal attention-oriented recurrent decoder, to reconstruct the current pose and predict future poses. To evaluate our approach, we collected a new WearableMotionCapture dataset using wearable IMUs and wearable video cameras, along with musculoskeletal joint angle ground truth. The proposed AttRNet shows high accuracy on the WearableMotionCapture dataset, and it also outperforms the current best methods on two public pose prediction datasets with IMU-only data: DIP-IMU and TotalCapture. The source code and the new dataset will be publicly available at https://github.com/MoniruzzamanMd/Wearable-Motion-Capture.
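To make the sensor-wise attention idea in the abstract concrete, here is a minimal numpy sketch of attention-weighted sensor fusion feeding a recurrent pose decoder. All dimensions, parameter names, and the recurrence itself are illustrative assumptions, not the authors' AttRNet implementation: attention scores are softmax-normalized across sensors so that more informative sensors contribute more to the fused feature that updates the hidden state.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): 5 wearable sensors, each
# emitting a 9-D feature (e.g., IMU orientation + acceleration),
# a 16-D hidden state, and a 12-D joint-angle pose vector.
S, D, H, P = 5, 9, 16, 12

# Randomly initialized toy parameters standing in for learned weights.
W_score = rng.normal(size=(D,))          # sensor-wise attention scorer
W_in    = rng.normal(size=(D, H)) * 0.1  # input-to-hidden
W_rec   = rng.normal(size=(H, H)) * 0.1  # hidden-to-hidden recurrence
W_pose  = rng.normal(size=(H, P)) * 0.1  # hidden-to-pose readout

def step(h, sensors):
    """One recurrent step with sensor-wise attention.

    sensors: (S, D) array, one feature row per wearable sensor.
    The attention weights re-weight each sensor's contribution
    before the fused feature updates the hidden state."""
    scores = sensors @ W_score           # (S,) relevance score per sensor
    attn = softmax(scores)               # weights sum to 1 across sensors
    fused = attn @ sensors               # (D,) attention-weighted fusion
    h = np.tanh(fused @ W_in + h @ W_rec)
    return h, attn

def decode_pose(h):
    """Map the hidden state to a pose (joint-angle) estimate."""
    return h @ W_pose                    # (P,)

# Run a short sequence of sensor readings and decode the current pose.
h = np.zeros(H)
for t in range(4):
    h, attn = step(h, rng.normal(size=(S, D)))
pose = decode_pose(h)
print(pose.shape, attn.shape)
```

A temporal-attention decoder for future poses would apply the same softmax-weighting idea over past hidden states rather than over sensors; this sketch covers only the sensor-wise half.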
Published in IEEE Journal of Biomedical and Health Informatics, 2023, pages 1-12. DOI: 10.1109/JBHI.2023.3311448