TechRxiv

Wearable Motion Capture: Reconstructing and Predicting 3D Human Poses from Wearable Sensors

Preprint posted on 2021-11-05, 04:04, authored by Md Moniruzzaman, Zhaozheng Yin, Md Sanzid Bin Hossain, Zhishan Guo, and Hwan Choi.
Identifying 3D human walking poses in unconstrained environments has many applications, such as enabling prosthetists and clinicians to assess amputees' walking function outside the clinic and helping amputees achieve optimal walking conditions through predictive control. Thus, we propose the wearable motion capture problem of reconstructing and predicting 3D human poses from wearable IMU sensors and wearable cameras. To solve this challenging problem, we introduce a novel Attention-Oriented Recurrent Neural Network (AttRNet) that contains a sensor-wise attention-oriented recurrent encoder, a reconstruction module, and a dynamic temporal attention-oriented recurrent decoder to reconstruct the current pose and predict future poses. To evaluate our approach, we collected a new WearableMotionCapture dataset using wearable IMUs and wearable video cameras, along with musculoskeletal joint-angle ground truth. The proposed AttRNet achieves high accuracy on the WearableMotionCapture dataset, and it also outperforms the current best methods on two public pose-prediction datasets with IMU-only data: DIP-IMU and TotalCapture. The source code and the new dataset will be publicly available at https://github.com/MoniruzzamanMd/Wearable-Motion-Capture.
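
The abstract outlines an encoder-reconstruction-decoder pipeline: sensor-wise attention fuses per-sensor features, a recurrent encoder summarizes the sequence, one head reconstructs the current pose, and an attention-equipped recurrent decoder rolls out future poses. Below is a minimal PyTorch sketch of how such a pipeline could be wired together. The class name `AttRNetSketch`, the choice of GRUs and dot-product attention, and every dimension (`feat_dim`, `hidden_dim`, `pose_dim`, `future_steps`) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AttRNetSketch(nn.Module):
    """Illustrative sketch of the AttRNet pipeline from the abstract.

    All layer choices and dimensions are assumptions for illustration,
    not the authors' implementation.
    """

    def __init__(self, feat_dim=64, hidden_dim=256, pose_dim=30, future_steps=5):
        super().__init__()
        # Sensor-wise attention: one relevance score per sensor per time step.
        self.sensor_attn = nn.Linear(feat_dim, 1)
        # Recurrent encoder over the attention-fused sensor features.
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Reconstruction module: current pose from the last encoder state.
        self.reconstruct = nn.Linear(hidden_dim, pose_dim)
        # Recurrent decoder that rolls out future poses, attending over
        # the encoder states at each step (dynamic temporal attention).
        self.decoder = nn.GRUCell(pose_dim, hidden_dim)
        self.predict = nn.Linear(hidden_dim, pose_dim)
        self.future_steps = future_steps

    def forward(self, x):
        # x: (batch, time, sensors, feat_dim) per-sensor feature sequences.
        w = torch.softmax(self.sensor_attn(x), dim=2)          # (B, T, S, 1)
        fused = (w * x).sum(dim=2)                             # (B, T, F)
        enc_out, h = self.encoder(fused)                       # (B, T, H)
        current_pose = self.reconstruct(enc_out[:, -1])        # (B, pose_dim)

        h_t, pose_t, futures = h.squeeze(0), current_pose, []
        for _ in range(self.future_steps):
            h_t = self.decoder(pose_t, h_t)                    # (B, H)
            # Dot-product temporal attention over encoder states.
            scores = torch.softmax((enc_out * h_t.unsqueeze(1)).sum(-1), dim=1)
            context = (scores.unsqueeze(-1) * enc_out).sum(1)  # (B, H)
            pose_t = self.predict(h_t + context)               # (B, pose_dim)
            futures.append(pose_t)
        return current_pose, torch.stack(futures, dim=1)
```

For example, `AttRNetSketch()(torch.randn(2, 40, 8, 64))` returns a (2, 30) reconstructed current pose and a (2, 5, 30) tensor of predicted future poses for a batch of two 40-step sequences from eight sensors.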

History

Email Address of Submitting Author

mmoniruzzama@cs.stonybrook.edu

ORCID of Submitting Author

0000-0003-3217-5094

Submitting Author's Institution

Stony Brook University

Submitting Author's Country

United States of America
