Improved Sensor Fusion and Deep Learning of 3D Human Pose from Sparse Magnetic Inertial Measurement Units

  • Human pose expressed as 3D joint angles is needed for applications such as activity recognition, musculoskeletal health, sports biomechanics and ergonomics. Magnetic-inertial measurement units (MIMUs) based on microelectromechanical systems (MEMS) can estimate 3D orientation. Owing to their small size, MIMUs can be attached to the body as wearable sensors to obtain the full 3D human pose; such a system is termed inertial motion capture (i-Mocap). However, MIMUs suffer from sensor errors and disturbances, so the orientation estimated from an individual MIMU can be erroneous. Accurate sensor calibration is therefore essential, and the alignment of each sensor with its body segment must also be known precisely, which is called sensor-to-segment calibration. Sensor fusion is employed to address the disturbances and noise in MIMUs. Many state-of-the-art inertial motion capture approaches ignore the magnetometer and use only IMUs in order to avoid errors arising from an inhomogeneous magnetic field. These algorithms rely on kinematic constraints and assumptions about the joints between adjacent instrumented body segments. Full-body coverage requires 13-17 such units, which is quite obtrusive, and setting up and calibrating that many wearable sensors also takes time. This thesis focuses on 3D human pose estimation from a reduced number of MIMUs and deals with this problem systematically. First, we propose an accurate simultaneous calibration of multiple MIMUs that also learns the uncertainty of the individual sensors. We then describe a novel sensor fusion algorithm for robust orientation estimation from a MIMU and for updating the sensor calibration online. Residual errors in both sensor calibration and fusion can result in drift in the joint angles; we therefore present an anatomical (sensor-to-segment) calibration in which an orientation offset correction term is updated and used for online correction of residual drift in individual joint angles. Subsequently, we demonstrate that 3D human joint angle constraints can be learned with a data-driven approach in a high-dimensional latent space. Owing to these temporal and joint angle constraints, it is possible to use only a reduced set of sensors (as opposed to one sensor per segment) and still obtain the 3D human pose. However, the spatial and temporal priors learned from data are limited by the finite set of movement patterns in most datasets, which introduces uncertainty when estimating 3D human pose from sparse MIMU sensors. We propose a magnetometer-robust orientation parameterization and a data-driven deep learning framework to predict 3D human pose with an associated uncertainty from sparse MIMUs. The model is evaluated on real MIMU data, and we show that the uncertainty predicted by the trained model is well correlated with the actual error and ambiguity.
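To make the sensor-fusion idea in the abstract concrete, the following is a minimal, illustrative sketch of a complementary-filter style orientation update for a single MIMU: the rate gyro propagates a quaternion estimate and the accelerometer supplies a gravity-based tilt correction. This is a generic textbook scheme under our own naming and sign conventions, not the thesis's fusion algorithm, which additionally handles magnetometer disturbances and updates the sensor calibration online.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_to_rotmat(q):
    """Rotation matrix (sensor -> world) for a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def fuse_step(q, gyro, acc, dt, gain=0.02):
    """One complementary-filter update: gyro propagation + accelerometer tilt correction."""
    # 1) Propagate the orientation with the calibrated rate-gyro reading (rad/s).
    q = q + 0.5 * dt * quat_mul(q, np.concatenate(([0.0], gyro)))
    q /= np.linalg.norm(q)

    # 2) Proportional correction: rotate the estimate so that the gravity
    #    direction it predicts agrees with the (normalised) accelerometer.
    acc = acc / np.linalg.norm(acc)
    g_pred = quat_to_rotmat(q).T @ np.array([0.0, 0.0, 1.0])  # expected "up" in the sensor frame
    err = np.cross(acc, g_pred)                                # small-angle error axis
    q = quat_mul(q, np.concatenate(([1.0], 0.5 * gain * err)))
    return q / np.linalg.norm(q)
```

A heading (yaw) correction from the magnetometer would follow the same pattern as step 2, and that is exactly where magnetic-field inhomogeneity enters, which is why the thesis treats the magnetometer with particular care.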

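The abstract also mentions predicting 3D pose together with an associated uncertainty from sparse MIMUs. One common way to obtain such an uncertainty is a heteroscedastic regression head trained with a Gaussian negative log-likelihood, sketched below. The architecture, the per-joint 6D rotation output, the input sizes, and the name `SparsePoseNet` are illustrative assumptions, not the network or parameterization described in the thesis.

```python
import torch
import torch.nn as nn

class SparsePoseNet(nn.Module):
    """Illustrative sketch: map a window of features from a few MIMUs to
    per-joint rotation parameters plus a log-variance per output dimension."""
    def __init__(self, n_in=6 * 9, n_joints=15, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_in, hidden, num_layers=2, batch_first=True)
        self.mean_head = nn.Linear(hidden, n_joints * 6)    # e.g. a 6D rotation per joint
        self.logvar_head = nn.Linear(hidden, n_joints * 6)  # predicted log-variance

    def forward(self, x):                  # x: (batch, time, n_in)
        h, _ = self.rnn(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    """Heteroscedastic Gaussian NLL: a large predicted variance down-weights the
    squared error of that output but is itself penalised by the log-variance term."""
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

# Minimal usage: one training step on a dummy batch (all shapes are assumptions).
model = SparsePoseNet()
x = torch.randn(8, 60, 6 * 9)            # 8 sequences, 60 frames, 6 sensors x 9 features
y = torch.randn(8, 60, 15 * 6)           # ground-truth rotation parameters
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
loss.backward()
```

At inference time, exp(logvar) gives a per-dimension variance that can be compared against the actual joint-angle error, which is the kind of correlation between predicted uncertainty and error that the abstract refers to.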
Metadata
Author: Hammad Tanveer Butt
URN: urn:nbn:de:hbz:386-kluedo-73037
DOI: https://doi.org/10.26204/KLUEDO/7303
Advisor: Didier Stricker
Document Type: Doctoral Thesis
Language of publication: English
Date of Publication (online): 2023/06/06
Year of first Publication: 2023
Publishing Institution: Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau
Granting Institution: Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau
Acceptance Date of the Thesis: 2022/11/04
Date of the Publication (Server): 2023/06/09
Tag: Accelerometer; Human Pose; IMU; Magnetometer; Microelectromechanical Systems; Rate Gyro; Sensor Fusion; Sensors; Uncertainty Estimation
GND Keyword: Deep Learning; Data Fusion; Target Tracking; Sensor; Uncertainty
Page Number: 171
Faculties / Organisational entities: Kaiserslautern - Department of Computer Science
CCS-Classification (computer science): H. Information Systems
I. Computing Methodologies
DDC-Classification: 0 General works, Computer Science, Information Science / 004 Computer Science
6 Technology, Medicine, Applied Sciences / 600 Technology
Licence (German): Creative Commons 4.0 - Attribution, NonCommercial, NoDerivatives (CC BY-NC-ND 4.0)