TY - GEN
T1 - Sensor Data Augmentation from Skeleton Pose Sequences for Improving Human Activity Recognition
AU - Zolfaghari, Parham
AU - Rey, Vitor Fortes
AU - Ray, Lala
AU - Kim, Hyun
AU - Suh, Sungho
AU - Lukowicz, Paul
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - The proliferation of deep learning has significantly advanced various fields, yet Human Activity Recognition (HAR) has not fully capitalized on these developments, primarily due to the scarcity of labeled datasets. Despite the integration of advanced Inertial Measurement Units (IMUs) in ubiquitous wearable devices like smartwatches and fitness trackers, which offer self-labeled activity data from users, the volume of labeled data remains insufficient compared to domains where deep learning has achieved remarkable success. Addressing this gap, in this paper, we propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model that generates sensor data directly from 3D skeleton pose sequences. Our method simultaneously trains the pose-to-sensor network and a human activity classifier, optimizing both data reconstruction and activity recognition. Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset. Experimental results demonstrate the superiority of our framework with significant performance improvements over baseline methods.
AB - The proliferation of deep learning has significantly advanced various fields, yet Human Activity Recognition (HAR) has not fully capitalized on these developments, primarily due to the scarcity of labeled datasets. Despite the integration of advanced Inertial Measurement Units (IMUs) in ubiquitous wearable devices like smartwatches and fitness trackers, which offer self-labeled activity data from users, the volume of labeled data remains insufficient compared to domains where deep learning has achieved remarkable success. Addressing this gap, in this paper, we propose a novel approach to improve wearable sensor-based HAR by introducing a pose-to-sensor network model that generates sensor data directly from 3D skeleton pose sequences. Our method simultaneously trains the pose-to-sensor network and a human activity classifier, optimizing both data reconstruction and activity recognition. Our contributions include the integration of simultaneous training, direct pose-to-sensor generation, and a comprehensive evaluation on the MM-Fit dataset. Experimental results demonstrate the superiority of our framework with significant performance improvements over baseline methods.
KW - data augmentation
KW - human activity recognition
KW - multi-modal learning
KW - pose estimation
UR - https://www.scopus.com/pages/publications/85203814961
U2 - 10.1109/ABC61795.2024.10652200
DO - 10.1109/ABC61795.2024.10652200
M3 - Conference contribution
AN - SCOPUS:85203814961
T3 - 2024 International Conference on Activity and Behavior Computing, ABC 2024
BT - 2024 International Conference on Activity and Behavior Computing, ABC 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 International Conference on Activity and Behavior Computing, ABC 2024
Y2 - 29 May 2024 through 31 May 2024
ER -