TY - JOUR
T1 - MSA-GCN: Exploiting Multi-Scale Temporal Dynamics With Adaptive Graph Convolution for Skeleton-Based Action Recognition
AU - Comivi Alowonou, Kowovi
AU - Han, Ji Hyeong
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2024
Y1 - 2024
N2 - Graph convolutional networks (GCNs) have been widely used and have achieved remarkable results in skeleton-based action recognition. We note that existing GCN-based approaches rely on local context information of the skeleton joints to construct adaptive graphs for feature aggregation, limiting their ability to understand actions that involve coordinated movements across various parts of the body. An adaptive graph built upon the global context information of the joints can help move beyond this limitation. Therefore, in this paper, we propose a novel approach to skeleton-based action recognition named Multi-stage Adaptive Graph Convolution Network (MSA-GCN). It consists of two modules: Multi-stage Adaptive Graph Convolution (MSA-GC) and Temporal Multi-Scale Transformer (TMST). These two modules work together to effectively capture complex spatial and temporal patterns within skeleton data. Specifically, MSA-GC explores both local and global context information of the joints across the entire sequence to construct the adaptive graph, facilitating the understanding of complex and nuanced relationships between joints. Meanwhile, the TMST module integrates a Gated Multi-stage Temporal Convolution (GMSTC) with a Temporal Multi-Head Self-Attention (TMHSA) to capture global temporal features and accommodate both long-term and short-term dependencies within action sequences. Through extensive experiments on multiple benchmark datasets, including NTU RGB+D 60, NTU RGB+D 120, and Northwestern-UCLA, MSA-GCN achieves state-of-the-art performance, demonstrating its effectiveness in skeleton-based action recognition.
AB - Graph convolutional networks (GCNs) have been widely used and have achieved remarkable results in skeleton-based action recognition. We note that existing GCN-based approaches rely on local context information of the skeleton joints to construct adaptive graphs for feature aggregation, limiting their ability to understand actions that involve coordinated movements across various parts of the body. An adaptive graph built upon the global context information of the joints can help move beyond this limitation. Therefore, in this paper, we propose a novel approach to skeleton-based action recognition named Multi-stage Adaptive Graph Convolution Network (MSA-GCN). It consists of two modules: Multi-stage Adaptive Graph Convolution (MSA-GC) and Temporal Multi-Scale Transformer (TMST). These two modules work together to effectively capture complex spatial and temporal patterns within skeleton data. Specifically, MSA-GC explores both local and global context information of the joints across the entire sequence to construct the adaptive graph, facilitating the understanding of complex and nuanced relationships between joints. Meanwhile, the TMST module integrates a Gated Multi-stage Temporal Convolution (GMSTC) with a Temporal Multi-Head Self-Attention (TMHSA) to capture global temporal features and accommodate both long-term and short-term dependencies within action sequences. Through extensive experiments on multiple benchmark datasets, including NTU RGB+D 60, NTU RGB+D 120, and Northwestern-UCLA, MSA-GCN achieves state-of-the-art performance, demonstrating its effectiveness in skeleton-based action recognition.
KW - GCN
KW - Skeleton-based action recognition
KW - dynamic graph topology
KW - multi-scale temporal processing
UR - https://www.scopus.com/pages/publications/85212818204
U2 - 10.1109/ACCESS.2024.3520172
DO - 10.1109/ACCESS.2024.3520172
M3 - Article
AN - SCOPUS:85212818204
SN - 2169-3536
VL - 12
SP - 193552
EP - 193563
JO - IEEE Access
JF - IEEE Access
ER -