SHoTGCN: Spatial high-order temporal GCN for skeleton-based action recognition
Qiyu Liu, Ying Wu, Bicheng Li, Yuxin Ma, Hanling Li, Yong Yu
Neurocomputing, Volume 632, Article 129697. Published 2025-02-25. DOI: 10.1016/j.neucom.2025.129697
Cited by: 0
Abstract
Action recognition algorithms that leverage human skeleton motion data are highly attractive due to their robustness and high information density. Currently, the majority of algorithms in this domain employ graph convolutional neural networks (GCNs). However, these algorithms often neglect the extraction of high-order features. To address this limitation, we propose a novel approach called the Spatial High-Order Temporal Graph Convolution Network (SHoTGCN), designed to evaluate the impact of high-order features on human action recognition. Our method begins by deriving high-order features from human skeleton time series data through temporal interactions. Utilizing these high-order features significantly improves the algorithm’s ability to recognize human actions. Moreover, we found that the traditional feature extraction method, which employs Depthwise Convolution (DWConv) with a single 2D convolution, is suboptimal compared to a multibranch structure for feature extraction. To address this, we introduce a structure re-parameterization technique with DWConv, termed Rep-tDWConv, to enhance feature extraction. By integrating the Exponential Moving Average (EMA) model during the model fusion process, our proposed model achieves state-of-the-art (SOTA) performance, with accuracies of 90.4% and 92.0% on the XSub and XSet splits of the NTU RGB+D 120 dataset, respectively.
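The structural re-parameterization idea behind Rep-tDWConv can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows the generic principle the abstract invokes: at training time, a depthwise temporal convolution uses a multibranch structure (here, two parallel kernels of assumed sizes 5 and 3), and at inference the branches are folded into a single equivalent kernel, since depthwise convolution is linear in its weights. All variable names and kernel sizes below are assumptions for illustration.

```python
import numpy as np

def dwconv1d(x, kernels, pad):
    """Depthwise temporal correlation: x is (C, T), kernels is (C, k).
    Each channel is convolved only with its own kernel ('same'-style padding)."""
    C, T = x.shape
    k = kernels.shape[1]
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros((C, T + 2 * pad - k + 1))
    for c in range(C):
        for t in range(out.shape[1]):
            out[c, t] = np.dot(xp[c, t:t + k], kernels[c])
    return out

rng = np.random.default_rng(0)
C, T = 4, 16                 # channels, temporal frames (toy sizes)
big_k, small_k = 5, 3        # assumed branch kernel sizes
w_big = rng.standard_normal((C, big_k))
w_small = rng.standard_normal((C, small_k))
x = rng.standard_normal((C, T))

# Training-time multibranch form: two parallel depthwise branches, summed.
y_train = dwconv1d(x, w_big, big_k // 2) + dwconv1d(x, w_small, small_k // 2)

# Inference-time form: zero-pad the small kernel to the large size and
# add it into the large kernel, yielding one merged depthwise conv.
offset = (big_k - small_k) // 2
w_merged = w_big.copy()
w_merged[:, offset:offset + small_k] += w_small
y_infer = dwconv1d(x, w_merged, big_k // 2)

print(np.allclose(y_train, y_infer))  # True: the merged kernel is exactly equivalent
```

The merge is exact (not an approximation), which is why re-parameterized models keep the multibranch training accuracy while paying the inference cost of a single convolution.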
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. The journal covers neurocomputing theory, practice, and applications.