Advancing skeleton-based human behavior recognition: multi-stream fusion spatiotemporal graph convolutional networks

IF 5.0 | Tier 2 (Computer Science) | JCR Q1 (Computer Science, Artificial Intelligence) | Complex & Intelligent Systems | Pub Date: 2024-12-28 | DOI: 10.1007/s40747-024-01743-2
Fenglin Liu, Chenyu Wang, Zhiqiang Tian, Shaoyi Du, Wei Zeng
{"title":"推进基于骨骼的人类行为识别:多流融合时空图卷积网络","authors":"Fenglin Liu, Chenyu Wang, Zhiqiang Tian, Shaoyi Du, Wei Zeng","doi":"10.1007/s40747-024-01743-2","DOIUrl":null,"url":null,"abstract":"<p>In the realm of daily human interactions, a rich tapestry of behaviors and actions is observed, encompassing a wealth of informative cues. In the era of burgeoning big data, extensive repositories of images and videos have risen to prominence as the primary conduits for disseminating information. Grasping the intricacies of human behaviors depicted within these multimedia contexts has evolved into a pivotal quandary within the domain of computer vision. The technology of behavior recognition finds its practical application across domains such as human-computer interaction, intelligent surveillance, and anomaly detection, exhibiting a robust blend of pragmatic utility and scholarly significance. The present study introduces an innovative human body behavior recognition framework anchored in skeleton sequences and multi-stream fused spatiotemporal graph convolutional networks. Developed upon the foundation of graph convolutional networks, this method encompasses three pivotal refinements tailored to ameliorate extant challenges. First and foremost, in response to the complex task of capturing distant interdependencies among nodes within graph convolutional networks, we incorporate a spatial attention module. This module adeptly encapsulates long-term node interdependencies via precision-laden positional information, thus engendering interconnections that span diverse temporal and spatial contexts. Subsequently, to elevate the discernment of channel information within the network and to optimize the allocation of attention across distinct channels, we introduce a channel attention mechanism. This augmentation fortifies the discernment of motion-related features. Lastly, confronting the lacuna of information gaps prevalent within single-stream data, we deploy a multi-stream fusion methodology to fortify model outputs, ultimately fostering more precise prognostications concerning action classifications. Empirical results bear testament to the efficacy of the proposed multi-stream fused spatiotemporal graph convolutional network paradigm for skeleton-centric behavior recognition, evincing a pinnacle recognition accuracy of 96.0% on the expansive NTU-RGB+D skeleton dataset, alongside a zenithal accuracy of 37.3% on the Kinetics-Skeleton dataset—emanating from RGB data and furthered through pose estimation.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"54 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2024-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Advancing skeleton-based human behavior recognition: multi-stream fusion spatiotemporal graph convolutional networks\",\"authors\":\"Fenglin Liu, Chenyu Wang, Zhiqiang Tian, Shaoyi Du, Wei Zeng\",\"doi\":\"10.1007/s40747-024-01743-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>In the realm of daily human interactions, a rich tapestry of behaviors and actions is observed, encompassing a wealth of informative cues. In the era of burgeoning big data, extensive repositories of images and videos have risen to prominence as the primary conduits for disseminating information. 
Grasping the intricacies of human behaviors depicted within these multimedia contexts has evolved into a pivotal quandary within the domain of computer vision. The technology of behavior recognition finds its practical application across domains such as human-computer interaction, intelligent surveillance, and anomaly detection, exhibiting a robust blend of pragmatic utility and scholarly significance. The present study introduces an innovative human body behavior recognition framework anchored in skeleton sequences and multi-stream fused spatiotemporal graph convolutional networks. Developed upon the foundation of graph convolutional networks, this method encompasses three pivotal refinements tailored to ameliorate extant challenges. First and foremost, in response to the complex task of capturing distant interdependencies among nodes within graph convolutional networks, we incorporate a spatial attention module. This module adeptly encapsulates long-term node interdependencies via precision-laden positional information, thus engendering interconnections that span diverse temporal and spatial contexts. Subsequently, to elevate the discernment of channel information within the network and to optimize the allocation of attention across distinct channels, we introduce a channel attention mechanism. This augmentation fortifies the discernment of motion-related features. Lastly, confronting the lacuna of information gaps prevalent within single-stream data, we deploy a multi-stream fusion methodology to fortify model outputs, ultimately fostering more precise prognostications concerning action classifications. Empirical results bear testament to the efficacy of the proposed multi-stream fused spatiotemporal graph convolutional network paradigm for skeleton-centric behavior recognition, evincing a pinnacle recognition accuracy of 96.0% on the expansive NTU-RGB+D skeleton dataset, alongside a zenithal accuracy of 37.3% on the Kinetics-Skeleton dataset—emanating from RGB data and furthered through pose estimation.</p>\",\"PeriodicalId\":10524,\"journal\":{\"name\":\"Complex & Intelligent Systems\",\"volume\":\"54 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2024-12-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Complex & Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s40747-024-01743-2\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-024-01743-2","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Everyday human interactions exhibit a wide variety of behaviors and actions that carry rich informative cues. In the era of big data, large repositories of images and videos have become the primary channels for disseminating information, and understanding the human behaviors depicted in this multimedia content has become a central problem in computer vision. Behavior recognition has practical applications in human-computer interaction, intelligent surveillance, and anomaly detection, combining practical utility with scholarly significance. This study introduces a human behavior recognition framework based on skeleton sequences and multi-stream fused spatiotemporal graph convolutional networks. Built on graph convolutional networks, the method makes three key refinements that address existing limitations. First, because plain graph convolutions struggle to capture long-range dependencies between nodes, we add a spatial attention module that encodes precise positional information and establishes connections across different temporal and spatial contexts. Second, to better discriminate channel information and allocate attention across channels, we introduce a channel attention mechanism that strengthens motion-related features. Third, to compensate for the information gaps inherent in single-stream data, we apply a multi-stream fusion strategy that combines model outputs and yields more accurate action classification. Experiments confirm the effectiveness of the proposed multi-stream fused spatiotemporal graph convolutional network for skeleton-based behavior recognition, reaching a best recognition accuracy of 96.0% on the NTU-RGB+D skeleton dataset and 37.3% on the Kinetics-Skeleton dataset, which is derived from RGB video through pose estimation.
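The abstract gives no implementation details, but the two attention refinements it describes map onto standard building blocks. The sketch below is a minimal PyTorch illustration, not the authors' code: a self-attention over skeleton joints so that distant nodes can interact, and a squeeze-and-excitation-style channel gate, both wrapped around a basic spatiotemporal graph-convolution block. The tensor layout (N, C, T, V), the adjacency matrix A, and all module names are assumptions made for illustration.

```python
# Minimal sketch (assumed, not the authors' released code) of a spatiotemporal
# graph-convolution block with spatial and channel attention, in PyTorch.
# Tensor layout: x has shape (N, C, T, V) = (batch, channels, frames, joints);
# A is a normalized V x V adjacency matrix of the skeleton graph.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Self-attention over joints so that distant nodes can exchange information."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)

    def forward(self, x):                                   # x: (N, C, T, V)
        q = self.query(x).mean(dim=2)                        # (N, C/r, V), averaged over time
        k = self.key(x).mean(dim=2)                          # (N, C/r, V)
        att = torch.softmax(torch.einsum("ncv,ncw->nvw", q, k), dim=-1)  # (N, V, V)
        return x + torch.einsum("nctv,nvw->nctw", x, att)    # residual joint mixing


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style gate that reweights feature channels."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                                    # x: (N, C, T, V)
        w = self.fc(x.mean(dim=(2, 3)))                      # global pool -> (N, C) gate
        return x * w.unsqueeze(-1).unsqueeze(-1)             # channel-wise reweighting


class STGCBlock(nn.Module):
    """Spatial graph convolution + temporal convolution, wrapped with attention."""

    def __init__(self, in_channels: int, out_channels: int, A: torch.Tensor):
        super().__init__()
        self.register_buffer("A", A)                         # (V, V) adjacency
        self.theta = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.tcn = nn.Conv2d(out_channels, out_channels,
                             kernel_size=(9, 1), padding=(4, 0))
        self.spatial_att = SpatialAttention(out_channels)
        self.channel_att = ChannelAttention(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                                    # x: (N, C, T, V)
        x = self.theta(x)                                     # per-joint feature transform
        x = torch.einsum("nctv,vw->nctw", x, self.A)          # aggregate over graph neighbors
        x = self.spatial_att(x)                               # long-range joint dependencies
        x = self.tcn(x)                                       # temporal convolution per joint
        return self.relu(self.channel_att(x))                 # emphasize motion-related channels
```

The spatial attention here uses a non-local formulation averaged over time; the abstract does not specify the exact positional encoding the authors use, so this should be read as one plausible realization.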
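The multi-stream fusion step is likewise described only at a high level. A common realization in skeleton-based recognition, assumed here for illustration, is late score-level fusion: each stream (e.g., joint and bone coordinates) is classified by its own network, and the per-class scores are combined with fixed weights. The function, stream names, and weights below are hypothetical.

```python
# Hedged sketch of late, score-level multi-stream fusion. The abstract does not
# specify the fusion rule; a weighted sum of per-stream softmax scores, as is
# common for joint/bone/motion streams in skeleton-based recognition, is assumed.
import torch


def fuse_streams(stream_logits: dict, weights: dict) -> torch.Tensor:
    """Combine per-stream class scores into a single prediction.

    stream_logits: e.g. {"joint": (N, num_classes) tensor, "bone": ...}
    weights:       relative importance of each stream (hypothetical values).
    """
    fused = None
    for name, logits in stream_logits.items():
        scores = torch.softmax(logits, dim=-1) * weights[name]
        fused = scores if fused is None else fused + scores
    return fused.argmax(dim=-1)                 # predicted action class per sample


# Example usage with two hypothetical streams and weights:
# pred = fuse_streams(
#     {"joint": joint_model(x_joint), "bone": bone_model(x_bone)},
#     {"joint": 0.6, "bone": 0.4},
# )
```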

Source journal: Complex & Intelligent Systems (Computer Science, Artificial Intelligence)
CiteScore: 9.60
Self-citation rate: 10.30%
Articles published: 297
About the journal: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.