GTIGNet: Global Topology Interaction Graphormer Network for 3D hand pose estimation

IF 6.3 · CAS Tier 1 (Computer Science) · Q1 (Computer Science, Artificial Intelligence) · Neural Networks · Pub Date: 2025-02-04 · DOI: 10.1016/j.neunet.2025.107221
Yanjun Liu, Wanshu Fan, Cong Wang, Shixi Wen, Xin Yang, Qiang Zhang, Xiaopeng Wei, Dongsheng Zhou
{"title":"GTIGNet: Global Topology Interaction Graphormer Network for 3D hand pose estimation","authors":"Yanjun Liu ,&nbsp;Wanshu Fan ,&nbsp;Cong Wang ,&nbsp;Shixi Wen ,&nbsp;Xin Yang ,&nbsp;Qiang Zhang ,&nbsp;Xiaopeng Wei ,&nbsp;Dongsheng Zhou","doi":"10.1016/j.neunet.2025.107221","DOIUrl":null,"url":null,"abstract":"<div><div>Estimating 3D hand poses from monocular RGB images presents a series of challenges, including complex hand structures, self-occlusions, and depth ambiguities. Existing methods often fall short of capturing the long-distance dependencies of skeletal and non-skeletal connections for hand joints. To address these limitations, we introduce the Global Topology Interaction Graphormer Network (GTIGNet), a novel deep learning architecture designed to improve 3D hand pose estimation. Our model incorporates a Context-Aware Attention Block (CAAB) within the 2D pose estimator to enhance the extraction of multi-scale features, yielding more accurate 2D joint heatmaps to support the task that followed. Additionally, we introduce a High-Order Graphormer that explicitly and implicitly models the topological structure of hand joints, thereby enhancing feature interaction. Ablation studies confirm the effectiveness of our approach, and experimental results on four challenging datasets, Rendered Hand Dataset (RHD), Stereo Hand Pose Benchmark (STB), First-Person Hand Action Benchmark (FPHA), and FreiHAND Dataset, indicate that GTIGNet achieves state-of-the-art performance in 3D hand pose estimation. Notably, our model achieves an impressive Mean Per Joint Position Error (MPJPE) of 9.98 mm on RHD, 6.12 mm on STB, 11.15 mm on FPHA and 10.97 mm on FreiHAND.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107221"},"PeriodicalIF":6.3000,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608025001005","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Estimating 3D hand poses from monocular RGB images presents a series of challenges, including complex hand structures, self-occlusions, and depth ambiguities. Existing methods often fall short of capturing the long-range dependencies of skeletal and non-skeletal connections between hand joints. To address these limitations, we introduce the Global Topology Interaction Graphormer Network (GTIGNet), a novel deep learning architecture designed to improve 3D hand pose estimation. Our model incorporates a Context-Aware Attention Block (CAAB) within the 2D pose estimator to enhance the extraction of multi-scale features, yielding more accurate 2D joint heatmaps to support the subsequent 3D estimation. Additionally, we introduce a High-Order Graphormer that explicitly and implicitly models the topological structure of hand joints, thereby enhancing feature interaction. Ablation studies confirm the effectiveness of our approach, and experimental results on four challenging datasets, the Rendered Hand Dataset (RHD), the Stereo Hand Pose Benchmark (STB), the First-Person Hand Action Benchmark (FPHA), and the FreiHAND Dataset, indicate that GTIGNet achieves state-of-the-art performance in 3D hand pose estimation. Notably, our model achieves a Mean Per Joint Position Error (MPJPE) of 9.98 mm on RHD, 6.12 mm on STB, 11.15 mm on FPHA, and 10.97 mm on FreiHAND.
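As an illustrative aside (not from the paper), the Python sketch below shows how the reported MPJPE metric is conventionally computed and how a hand's skeletal topology, of the kind the High-Order Graphormer models, can be encoded as an adjacency matrix. The 21-joint ordering and bone list follow a common convention used by hand pose datasets such as FreiHAND; they are assumptions for illustration, not GTIGNet's exact definitions.

import numpy as np

# Assumed convention: 21 joints, wrist = 0, then four joints per finger.
# This is a common hand-skeleton layout, not necessarily the one used by GTIGNet.
HAND_BONES = [
    (0, 1), (1, 2), (2, 3), (3, 4),         # thumb
    (0, 5), (5, 6), (6, 7), (7, 8),         # index
    (0, 9), (9, 10), (10, 11), (11, 12),    # middle
    (0, 13), (13, 14), (14, 15), (15, 16),  # ring
    (0, 17), (17, 18), (18, 19), (19, 20),  # pinky
]

def skeletal_adjacency(num_joints: int = 21) -> np.ndarray:
    # Symmetric 1-hop adjacency matrix of the hand skeleton, with self-loops.
    adj = np.eye(num_joints)
    for i, j in HAND_BONES:
        adj[i, j] = adj[j, i] = 1.0
    return adj

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    # Mean Per Joint Position Error: average Euclidean distance (e.g. in mm)
    # between predicted and ground-truth 3D joints, each of shape (num_joints, 3).
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

Higher-order (k-hop) joint relations, as suggested by the name "High-Order Graphormer", could in principle be derived from powers of this adjacency matrix, while non-skeletal dependencies (for example, between fingertips) would require additional edges or attention weights; both remain assumptions here rather than details taken from the paper.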
Source journal: Neural Networks (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
Journal description: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.
Latest articles in this journal:
Beyond local aggregation: Global graph contrastive learning for multi-view fusion. SD2-ReID: A semantic-stylistic decoupled distillation framework for robust multi-modal object re-identification. TransUTD: Underwater cross-domain collaborative spatial-temporal transformer detector. Adversarial discriminant attack on text-to-image diffusion models. Enhancing out-of-distribution detection with bilateral distribution score.