Human pose estimation via inter-view image similarity with adaptive weights

IF 3.4 · JCR Q1 (Computer Science, Hardware & Architecture) · CAS Tier 2 (Engineering & Technology) · Displays · Pub Date: 2025-04-01 · Epub Date: 2025-01-30 · DOI: 10.1016/j.displa.2025.102972
Yang Gao, Shigang Wang, Zhiyuan Zha
{"title":"Human pose estimation via inter-view image similarity with adaptive weights","authors":"Yang Gao,&nbsp;Shigang Wang,&nbsp;Zhiyuan Zha","doi":"10.1016/j.displa.2025.102972","DOIUrl":null,"url":null,"abstract":"<div><div>Human pose estimation has garnered considerable interest in computer vision. However, in real-world scenarios, human joint points often experience occlusion from clothing, body parts, and objects, which can decrease the accuracy of detecting and tracking the joint points. In this paper, we propose a novel inter-view image similarity with adaptive weights (IVIM-AW) approach for human pose estimation, which leverages the consistency and complementarity of multiple views to enhance the beneficial information obtained from other views. First, we design a dynamic adjustment mechanism to optimize the fusion weights within the Siamese network framework, making it more adaptable to the feature similarities of different views. Second, we propose an information consistency measurement strategy for multi-view images using a similarity matrix. Third, we leverage the sparse characteristics of heatmaps to achieve point-to-point matching during the multi-view fusion process. Experimental results demonstrate that the proposed IVIM-AW approach outperforms many popular or state-of-the-art methods on most public occlusion datasets. Notably, in the occlusion-person dataset, the IVIM-AW approach achieves the lowest mean joint estimation error, reducing the Mean Per Joint Position Error (MPJPE) to 9.24 mm.</div></div>","PeriodicalId":50570,"journal":{"name":"Displays","volume":"87 ","pages":"Article 102972"},"PeriodicalIF":3.4000,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Displays","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141938225000095","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/30 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 0

Abstract

Human pose estimation has garnered considerable interest in computer vision. However, in real-world scenarios, human joint points often experience occlusion from clothing, body parts, and objects, which can decrease the accuracy of detecting and tracking the joint points. In this paper, we propose a novel inter-view image similarity with adaptive weights (IVIM-AW) approach for human pose estimation, which leverages the consistency and complementarity of multiple views to enhance the beneficial information obtained from other views. First, we design a dynamic adjustment mechanism to optimize the fusion weights within the Siamese network framework, making it more adaptable to the feature similarities of different views. Second, we propose an information consistency measurement strategy for multi-view images using a similarity matrix. Third, we leverage the sparse characteristics of heatmaps to achieve point-to-point matching during the multi-view fusion process. Experimental results demonstrate that the proposed IVIM-AW approach outperforms many popular or state-of-the-art methods on most public occlusion datasets. Notably, in the occlusion-person dataset, the IVIM-AW approach achieves the lowest mean joint estimation error, reducing the Mean Per Joint Position Error (MPJPE) to 9.24 mm.
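To make the three components above concrete, the following is a minimal NumPy sketch of the general idea, not the paper's implementation: per-view features (e.g. from a shared Siamese-style backbone) are compared through a cosine similarity matrix, each view's average agreement with the other views becomes an adaptive fusion weight, and per-view joint heatmaps are sparsified to their top-k responses before weighted fusion. The softmax weighting, the top-k rule, and all function names and parameters are illustrative assumptions; the abstract does not specify the actual formulas.

```python
import numpy as np

def cosine_similarity_matrix(features):
    """Pairwise cosine similarity between per-view feature vectors.

    features: (V, D) array, one embedding per camera view (assumed to
    come from a shared Siamese-style backbone). Returns a (V, V)
    similarity matrix S with S[i, j] in [-1, 1].
    """
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    return norm @ norm.T

def adaptive_fusion_weights(sim_matrix, temperature=1.0):
    """Turn each view's average agreement with the other views into a
    softmax fusion weight: views consistent with the rest get more
    weight, outlier (e.g. heavily occluded) views get less. The
    softmax-with-temperature form is a hypothetical stand-in for the
    paper's dynamic adjustment mechanism."""
    V = sim_matrix.shape[0]
    # Mean similarity to the *other* views (exclude self-similarity).
    consistency = (sim_matrix.sum(axis=1) - np.diag(sim_matrix)) / (V - 1)
    logits = consistency / temperature
    logits -= logits.max()          # numerical stability for the softmax
    w = np.exp(logits)
    return w / w.sum()

def fuse_heatmaps(heatmaps, weights, top_k=32):
    """Weighted fusion of per-view joint heatmaps.

    heatmaps: (V, J, H, W); weights: (V,). Exploits heatmap sparsity:
    only each view's top-k responses per joint contribute, a rough
    analogue of the paper's point-to-point matching (the exact
    matching rule is not given in the abstract).
    """
    V, J, H, W = heatmaps.shape
    fused = np.zeros((J, H, W))
    for v in range(V):
        hm = heatmaps[v].reshape(J, -1)
        # Zero out everything below each joint's k-th largest response.
        thresh = np.partition(hm, -top_k, axis=1)[:, -top_k][:, None]
        sparse = np.where(hm >= thresh, hm, 0.0).reshape(J, H, W)
        fused += weights[v] * sparse
    return fused

# Toy usage: 4 views, 17 joints, 64x64 heatmaps.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 128))
S = cosine_similarity_matrix(feats)
w = adaptive_fusion_weights(S)
hms = rng.random(size=(4, 17, 64, 64))
fused = fuse_heatmaps(hms, w)
print(w, fused.shape)
```

Under this scheme a heavily occluded view, whose features disagree with the others, automatically receives a small fusion weight, which is the intuition behind the adaptive-weight design.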
Source journal: Displays (Engineering & Technology — Engineering: Electrical & Electronic)
CiteScore: 4.60
Self-citation rate: 25.60%
Articles published: 138
Review time: 92 days
Journal description
Displays is the international journal covering the research and development of display technology, its effective presentation and perception of information, and applications and systems including the display-human interface. Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technology and human-factors engineers new to the field, will also occasionally be featured.
Latest articles in this journal
An end-to-end Chinese-Braille translation method based on mT5: Vocabulary expansion and structural enhancement
Towards high-dimensional IMU-based human activity recognition: data construction via 3D body modeling and classification with a multi-channel attention fusion network
Single blind image deblurring: advances and prospects
All Inkjet-Printed red Micro Quantum-Dots Light-Emitting diodes (QLEDs): Fabrication and performance
Comparative analysis of virtual keyboard typing methods in VR: controller, poke, and pinch techniques