Dual-referenced assistive network for action quality assessment

Neurocomputing · IF 5.5 · Q1 (Computer Science, Artificial Intelligence) · CAS Region 2 (Computer Science) · Pub Date: 2024-10-28 · DOI: 10.1016/j.neucom.2024.128786
Keyi Huang, Yi Tian, Chen Yu, Yaping Huang
Neurocomputing, Volume 614, Article 128786. Journal Article, published online 2024-10-28. Full text: https://www.sciencedirect.com/science/article/pii/S0925231224015571
Citations: 0

Abstract

Action quality assessment (AQA) aims to evaluate the performance quality of a specific action. It is a challenging task, as it requires identifying subtle differences between videos containing the same action. Most existing AQA methods directly adopt a pretrained network designed for other tasks to extract video features, which are too coarse to describe the fine-grained details of action quality. In this paper, we propose a novel Dual-Referenced Assistive (DuRA) network to polish the original coarse-grained features into fine-grained, quality-oriented representations. Specifically, we introduce two levels of referenced assistants that highlight discriminative quality-related content by comparing a target video with referenced objects, instead of directly estimating the quality score from an individual video. First, we design a Rating-guided Attention module, which exploits a series of semantic-level referenced assistants to acquire implicit hierarchical semantic knowledge and progressively emphasize quality-focused features embedded in the original inherent information. Subsequently, we further design a pair of Consistency Preserving constraints, which introduce a set of individual-level referenced assistants to further eliminate score-unrelated information through more detailed comparisons of differences between actions. Experiments show that our proposed method achieves promising performance on the AQA-7 and MTL-AQA datasets.
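The dual-reference idea in the abstract — refining a target video's features by comparing them against rated reference exemplars rather than scoring the video in isolation — can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, the dot-product attention, the residual refinement, and the attention-weighted score readout are all assumptions, and the actual DuRA network operates on video features from a pretrained backbone with learned parameters.

```python
import numpy as np

def rating_guided_attention(target_feat, ref_feats, ref_scores):
    """Hypothetical sketch of reference-guided refinement.

    target_feat: (D,) feature of the target video
    ref_feats:   (R, D) features of R reference (assistant) videos
    ref_scores:  (R,) known quality ratings of the references
    """
    # Similarity between the target and each reference assistant.
    sims = ref_feats @ target_feat                     # (R,)
    # Softmax attention weights (shifted for numerical stability).
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    # Blend reference features by attention to emphasize
    # quality-related content, then refine the target residually.
    context = weights @ ref_feats                      # (D,)
    refined = target_feat + context
    # Read out a score as the attention-weighted mix of reference ratings.
    score = float(weights @ ref_scores)
    return refined, score
```

With identical references the weights are uniform, so the predicted score reduces to the mean of the reference ratings; in a learned model, the attention would instead concentrate on references whose execution quality most resembles the target.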
Source journal: Neurocomputing (Engineering/Technology — Computer Science: Artificial Intelligence)
CiteScore: 13.10 · Self-citation rate: 10.00% · Articles per year: 1382 · Review time: 70 days
About the journal: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice and applications are the essential topics covered.
Latest articles in this journal:
Editorial Board
Extending the learning using privileged information paradigm to logistic regression
DoA-ViT: Dual-objective Affine Vision Transformer for Data Insufficiency
CNN explanation methods for ordinal regression tasks
Superpixel semantics representation and pre-training for vision–language tasks