Weakly-Supervised Temporal Action Localization by Progressive Complementary Learning

IEEE Transactions on Circuits and Systems for Video Technology · IF 11.1 · JCR Q1, Region 1 (Engineering, Electrical & Electronic) · Pub Date: 2024-09-10 · DOI: 10.1109/TCSVT.2024.3456795
Jia-Run Du;Jia-Chang Feng;Kun-Yu Lin;Fa-Ting Hong;Zhongang Qi;Ying Shan;Jian-Fang Hu;Wei-Shi Zheng
Volume 35, Issue 1, pp. 938-952
Citations: 0

Abstract

Weakly-Supervised Temporal Action Localization (WSTAL) aims to localize and classify action instances in long untrimmed videos with only video-level category labels as supervision. A critical challenge of WSTAL is the large gap between the available video-level supervision and the unavailable snippet-level supervision. Prevailing methods typically assign pseudo labels to snippets, but they suffer from significant noise in these pseudo snippet-level labels. In this work, we address WSTAL from a novel category-exclusion perspective, which gradually strengthens snippet-level supervision to bridge the gap. Our proposed Progressive Complementary Learning (ProCL) is inspired by the fact that video-level labels precisely indicate the categories that no snippet can belong to, a cue ignored by previous works. Accordingly, we first exclude these surely non-existent categories through deterministic complementary learning. We then introduce entropy-based pseudo complementary learning, which excludes additional categories for less ambiguous snippets. Furthermore, for the remaining ambiguous snippets, we reduce the ambiguity by distinguishing foreground actions from the background. Extensive experimental results show that our method achieves new state-of-the-art performance on the THUMOS14, ActivityNet1.3, and MultiTHUMOS benchmarks.
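The two ideas sketched in the abstract (deterministic exclusion of categories absent from the video-level label, then entropy-gated pseudo exclusion for confident snippets) can be illustrated as follows. This is a minimal sketch based only on the abstract: the function names, the plain negative-log penalty, and the normalization are assumptions, not the paper's exact formulation.

```python
import math

def complementary_loss(snippet_probs, video_label, eps=1e-6):
    """Deterministic complementary learning (illustrative sketch):
    penalize any probability mass a snippet assigns to categories
    the video-level label marks as surely absent (label entry 0).

    snippet_probs: T x C list of per-snippet class probabilities.
    video_label:   length-C multi-hot video-level label.
    """
    absent = [c for c, y in enumerate(video_label) if y == 0]
    if not absent:
        return 0.0
    total = 0.0
    for probs in snippet_probs:
        for c in absent:
            total += -math.log(1.0 - probs[c] + eps)  # push p[c] toward 0
    return total / (len(snippet_probs) * len(absent))

def snippet_entropy(probs, eps=1e-6):
    """Shannon entropy of a snippet's normalized class distribution,
    used as an ambiguity score: a low-entropy (confident) snippet can
    have additional low-probability categories pseudo-excluded."""
    z = sum(probs) + eps
    p = [x / z for x in probs]
    return -sum(x * math.log(x + eps) for x in p)
```

Under this sketch, a snippet whose entropy falls below a threshold would be treated as unambiguous and have extra categories added to its exclusion set, progressively tightening the snippet-level supervision.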
Source journal metrics: CiteScore 13.80 · Self-citation rate 27.40% · Articles per year 660 · Review time 5 months
About the journal: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued.