Beyond boundaries: Hierarchical-contrast unsupervised temporal action localization with high-coupling feature learning

IF 7.6 | CAS Quartile 1 (Computer Science) | JCR Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pattern Recognition | Pub Date: 2025-02-03 | DOI: 10.1016/j.patcog.2025.111421
Yuanyuan Liu, Ning Zhou, Yuxuan Huang, Shuyang Liu, Leyuan Liu, Wujie Zhou, Chang Tang, Ke Wang
{"title":"Beyond boundaries: Hierarchical-contrast unsupervised temporal action localization with high-coupling feature learning","authors":"Yuanyuan Liu ,&nbsp;Ning Zhou ,&nbsp;Yuxuan Huang ,&nbsp;Shuyang Liu ,&nbsp;Leyuan Liu ,&nbsp;Wujie Zhou ,&nbsp;Chang Tang ,&nbsp;Ke Wang","doi":"10.1016/j.patcog.2025.111421","DOIUrl":null,"url":null,"abstract":"<div><div>Current unsupervised temporal action localization (UTAL) methods mainly use clustering and localization with independent learning mechanisms. However, these individual mechanisms are low-coupled and struggle to finely localize action-background boundary information due to the lack of feature interactions in the clustering and localization process. To address this, we propose an end-to-end Hierarchical-Contrast UTAL (HC-UTAL) framework with high-coupling multi-task feature learning. HC-UTAL incorporates coarse-to-fine contrastive learning (CL) at three levels: <em>video level</em>, <em>instance level</em> and <em>boundary level</em>, thus obtaining adaptive interaction and robust performance. We first employ the <em>video-level CL</em> on video-level and cluster-level feature learning, generating video action pseudo-labels. Then, using the video action pseudo-labels, we further devise the <em>instance-level CL</em> on action-related feature learning for coarse localization and the <em>boundary-level CL</em> on ambiguous action-background boundary feature learning for finer localization, respectively. We conduct extensive experiments on THUMOS’14, ActivityNet v1.2, and ActivityNet v1.3 datasets. The results demonstrate that our method achieves state-of-the-art performance. The code and trained models are available at: <span><span>https://github.com/bugcat9/HC-UTAL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49713,"journal":{"name":"Pattern Recognition","volume":"162 ","pages":"Article 111421"},"PeriodicalIF":7.6000,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0031320325000810","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Current unsupervised temporal action localization (UTAL) methods mostly treat clustering and localization as independent learning mechanisms. Because these loosely coupled mechanisms share no feature interactions during clustering and localization, they struggle to localize action-background boundaries precisely. To address this, we propose an end-to-end Hierarchical-Contrast UTAL (HC-UTAL) framework with high-coupling multi-task feature learning. HC-UTAL applies coarse-to-fine contrastive learning (CL) at three levels: video level, instance level, and boundary level, yielding adaptive feature interaction and robust performance. We first employ video-level CL on video-level and cluster-level feature learning to generate video action pseudo-labels. Guided by these pseudo-labels, we then devise instance-level CL on action-related feature learning for coarse localization, and boundary-level CL on ambiguous action-background boundary feature learning for finer localization. Extensive experiments on the THUMOS’14, ActivityNet v1.2, and ActivityNet v1.3 datasets demonstrate that our method achieves state-of-the-art performance. The code and trained models are available at: https://github.com/bugcat9/HC-UTAL.
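To make the video-level stage concrete: the abstract describes contrasting video-level features against cluster-level features to produce action pseudo-labels. The following is a minimal, hypothetical PyTorch sketch of an InfoNCE-style loss of that kind, where each video embedding is pulled toward the centroid of its pseudo-labeled cluster and pushed away from the others. The function name, signature, and toy data are illustrative assumptions, not the authors' implementation; see the repository linked above for the actual code.

```python
# A minimal sketch of a video-level contrastive (InfoNCE-style) loss, assuming
# video embeddings are contrasted against cluster centroids (e.g., from k-means)
# under pseudo-labels. Names such as video_level_cl_loss are hypothetical.
import torch
import torch.nn.functional as F


def video_level_cl_loss(video_feats: torch.Tensor,
                        centroids: torch.Tensor,
                        pseudo_labels: torch.Tensor,
                        temperature: float = 0.1) -> torch.Tensor:
    """Contrast each video embedding against all cluster centroids.

    video_feats:   (N, D) video-level embeddings
    centroids:     (K, D) cluster-level embeddings
    pseudo_labels: (N,)   cluster index assigned to each video
    """
    v = F.normalize(video_feats, dim=1)   # (N, D) unit-norm video features
    c = F.normalize(centroids, dim=1)     # (K, D) unit-norm centroids
    logits = v @ c.t() / temperature      # (N, K) scaled cosine similarities
    # InfoNCE reduces to cross-entropy where the "positive" class for each
    # video is its own cluster centroid.
    return F.cross_entropy(logits, pseudo_labels)


if __name__ == "__main__":
    # Toy usage: 8 videos, 128-d features, 4 pseudo-action clusters.
    torch.manual_seed(0)
    feats = torch.randn(8, 128, requires_grad=True)
    cents = torch.randn(4, 128)
    labels = torch.randint(0, 4, (8,))
    loss = video_level_cl_loss(feats, cents, labels)
    loss.backward()
    print(f"video-level CL loss: {loss.item():.4f}")
```

Analogous losses at the instance and boundary levels would, per the abstract, contrast action-related snippet features against background features for coarse localization, and boundary-adjacent features against both sides for finer localization.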
Source journal: Pattern Recognition (Engineering Technology / Electrical & Electronic Engineering)
CiteScore: 14.40
Self-citation rate: 16.20%
Articles per year: 683
Review time: 5.6 months
Journal introduction: The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas such as biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago in the early days of computer science, has since grown significantly in scope and influence.
Latest articles in this journal:
- Editorial Board
- Contrastive calibration on consensus and complementary multi-view representations
- Adversarial supervised contrastive feature learning for cross-modal retrieval
- A visual-textual mutual guidance fusion network for remote sensing visual question answering
- Generalizable face forgery detection via mining single-step reconstruction difference