Title: Resisting Noise in Pseudo Labels: Audible Video Event Parsing With Evidential Learning
Authors: Xun Jiang; Xing Xu; Liqing Zhu; Zhe Sun; Andrzej Cichocki; Heng Tao Shen
Journal: IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 6, pp. 10874-10888
DOI: 10.1109/TNNLS.2024.3505674
Publication date: 2024-12-23
URL: https://ieeexplore.ieee.org/document/10812353/
Citation count: 0
Abstract
Perceiving temporal events and discriminating their modality types in audible videos, a task known as audio-visual video parsing (AVVP), is becoming a research hotspot in multimodal video understanding. The AVVP task generally follows a weakly supervised learning setting, since only video-level labels are provided. Most existing works first generate modality-wise pseudo labels (PLs) and then learn to parse audio or visual events from the audible videos. However, this paradigm inevitably leads to two defects: 1) the generated PLs for each modality are not fully reliable and may confuse models when adopted as supervision signals for discriminating modalities; and 2) the absence of temporal annotations increases the ambiguity in localizing foregrounds in videos, further making models prone to disturbance by noisy labels. To tackle these problems, we propose a novel AVVP framework termed noise-resistant event parsing (NREP), which introduces evidential deep learning (EDL) to overcome the limitations of noisy pseudo supervision. Specifically, our NREP framework consists of three key components: 1) modality-wise evidential learning (MEL), which discriminates the modality-class dependency; 2) temporal-wise evidential learning (TEL), which explores meaningful foregrounds; and 3) foreground-background consistency learning (FBCL), which coordinates the two evidential learning branches above. By perceiving meaningful video content and learning evidence for modality dependencies, our method suppresses the disturbance of noise in the generated PLs, achieving remarkable performance under different PL generation strategies. We evaluate our NREP method on two AVVP benchmark datasets and demonstrate that it consistently establishes a new state-of-the-art. Our implementation code is available at https://github.com/CFM-MSG/NREP.
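The abstract does not spell out the evidential deep learning (EDL) machinery that NREP builds on. As a rough illustration only, the following is a minimal NumPy sketch of the standard Dirichlet-based evidential formulation commonly used in EDL classifiers: non-negative evidence is derived from logits, mapped to Dirichlet parameters, and read out as class probabilities plus a per-sample uncertainty. The function names and the ReLU evidence mapping are assumptions for this sketch, not the paper's exact design.

```python
import numpy as np

def edl_outputs(logits):
    """Map raw logits to Dirichlet evidence, class probabilities, and uncertainty.

    Standard subjective-logic EDL formulation: evidence e_k = relu(logit_k),
    Dirichlet parameter alpha_k = e_k + 1, total strength S = sum_k alpha_k,
    expected probability p_k = alpha_k / S, uncertainty mass u = K / S.
    """
    evidence = np.maximum(logits, 0.0)            # non-negative evidence
    alpha = evidence + 1.0                        # Dirichlet parameters
    strength = alpha.sum(axis=-1, keepdims=True)  # S = sum_k alpha_k
    probs = alpha / strength                      # expected class probabilities
    uncertainty = logits.shape[-1] / strength     # u = K / S, in (0, 1]
    return alpha, probs, uncertainty

def edl_nll_loss(alpha, one_hot_target):
    """Type-II maximum-likelihood loss for a Dirichlet output:
    L = sum_k y_k * (log S - log alpha_k). Low evidence for the labeled
    class yields a large loss; strong evidence yields a small one."""
    strength = alpha.sum(axis=-1, keepdims=True)
    return (one_hot_target * (np.log(strength) - np.log(alpha))).sum(axis=-1)

# A confident prediction accumulates evidence and drives uncertainty down;
# an ambiguous one keeps alpha near uniform and uncertainty near 1.
alpha_c, p_c, u_c = edl_outputs(np.array([10.0, 0.0, 0.0]))  # confident
alpha_a, p_a, u_a = edl_outputs(np.array([0.0, 0.0, 0.0]))   # no evidence, u = 1.0
```

The per-sample uncertainty `u` is what makes this style of learning attractive under noisy pseudo labels: samples whose PLs conflict with the accumulated evidence retain high uncertainty, so their influence on supervision can be down-weighted instead of memorized.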
Journal introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.