{"title":"Improving Vision Anomaly Detection With the Guidance of Language Modality","authors":"Dong Chen;Kaihang Pan;Guangyu Dai;Guoming Wang;Yueting Zhuang;Siliang Tang;Mingliang Xu","doi":"10.1109/TMM.2024.3521813","DOIUrl":null,"url":null,"abstract":"Recent years have seen a surge of interest in anomaly detection. However, existing unsupervised anomaly detectors, particularly those for the vision modality, face significant challenges due to redundant information and sparse latent space. In contrast, anomaly detectors demonstrate superior performance in the language modality due to the unimodal nature of the data. This paper tackles the aforementioned challenges for vision modality from a multimodal point of view. Specifically, we propose Cross-modal Guidance (CMG), comprising of Cross-modal Entropy Reduction (CMER) and Cross-modal Linear Embedding (CMLE), to address the issues of redundant information and sparse latent space, respectively. CMER involves masking portions of the raw image and computing the matching score with the corresponding text. Essentially, CMER eliminates irrelevant pixels to direct the detector's focus towards critical content. To learn a more compact latent space for the vision anomaly detection, CMLE learns a correlation structure matrix from the language modality. Then, the acquired matrix compels the distribution of images to resemble that of texts in the latent space. Extensive experiments demonstrate the effectiveness of the proposed methods. Particularly, compared to the baseline that only utilizes images, the performance of CMG has been improved by 16.81%. Ablation experiments further confirm the synergy among the proposed CMER and CMLE, as each component depends on the other to achieve optimal performance.","PeriodicalId":13273,"journal":{"name":"IEEE Transactions on Multimedia","volume":"27 ","pages":"1410-1419"},"PeriodicalIF":8.4000,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Multimedia","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10814059/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Recent years have seen a surge of interest in anomaly detection. However, existing unsupervised anomaly detectors, particularly those for the vision modality, face significant challenges due to redundant information and a sparse latent space. In contrast, anomaly detectors demonstrate superior performance in the language modality due to the unimodal nature of the data. This paper tackles these challenges for the vision modality from a multimodal point of view. Specifically, we propose Cross-modal Guidance (CMG), comprising Cross-modal Entropy Reduction (CMER) and Cross-modal Linear Embedding (CMLE), to address redundant information and the sparse latent space, respectively. CMER masks portions of the raw image and computes a matching score between the masked image and the corresponding text; in essence, it eliminates irrelevant pixels to focus the detector on critical content. To learn a more compact latent space for vision anomaly detection, CMLE learns a correlation structure matrix from the language modality; the acquired matrix then compels the distribution of images in the latent space to resemble that of texts. Extensive experiments demonstrate the effectiveness of the proposed methods. In particular, CMG improves performance by 16.81% over a baseline that uses only images. Ablation experiments further confirm the synergy between CMER and CMLE, as each component depends on the other to achieve optimal performance.
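To make the two components concrete, the following is a minimal sketch of what CMER-style masked matching and CMLE-style correlation alignment could look like. It assumes a CLIP-style image-text encoder pair; the toy encoder, patch size, masking scheme, and loss form here are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of the two CMG components described in the abstract.
# The encoders below are hypothetical stand-ins (e.g. for a CLIP-style
# image/text encoder pair); the paper's real architecture may differ.
import torch
import torch.nn.functional as F

def cmer_scores(image_encoder, text_emb, image, patch=32):
    """Cross-modal Entropy Reduction (sketch): mask one patch at a time
    and score the masked image against the paired text. Patches whose
    removal barely changes the score are treated as irrelevant pixels."""
    _, _, H, W = image.shape
    scores = []
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            masked = image.clone()
            masked[:, :, y:y + patch, x:x + patch] = 0.0  # zero out one patch
            img_emb = image_encoder(masked)
            scores.append(F.cosine_similarity(img_emb, text_emb).item())
    return scores  # small score drop => patch is likely redundant

def cmle_loss(img_embs, txt_embs):
    """Cross-modal Linear Embedding (sketch): push the correlation
    structure of image embeddings toward that of text embeddings, so the
    vision latent space inherits the denser structure of the text space."""
    corr_img = torch.corrcoef(img_embs.T)            # feature-wise correlation
    corr_txt = torch.corrcoef(txt_embs.T).detach()   # language side is the target
    return F.mse_loss(corr_img, corr_txt)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy encoder stand-in: a frozen random projection to an 8-d embedding.
    proj = torch.nn.Conv2d(3, 8, kernel_size=64, stride=64)
    image_encoder = lambda x: proj(x).flatten(1)
    image = torch.randn(1, 3, 64, 64)
    text_emb = torch.randn(1, 8)
    print(cmer_scores(image_encoder, text_emb, image, patch=32))
    print(cmle_loss(torch.randn(16, 8), torch.randn(16, 8)))
```

In a full pipeline, the per-patch CMER scores could be thresholded into a relevance mask applied before the detector, while a term like cmle_loss would be added to the detector's training objective.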
About the journal:
The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.