Landmark Block-Embedded Aggregation Autoencoder for Anomaly Detection
Ye Liu; Yuanrong Tian; Yunlong Mi; Hui Liu; Jianqiang Wang; Witold Pedrycz
DOI: 10.1109/TSMC.2024.3496332
IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 55, no. 2, pp. 1004-1019
Publication date: 2024-11-26
URL: https://ieeexplore.ieee.org/document/10768191/
Citations: 0
Abstract
Unsupervised anomaly detection (AD) methods based on deep learning have attracted great attention in unlabeled data mining. The performance of these AD methods usually depends on the representation ability of normal patterns and the quality of the training data. However, most deep unsupervised AD methods do not effectively capture the distribution characteristics and the diversity of normal patterns. Moreover, when the training data are contaminated with anomalies, they ignore the interference of abnormal samples on the model. To tackle these issues, this article proposes the landmark block-embedded aggregation autoencoder (LBAA) for AD. LBAA constructs a filter and an aggregation autoencoder by introducing a novel normal feature learning approach that improves data quality and widens the distribution differences between normal data and anomalies. In the normal feature learning, we define a landmark block to represent the distribution of a normal class and an adaptive mechanism for selecting the number of landmark blocks to obtain diverse normal features. On this basis, the filter is constructed to remove distinct anomalies and improve the quality of the contaminated training data. Then, a weighted objective function is proposed to train the aggregation autoencoder; it reduces the interference of anomalies and aggregates normal samples, thereby increasing the feature differences between normal and abnormal samples. Next, the trained aggregation autoencoder computes the anomaly score of each sample by summing its reconstruction error and its median sparseness to the landmark blocks. Finally, we report a comprehensive experiment on multiple datasets. The obtained results validate the effectiveness and robustness of LBAA.
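The abstract only outlines the scoring step, so the sketch below is a rough, non-authoritative illustration of how an anomaly score combining a reconstruction error with a distance-to-landmark term could be computed. The function name anomaly_scores, its argument names, and the reading of "median sparseness" as the median, over landmark blocks, of the distance to each block's nearest landmark are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

def anomaly_scores(x, recon, landmark_blocks):
    """Hypothetical sketch of an LBAA-style anomaly score.

    x               : (n, d) array of test samples
    recon           : (n, d) reconstructions of x from the trained autoencoder
    landmark_blocks : list of (k_i, d) arrays, one block of landmark vectors
                      per learned normal class
    """
    # Reconstruction error: squared L2 distance between each sample and
    # its reconstruction by the trained aggregation autoencoder.
    recon_err = np.sum((x - recon) ** 2, axis=1)

    # "Median sparseness" to the landmark blocks (assumed here to be the
    # median over blocks of the distance to the nearest landmark in each block).
    dists_per_block = []
    for block in landmark_blocks:
        # Distance from every sample to every landmark in this block.
        d = np.linalg.norm(x[:, None, :] - block[None, :, :], axis=2)
        dists_per_block.append(d.min(axis=1))  # nearest landmark per sample
    median_sparseness = np.median(np.stack(dists_per_block, axis=1), axis=1)

    # Final score: sum of reconstruction error and median sparseness,
    # as described in the abstract; larger values indicate likelier anomalies.
    return recon_err + median_sparseness
```

Under these assumptions, samples that are both poorly reconstructed and far from every learned normal class receive the highest scores, which matches the abstract's intent of widening the gap between normal and abnormal samples.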
Journal Introduction
The IEEE Transactions on Systems, Man, and Cybernetics: Systems encompasses the fields of systems engineering, covering issue formulation, analysis, and modeling throughout the systems engineering lifecycle phases. It addresses decision-making, issue interpretation, systems management, processes, and various methods such as optimization, modeling, and simulation in the development and deployment of large systems.