Fair Anomaly Detection For Imbalanced Groups

Ziwei Wu, Lecheng Zheng, Yuancheng Yu, Ruizhong Qiu, John Birge, Jingrui He
{"title":"针对不平衡群体的公平异常检测","authors":"Ziwei Wu, Lecheng Zheng, Yuancheng Yu, Ruizhong Qiu, John Birge, Jingrui He","doi":"arxiv-2409.10951","DOIUrl":null,"url":null,"abstract":"Anomaly detection (AD) has been widely studied for decades in many real-world\napplications, including fraud detection in finance, and intrusion detection for\ncybersecurity, etc. Due to the imbalanced nature between protected and\nunprotected groups and the imbalanced distributions of normal examples and\nanomalies, the learning objectives of most existing anomaly detection methods\ntend to solely concentrate on the dominating unprotected group. Thus, it has\nbeen recognized by many researchers about the significance of ensuring model\nfairness in anomaly detection. However, the existing fair anomaly detection\nmethods tend to erroneously label most normal examples from the protected group\nas anomalies in the imbalanced scenario where the unprotected group is more\nabundant than the protected group. This phenomenon is caused by the improper\ndesign of learning objectives, which statistically focus on learning the\nfrequent patterns (i.e., the unprotected group) while overlooking the\nunder-represented patterns (i.e., the protected group). To address these\nissues, we propose FairAD, a fairness-aware anomaly detection method targeting\nthe imbalanced scenario. It consists of a fairness-aware contrastive learning\nmodule and a rebalancing autoencoder module to ensure fairness and handle the\nimbalanced data issue, respectively. Moreover, we provide the theoretical\nanalysis that shows our proposed contrastive learning regularization guarantees\ngroup fairness. Empirical studies demonstrate the effectiveness and efficiency\nof FairAD across multiple real-world datasets.","PeriodicalId":501301,"journal":{"name":"arXiv - CS - Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fair Anomaly Detection For Imbalanced Groups\",\"authors\":\"Ziwei Wu, Lecheng Zheng, Yuancheng Yu, Ruizhong Qiu, John Birge, Jingrui He\",\"doi\":\"arxiv-2409.10951\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Anomaly detection (AD) has been widely studied for decades in many real-world\\napplications, including fraud detection in finance, and intrusion detection for\\ncybersecurity, etc. Due to the imbalanced nature between protected and\\nunprotected groups and the imbalanced distributions of normal examples and\\nanomalies, the learning objectives of most existing anomaly detection methods\\ntend to solely concentrate on the dominating unprotected group. Thus, it has\\nbeen recognized by many researchers about the significance of ensuring model\\nfairness in anomaly detection. However, the existing fair anomaly detection\\nmethods tend to erroneously label most normal examples from the protected group\\nas anomalies in the imbalanced scenario where the unprotected group is more\\nabundant than the protected group. This phenomenon is caused by the improper\\ndesign of learning objectives, which statistically focus on learning the\\nfrequent patterns (i.e., the unprotected group) while overlooking the\\nunder-represented patterns (i.e., the protected group). To address these\\nissues, we propose FairAD, a fairness-aware anomaly detection method targeting\\nthe imbalanced scenario. 
It consists of a fairness-aware contrastive learning\\nmodule and a rebalancing autoencoder module to ensure fairness and handle the\\nimbalanced data issue, respectively. Moreover, we provide the theoretical\\nanalysis that shows our proposed contrastive learning regularization guarantees\\ngroup fairness. Empirical studies demonstrate the effectiveness and efficiency\\nof FairAD across multiple real-world datasets.\",\"PeriodicalId\":501301,\"journal\":{\"name\":\"arXiv - CS - Machine Learning\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Machine Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10951\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10951","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Anomaly detection (AD) has been widely studied for decades in many real-world applications, including fraud detection in finance and intrusion detection in cybersecurity. Due to the imbalance between protected and unprotected groups, and the imbalanced distributions of normal examples and anomalies, the learning objectives of most existing anomaly detection methods tend to concentrate solely on the dominating unprotected group. Thus, many researchers have recognized the significance of ensuring model fairness in anomaly detection. However, existing fair anomaly detection methods tend to erroneously label most normal examples from the protected group as anomalies in the imbalanced scenario, where the unprotected group is more abundant than the protected group. This phenomenon is caused by the improper design of learning objectives, which statistically focus on learning the frequent patterns (i.e., the unprotected group) while overlooking the under-represented patterns (i.e., the protected group). To address these issues, we propose FairAD, a fairness-aware anomaly detection method targeting the imbalanced scenario. It consists of a fairness-aware contrastive learning module and a rebalancing autoencoder module, which ensure fairness and handle the imbalanced data issue, respectively. Moreover, we provide a theoretical analysis showing that the proposed contrastive learning regularization guarantees group fairness. Empirical studies demonstrate the effectiveness and efficiency of FairAD across multiple real-world datasets.
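As a concrete illustration of the first module, the following is a minimal PyTorch sketch of one plausible reading of a fairness-aware contrastive regularizer: normal examples from different groups are treated as positive pairs, so the encoder is pushed to produce embeddings that do not separate the protected and unprotected groups. The cross-group pairing scheme, the temperature, and all names below are illustrative assumptions, not FairAD's published loss.

```python
# Hypothetical sketch of a fairness-aware contrastive regularizer.
# Cross-group pairs of normal examples act as positives, aligning the
# embedding distributions of the protected and unprotected groups.
import torch
import torch.nn.functional as F

def fairness_contrastive_loss(z: torch.Tensor,
                              group: torch.Tensor,
                              temperature: float = 0.5) -> torch.Tensor:
    """z: (N, d) embeddings of normal examples; group: (N,) group labels (0/1)."""
    z = F.normalize(z, dim=1)              # work in cosine-similarity space
    sim = z @ z.t() / temperature          # (N, N) pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    # InfoNCE-style denominator over all other examples in the batch
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    # positives: pairs whose group labels differ (cross-group alignment)
    pos_mask = (group.unsqueeze(0) != group.unsqueeze(1)).float()
    pos_count = pos_mask.sum(dim=1).clamp(min=1.0)  # avoid /0 in single-group batches
    return -((log_prob * pos_mask).sum(dim=1) / pos_count).mean()
```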
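Similarly, here is a minimal sketch of what a rebalancing autoencoder could look like: per-example reconstruction errors are reweighted by inverse group frequency, so the under-represented protected group is not underfit by the reconstruction objective. The architecture sizes and the inverse-frequency weighting are assumptions for illustration, not the paper's exact design.

```python
# Hypothetical rebalancing autoencoder: the reconstruction loss is
# reweighted by inverse group frequency, so minority-group examples
# carry as much training signal as majority-group ones.
import torch
import torch.nn as nn

class RebalancingAutoencoder(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return self.decoder(z), z

def rebalanced_reconstruction_loss(x, x_hat, group):
    """Mean squared error per example, weighted by inverse group frequency."""
    err = ((x - x_hat) ** 2).mean(dim=1)         # (N,) reconstruction errors
    freq = torch.bincount(group).float()[group]  # each example's group size in the batch
    w = 1.0 / freq
    return (w / w.sum() * err).sum()             # normalized weighted average
```

At test time, the per-example reconstruction error would serve as the anomaly score; a full training objective would plausibly combine the two pieces, e.g. the rebalanced reconstruction loss plus a coefficient times fairness_contrastive_loss(z, group), trading off detection accuracy against group fairness.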