Exploring the Landscape of Machine Unlearning: A Comprehensive Survey and Taxonomy

IF 8.9 | CAS Region 1 (Computer Science) | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | IEEE Transactions on Neural Networks and Learning Systems | Pub Date: 2024-11-12 | DOI: 10.1109/TNNLS.2024.3486109
Thanveer Shaik, Xiaohui Tao, Haoran Xie, Lin Li, Xiaofeng Zhu, Qing Li
IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 7, pp. 11676-11696. URL: https://ieeexplore.ieee.org/document/10750906/
Citations: 0

Abstract

Machine unlearning (MU) is gaining increasing attention due to the need to remove or modify predictions made by machine learning (ML) models. While training models have become more efficient and accurate, the importance of unlearning previously learned information has become increasingly significant in fields such as privacy, security, and ethics. This article presents a comprehensive survey of MU, covering current state-of-the-art techniques and approaches, including data deletion, perturbation, and model updates. In addition, commonly used metrics and datasets are presented. This article also highlights the challenges that need to be addressed, including attack sophistication, standardization, transferability, interpretability, training data, and resource constraints. The contributions of this article include discussions about the potential benefits of MU and its future directions. Additionally, this article emphasizes the need for researchers and practitioners to continue exploring and refining unlearning techniques to ensure that ML models can adapt to changing circumstances while maintaining user trust. The importance of unlearning is further highlighted in making artificial intelligence (AI) more trustworthy and transparent, especially with the growing importance of AI across various domains that involve large amounts of personal user data.
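Of the technique families the abstract names, data deletion is the simplest to illustrate. The sketch below (not taken from the paper; the model and names are hypothetical) shows exact unlearning for a toy model whose parameters are additive sufficient statistics, namely per-class token counts: subtracting a record's contribution yields exactly the model that would result from retraining without that record.

```python
# Illustrative sketch: exact unlearning by data deletion on a toy
# count-based classifier. Because the "parameters" are additive counts,
# unlearning a record = subtracting its contribution (no retraining needed).
from collections import defaultdict

class CountModel:
    """Toy model: per-class token counts; predict = argmax count."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, tokens, label):
        for t in tokens:
            self.counts[label][t] += 1

    def unlearn(self, tokens, label):
        # Exact data deletion: remove this record's contribution.
        for t in tokens:
            self.counts[label][t] -= 1
            if self.counts[label][t] == 0:
                del self.counts[label][t]

    def predict(self, token):
        best, best_n = None, 0
        for label, c in self.counts.items():
            if c.get(token, 0) > best_n:
                best, best_n = label, c[token]
        return best  # None if no class has seen the token

m = CountModel()
m.learn(["free", "prize"], "spam")
m.learn(["meeting", "agenda"], "ham")
print(m.predict("prize"))  # spam
m.unlearn(["free", "prize"], "spam")  # forget the spam record
print(m.predict("prize"))  # None: the record's influence is fully removed
```

For models without additive structure (deep networks in particular), the survey's other families apply: perturbation-based methods approximately cancel a record's influence on the weights, while model-update methods (e.g., retraining affected shards) trade storage for bounded retraining cost.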
Source Journal
IEEE Transactions on Neural Networks and Learning Systems (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE)
CiteScore: 23.80
Self-citation rate: 9.60%
Articles per year: 2102
Review time: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.