The Price of Unlearning: Identifying Unlearning Risk in Edge Computing

IF 5.2 · CAS Tier 3 (Computer Science) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · ACM Transactions on Multimedia Computing, Communications and Applications · Pub Date: 2024-05-06 · DOI: 10.1145/3662184
Lefeng Zhang, Tianqing Zhu, Ping Xiong, Wanlei Zhou
Citations: 0

Abstract


Machine unlearning is an emerging paradigm that aims to make machine learning models “forget” what they have learned about particular data. It fulfills the requirements of privacy legislation (e.g., GDPR), which stipulates that individuals have the autonomy to determine the usage of their personal data. However, alongside all the achievements, there are still loopholes in machine unlearning that may cause significant losses for the system, especially in edge computing. Edge computing is a distributed computing paradigm with the purpose of migrating data processing tasks closer to terminal devices. While various machine unlearning approaches have been proposed to erase the influence of data sample(s), we claim that it might be dangerous to directly apply them in the realm of edge computing. A malicious edge node may broadcast (possibly fake) unlearning requests for target data sample(s) and then analyze the behavior of edge devices to infer useful information. In this paper, we exploit the vulnerabilities of current machine unlearning strategies in edge computing and propose a new inference attack to highlight the potential privacy risk. Furthermore, we develop a defense method against this particular type of attack and propose the price of unlearning (PoU) as a means to evaluate the inefficiency it brings to an edge computing system. We provide theoretical analyses to show the upper bound of the PoU using tools borrowed from game theory. The experimental results on real-world datasets demonstrate that the proposed defense strategy is effective and capable of preventing an adversary from deducing useful information.
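The attack outlined in the abstract (an adversary issues an unlearning request for a target sample, then watches how an edge node's behavior changes) can be sketched in miniature. This is a toy illustration, not the paper's actual attack: the "model" (a data mean), the exact-retraining unlearning step, and the shift threshold are all simplifying assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "model" for an edge node: the mean of its training data.
def train(data):
    return data.mean(axis=0)

# Exact unlearning: retrain from scratch without the target sample.
def unlearn(data, target):
    kept = np.array([x for x in data if not np.array_equal(x, target)])
    return train(kept)

data = rng.normal(size=(100, 5))   # the node's (private) training set
member = data[0].copy()            # a sample that IS in the set
non_member = rng.normal(size=5)    # a sample that is NOT

model_before = train(data)

# The adversary broadcasts a (possibly fake) unlearning request for
# `target` and compares the node's model before and after it is honored:
# any visible change reveals that `target` was in the training set.
def infer_membership(data, target, model_before, threshold=1e-8):
    model_after = unlearn(data, target)
    shift = np.linalg.norm(model_before - model_after)
    return shift > threshold

print(infer_membership(data, member, model_before))      # True
print(infer_membership(data, non_member, model_before))  # False
```

Even a node that honors unlearning requests faithfully leaks membership this way, which is why the paper argues a defense (and a cost metric for it) is needed.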

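The abstract does not define the PoU formally. By analogy with the price of anarchy in game theory (which the abstract's "tools borrowed from game theory" suggests), one plausible reading compares the system-wide cost incurred under the unlearning defense against the optimal cost without it. Every symbol below ($C$, $\sigma$, $\Sigma$) is an assumption of this sketch, not the paper's notation:

```latex
\[
\mathrm{PoU} \;=\; \sup_{\sigma \in \Sigma_{\text{unlearn}}}
  \frac{C(\sigma)}{C(\sigma^{*})} \;\ge\; 1
\]
% C(sigma): total system cost under a strategy profile sigma reached
% with the unlearning defense in place; sigma*: the cost-minimizing
% profile of the undefended system.
```

Under this reading, the paper's upper bound on the PoU would cap how much efficiency an edge computing system can lose by supporting defended unlearning.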