Trusted AI in Multiagent Systems: An Overview of Privacy and Security for Distributed Learning

IF 23.2 | JCR Region 1 (Computer Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Proceedings of the IEEE, vol. 111, no. 9, pp. 1097-1132 | Pub Date: 2023-09-14 | DOI: 10.1109/JPROC.2023.3306773
Chuan Ma;Jun Li;Kang Wei;Bo Liu;Ming Ding;Long Yuan;Zhu Han;H. Vincent Poor
Citations: 8

Abstract

Motivated by the advancing computational capacity of distributed end-user equipment (UE), as well as the increasing concerns about sharing private data, there has been considerable recent interest in machine learning (ML) and artificial intelligence (AI) that can be processed on distributed UEs. Specifically, in this paradigm, parts of an ML process are outsourced to multiple distributed UEs. Then, the processed information is aggregated on a certain level at a central server, which turns a centralized ML process into a distributed one and brings about significant benefits. However, this new distributed ML paradigm raises new risks in terms of privacy and security issues. In this article, we provide a survey of the emerging security and privacy risks of distributed ML from a unique perspective of information exchange levels, which are defined according to the key steps of an ML process, i.e., we consider the following levels: 1) the level of preprocessed data; 2) the level of learning models; 3) the level of extracted knowledge; and 4) the level of intermediate results. We explore and analyze the potential threats at each information exchange level based on an overview of current state-of-the-art attack mechanisms and then discuss possible defense methods against such threats. Finally, we complete the survey by providing an outlook on the challenges and possible directions for future research in this critical area.
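The paradigm the abstract describes — local processing on each UE, with only model-level information sent to a central server for aggregation — can be sketched as a toy federated-averaging loop. This is an illustrative sketch only, not an algorithm from the survey; the function names, the quadratic toy loss, and the unweighted average across UEs are all assumptions made for the example.

```python
# Toy sketch of distributed ML with exchange at the "learning model" level:
# each UE trains on its private data locally, and the central server only
# ever sees model weights, never the raw data.

def local_update(weight, data, lr=0.1, steps=5):
    """One UE: a few gradient steps on the local loss mean((w - x)^2)."""
    for _ in range(steps):
        grad = sum(2 * (weight - x) for x in data) / len(data)
        weight -= lr * grad
    return weight

def server_aggregate(local_weights):
    """Central server: aggregate by averaging the received weights."""
    return sum(local_weights) / len(local_weights)

# Three UEs; their data stays on-device, only weights are exchanged.
ue_data = [[1.0, 2.0], [3.0], [5.0, 6.0, 7.0]]
global_w = 0.0
for _ in range(10):  # communication rounds
    updates = [local_update(global_w, d) for d in ue_data]
    global_w = server_aggregate(updates)
```

Note that the exchanged weights are exactly the attack surface the survey analyzes at the model level: an honest-but-curious server can attempt inference from `updates`, and a malicious UE can poison its contribution before upload.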
Source Journal
Proceedings of the IEEE
Engineering & Technology – Engineering: Electrical & Electronic
CiteScore: 46.40
Self-citation rate: 1.00%
Annual publications: 160
Review time: 3-8 weeks
Journal introduction: Proceedings of the IEEE is the leading journal to provide in-depth review, survey, and tutorial coverage of the technical developments in electronics, electrical and computer engineering, and computer science. Consistently ranked as one of the top journals by Impact Factor, Article Influence Score and more, the journal serves as a trusted resource for engineers around the world.