Trustworthy machine learning in the context of security and privacy

IF 2.4 | Zone 4, Computer Science | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | International Journal of Information Security | Pub Date: 2024-04-03 | DOI: 10.1007/s10207-024-00813-3
Ramesh Upreti, Pedro G. Lind, Ahmed Elmokashfi, Anis Yazidi
{"title":"安全与隐私背景下值得信赖的机器学习","authors":"Ramesh Upreti, Pedro G. Lind, Ahmed Elmokashfi, Anis Yazidi","doi":"10.1007/s10207-024-00813-3","DOIUrl":null,"url":null,"abstract":"<p>Artificial intelligence-based algorithms are widely adopted in critical applications such as healthcare and autonomous vehicles. Mitigating the security and privacy issues of AI models, and enhancing their trustworthiness have become of paramount importance. We present a detailed investigation of existing security, privacy, and defense techniques and strategies to make machine learning more secure and trustworthy. We focus on the new paradigm of machine learning called federated learning, where one aims to develop machine learning models involving different partners (data sources) that do not need to share data and information with each other. In particular, we discuss how federated learning bridges security and privacy, how it guarantees privacy requirements of AI applications, and then highlight challenges that need to be addressed in the future. Finally, after having surveyed the high-level concepts of trustworthy AI and its different components and identifying present research trends addressing security, privacy, and trustworthiness separately, we discuss possible interconnections and dependencies between these three fields. All in all, we provide some insight to explain how AI researchers should focus on building a unified solution combining security, privacy, and trustworthy AI in the future.</p>","PeriodicalId":50316,"journal":{"name":"International Journal of Information Security","volume":"18 1","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Trustworthy machine learning in the context of security and privacy\",\"authors\":\"Ramesh Upreti, Pedro G. Lind, Ahmed Elmokashfi, Anis Yazidi\",\"doi\":\"10.1007/s10207-024-00813-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Artificial intelligence-based algorithms are widely adopted in critical applications such as healthcare and autonomous vehicles. Mitigating the security and privacy issues of AI models, and enhancing their trustworthiness have become of paramount importance. We present a detailed investigation of existing security, privacy, and defense techniques and strategies to make machine learning more secure and trustworthy. We focus on the new paradigm of machine learning called federated learning, where one aims to develop machine learning models involving different partners (data sources) that do not need to share data and information with each other. In particular, we discuss how federated learning bridges security and privacy, how it guarantees privacy requirements of AI applications, and then highlight challenges that need to be addressed in the future. Finally, after having surveyed the high-level concepts of trustworthy AI and its different components and identifying present research trends addressing security, privacy, and trustworthiness separately, we discuss possible interconnections and dependencies between these three fields. 
All in all, we provide some insight to explain how AI researchers should focus on building a unified solution combining security, privacy, and trustworthy AI in the future.</p>\",\"PeriodicalId\":50316,\"journal\":{\"name\":\"International Journal of Information Security\",\"volume\":\"18 1\",\"pages\":\"\"},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-04-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Information Security\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s10207-024-00813-3\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Information Security","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s10207-024-00813-3","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Artificial intelligence-based algorithms are widely adopted in critical applications such as healthcare and autonomous vehicles. Mitigating the security and privacy issues of AI models, and enhancing their trustworthiness have become of paramount importance. We present a detailed investigation of existing security, privacy, and defense techniques and strategies to make machine learning more secure and trustworthy. We focus on the new paradigm of machine learning called federated learning, where one aims to develop machine learning models involving different partners (data sources) that do not need to share data and information with each other. In particular, we discuss how federated learning bridges security and privacy, how it guarantees privacy requirements of AI applications, and then highlight challenges that need to be addressed in the future. Finally, after having surveyed the high-level concepts of trustworthy AI and its different components and identifying present research trends addressing security, privacy, and trustworthiness separately, we discuss possible interconnections and dependencies between these three fields. All in all, we provide some insight to explain how AI researchers should focus on building a unified solution combining security, privacy, and trustworthy AI in the future.
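The paper surveys these topics at a conceptual level and does not ship code. As a minimal sketch of the federated learning setup the abstract describes, the following hypothetical Python example simulates federated averaging (FedAvg) across three clients that exchange only model parameters, never their raw data. The function names, the linear model, and the synthetic datasets are illustrative assumptions, not taken from the paper.

```python
# Minimal federated averaging (FedAvg) sketch -- illustrative only, not taken from the paper.
# Each client fits a small linear model on its own private data; only model weights
# are sent to the server, which averages them into the next global model.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on mean squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of MSE for a linear model
        w -= lr * grad
    return w

# Three clients with synthetic private datasets that are never pooled.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

# Server loop: broadcast the global weights, collect local updates, average them.
global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    global_w = np.average(local_ws, axis=0, weights=sizes)  # FedAvg: size-weighted mean

print("recovered weights:", global_w)  # approaches [2.0, -1.0] without sharing raw data
```

In this sketch only the weight vectors cross between the clients and the server; each client's (X, y) pairs stay local, which is the property the abstract points to when it says the partners do not need to share data and information with each other.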

Source journal
International Journal of Information Security (Engineering & Technology - Computer Science: Theory & Methods)
CiteScore: 6.30
Self-citation rate: 3.10%
Articles per year: 52
Review time: 12 months
Journal description: The International Journal of Information Security is an English language periodical on research in information security which offers prompt publication of important technical work, whether theoretical, applicable, or related to implementation. Coverage includes system security: intrusion detection, secure end systems, secure operating systems, database security, security infrastructures, security evaluation; network security: Internet security, firewalls, mobile security, security agents, protocols, anti-virus and anti-hacker measures; content protection: watermarking, software protection, tamper resistant software; applications: electronic commerce, government, health, telecommunications, mobility.
Latest articles from this journal
“Animation” URL in NFT marketplaces considered harmful for privacy
An overview of proposals towards the privacy-preserving publication of trajectory data
Enhancing privacy protections in national identification systems: an examination of stakeholders’ knowledge, attitudes, and practices of privacy by design
An enhanced and verifiable lightweight authentication protocol for securing the Internet of Medical Things (IoMT) based on CP-ABE encryption
Secure multi-party computation with legally-enforceable fairness