Strategic Gradient Transmission With Targeted Privacy-Awareness in Model Training: A Stackelberg Game Analysis

Hezhe Sun;Yufei Wang;Huiwen Yang;Kaixuan Huo;Yuzhe Li
{"title":"模型训练中具有针对性隐私意识的策略梯度传输:斯塔克尔伯格博弈分析","authors":"Hezhe Sun;Yufei Wang;Huiwen Yang;Kaixuan Huo;Yuzhe Li","doi":"10.1109/TAI.2024.3389611","DOIUrl":null,"url":null,"abstract":"Privacy-aware machine learning paradigms have sparked widespread concern due to their ability to safeguard the local privacy of data owners, preventing the leakage of private information to untrustworthy platforms or malicious third parties. This article focuses on characterizing the interactions between the learner and the data owner within this privacy-aware training process. Here, the data owner hesitates to transmit the original gradient to the learner due to potential cybersecurity issues, such as gradient leakage and membership inference. To address this concern, we propose a Stackelberg game framework that models the training process. In this framework, the data owner's objective is not to maximize the discrepancy between the learner's obtained gradient and the true gradient but rather to ensure that the learner obtains a gradient closely resembling one deliberately designed by the data owner, while the learner's objective is to recover the true gradient as accurately as possible. We derive the optimal encoder and decoder using mismatched cost functions and characterize the equilibrium for specific cases, balancing model accuracy and local privacy. Numerical examples illustrate the main results, and we conclude with expanding discussions to suggest future investigations into reliable countermeasure designs.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Strategic Gradient Transmission With Targeted Privacy-Awareness in Model Training: A Stackelberg Game Analysis\",\"authors\":\"Hezhe Sun;Yufei Wang;Huiwen Yang;Kaixuan Huo;Yuzhe Li\",\"doi\":\"10.1109/TAI.2024.3389611\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Privacy-aware machine learning paradigms have sparked widespread concern due to their ability to safeguard the local privacy of data owners, preventing the leakage of private information to untrustworthy platforms or malicious third parties. This article focuses on characterizing the interactions between the learner and the data owner within this privacy-aware training process. Here, the data owner hesitates to transmit the original gradient to the learner due to potential cybersecurity issues, such as gradient leakage and membership inference. To address this concern, we propose a Stackelberg game framework that models the training process. In this framework, the data owner's objective is not to maximize the discrepancy between the learner's obtained gradient and the true gradient but rather to ensure that the learner obtains a gradient closely resembling one deliberately designed by the data owner, while the learner's objective is to recover the true gradient as accurately as possible. We derive the optimal encoder and decoder using mismatched cost functions and characterize the equilibrium for specific cases, balancing model accuracy and local privacy. 
Numerical examples illustrate the main results, and we conclude with expanding discussions to suggest future investigations into reliable countermeasure designs.\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-04-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10502336/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10502336/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Privacy-aware machine learning paradigms have sparked widespread concern due to their ability to safeguard the local privacy of data owners, preventing the leakage of private information to untrustworthy platforms or malicious third parties. This article focuses on characterizing the interactions between the learner and the data owner within this privacy-aware training process. Here, the data owner hesitates to transmit the original gradient to the learner due to potential cybersecurity issues, such as gradient leakage and membership inference. To address this concern, we propose a Stackelberg game framework that models the training process. In this framework, the data owner's objective is not to maximize the discrepancy between the learner's obtained gradient and the true gradient but rather to ensure that the learner obtains a gradient closely resembling one deliberately designed by the data owner, while the learner's objective is to recover the true gradient as accurately as possible. We derive the optimal encoder and decoder using mismatched cost functions and characterize the equilibrium for specific cases, balancing model accuracy and local privacy. Numerical examples illustrate the main results, and we conclude with expanding discussions to suggest future investigations into reliable countermeasure designs.
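The mismatched objectives described in the abstract can be illustrated with a small numerical sketch. The toy model below is an assumption for illustration only, not the paper's construction: the data owner's encoder is taken to be a simple blend of the true gradient g with a privately designed target t (mixing weight lam), both drawn from Gaussian priors, and the learner's decoder is the corresponding linear MMSE estimate of g. All names (g, t, y, lam, sigma2) are hypothetical.

    # A minimal toy sketch of the mismatched-objective interaction described in
    # the abstract. It is NOT the paper's equilibrium derivation: the blending
    # encoder, the Gaussian priors, and the linear MMSE decoder below are
    # illustrative assumptions chosen to keep the example self-contained.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 8          # gradient dimension (toy choice)
    lam = 0.5      # owner's mixing weight toward its designed target (assumed)
    sigma2 = 2.0   # prior variance of the owner's private target (assumed)

    # Leader (data owner): rather than the true gradient g, transmit a blend of
    # g and a privately designed target t, steering the learner toward t.
    g = rng.normal(size=d)                          # true gradient, g ~ N(0, I)
    t = rng.normal(scale=np.sqrt(sigma2), size=d)   # designed target, t ~ N(0, sigma2*I)
    y = (1 - lam) * g + lam * t                     # transmitted (encoded) gradient

    # Follower (learner): knows lam and the priors but not t itself, and applies
    # the linear MMSE decoder E[g | y] for this Gaussian model:
    #   g_hat = (1 - lam) / ((1 - lam)**2 + lam**2 * sigma2) * y
    k = (1 - lam) / ((1 - lam) ** 2 + lam ** 2 * sigma2)
    g_hat = k * y

    # Mismatched costs: the learner cares about ||g_hat - g||; the owner cares
    # about keeping the learner's view close to the designed target t.
    print("learner cost ||g_hat - g|| =", np.linalg.norm(g_hat - g))
    print("owner   cost ||g_hat - t|| =", np.linalg.norm(g_hat - t))

In this toy model, increasing lam (or the target variance sigma2) degrades the learner's recovery while pulling the received gradient toward the owner's designed target, mirroring the balance between model accuracy and local privacy that the abstract describes.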