Herd Accountability of Privacy-Preserving Algorithms: A Stackelberg Game Approach

IF 8.0 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, THEORY & METHODS · IEEE Transactions on Information Forensics and Security · Pub Date: 2025-02-10 · DOI: 10.1109/TIFS.2025.3540357
Ya-Ting Yang;Tao Zhang;Quanyan Zhu
Volume 20, pp. 2237-2251 (Journal Article)
Citations: 0

Abstract

AI-driven algorithmic systems are increasingly adopted across various sectors, yet the lack of transparency can raise accountability concerns about claimed privacy protection measures. While machine-based audits offer one avenue for addressing these issues, they are often costly and time-consuming. Herd audit, on the other hand, offers a promising alternative by leveraging collective intelligence from end-users. However, the presence of epistemic disparity among auditors, resulting in varying levels of domain expertise and access to relevant knowledge, captured by the rational inattention model, may impact audit assurance. An effective herd audit must establish a credible accountability threat for algorithm developers, incentivizing them not to breach user trust. In this work, our objective is to develop a systematic framework that explores the impact of herd audits on algorithm developers through the lens of the Stackelberg game. Our analysis reveals the importance of easy access to information and the appropriate design of rewards, as they increase the auditors’ assurance in the audit process. In this context, herd audit serves as a deterrent to negligent behavior. Therefore, by enhancing herd accountability, herd audit contributes to responsible algorithm development, fostering trust between users and algorithms.
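The deterrence logic the abstract describes can be illustrated with a minimal Stackelberg sketch. This is a hypothetical toy model, not the paper's actual formulation: the developer (leader) anticipates the auditor's (follower's) best response, where the auditor's costly attention stands in for the rational-inattention friction, and easier information access (lower attention cost) or better-designed rewards raise audit assurance enough to deter negligence.

```python
# Toy Stackelberg deterrence game (illustrative only; not the paper's model).
# Leader: algorithm developer chooses "comply" or "negligent".
# Follower: herd auditor chooses attention a in [0, 1]; attention is costly
# (rational-inattention flavor): cost = (c / 2) * a**2.
# If the developer is negligent, the auditor detects with probability a and
# earns reward r; detection imposes fine F on the developer.

def auditor_best_response(r, c):
    """Auditor maximizes r*a - (c/2)*a**2 over [0, 1], giving a* = min(r/c, 1)."""
    return min(r / c, 1.0)

def developer_choice(g, F, r, c):
    """Leader anticipates a* and deviates only if the gain beats the expected fine."""
    a_star = auditor_best_response(r, c)
    payoff_negligent = g - F * a_star   # gain from cutting corners minus expected fine
    payoff_comply = 0.0
    action = "negligent" if payoff_negligent > payoff_comply else "comply"
    return action, a_star

# Easy access to information (low c) and a well-designed reward (high r)
# raise a*, making the accountability threat credible.
print(developer_choice(g=1.0, F=5.0, r=0.2, c=1.0))   # high assurance -> comply
print(developer_choice(g=1.0, F=5.0, r=0.05, c=1.0))  # weak reward -> negligent
```

In this sketch, deterrence holds exactly when the deviation gain g falls below the expected fine F·min(r/c, 1), so raising the reward r or lowering the attention cost c widens the compliance region.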
Source Journal: IEEE Transactions on Information Forensics and Security (Engineering Technology: Electrical & Electronic Engineering)
CiteScore: 14.40
Self-citation rate: 7.40%
Articles published: 234
Review time: 6.5 months
Journal Introduction: The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.
Latest Articles from This Journal
- Urey-ML: A Machine Learning-based Distance Deception Attack against Apple UWB Interaction Frameworks
- DUAP: Disentanglement-based Universal Adversarial Perturbations for Robust Multilingual Speech Privacy Protection
- HIBPEKS: Hierarchical Identity-based Puncturable Encryption With Keyword Search Over Outsourced Encrypted Data
- Trust Under Siege: Label Spoofing Attacks Against Machine Learning for Android Malware Detection
- A Fine-Tuning Data Recovery Attack on Generative Language Models via Backdooring