Operationalizing Contextual Integrity in Privacy-Conscious Assistants

Sahra Ghalebikesabi, Eugene Bagdasaryan, Ren Yi, Itay Yona, Ilia Shumailov, Aneesh Pappu, Chongyang Shi, Laura Weidinger, Robert Stanforth, Leonard Berrada, Pushmeet Kohli, Po-Sen Huang, Borja Balle
{"title":"在具有隐私意识的助手中操作情境完整性","authors":"Sahra Ghalebikesabi, Eugene Bagdasaryan, Ren Yi, Itay Yona, Ilia Shumailov, Aneesh Pappu, Chongyang Shi, Laura Weidinger, Robert Stanforth, Leonard Berrada, Pushmeet Kohli, Po-Sen Huang, Borja Balle","doi":"arxiv-2408.02373","DOIUrl":null,"url":null,"abstract":"Advanced AI assistants combine frontier LLMs and tool access to autonomously\nperform complex tasks on behalf of users. While the helpfulness of such\nassistants can increase dramatically with access to user information including\nemails and documents, this raises privacy concerns about assistants sharing\ninappropriate information with third parties without user supervision. To steer\ninformation-sharing assistants to behave in accordance with privacy\nexpectations, we propose to operationalize $\\textit{contextual integrity}$\n(CI), a framework that equates privacy with the appropriate flow of information\nin a given context. In particular, we design and evaluate a number of\nstrategies to steer assistants' information-sharing actions to be CI compliant.\nOur evaluation is based on a novel form filling benchmark composed of synthetic\ndata and human annotations, and it reveals that prompting frontier LLMs to\nperform CI-based reasoning yields strong results.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"32 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Operationalizing Contextual Integrity in Privacy-Conscious Assistants\",\"authors\":\"Sahra Ghalebikesabi, Eugene Bagdasaryan, Ren Yi, Itay Yona, Ilia Shumailov, Aneesh Pappu, Chongyang Shi, Laura Weidinger, Robert Stanforth, Leonard Berrada, Pushmeet Kohli, Po-Sen Huang, Borja Balle\",\"doi\":\"arxiv-2408.02373\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Advanced AI assistants combine frontier LLMs and tool access to autonomously\\nperform complex tasks on behalf of users. While the helpfulness of such\\nassistants can increase dramatically with access to user information including\\nemails and documents, this raises privacy concerns about assistants sharing\\ninappropriate information with third parties without user supervision. To steer\\ninformation-sharing assistants to behave in accordance with privacy\\nexpectations, we propose to operationalize $\\\\textit{contextual integrity}$\\n(CI), a framework that equates privacy with the appropriate flow of information\\nin a given context. 
In particular, we design and evaluate a number of\\nstrategies to steer assistants' information-sharing actions to be CI compliant.\\nOur evaluation is based on a novel form filling benchmark composed of synthetic\\ndata and human annotations, and it reveals that prompting frontier LLMs to\\nperform CI-based reasoning yields strong results.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":\"32 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.02373\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.02373","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Advanced AI assistants combine frontier LLMs and tool access to autonomously perform complex tasks on behalf of users. While the helpfulness of such assistants can increase dramatically with access to user information including emails and documents, this raises privacy concerns about assistants sharing inappropriate information with third parties without user supervision. To steer information-sharing assistants to behave in accordance with privacy expectations, we propose to operationalize contextual integrity (CI), a framework that equates privacy with the appropriate flow of information in a given context. In particular, we design and evaluate a number of strategies to steer assistants' information-sharing actions to be CI compliant. Our evaluation is based on a novel form-filling benchmark composed of synthetic data and human annotations, and it reveals that prompting frontier LLMs to perform CI-based reasoning yields strong results.
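
To make the CI framing concrete, the sketch below shows one plausible way to prompt an LLM for CI-based reasoning over a single form field, in the spirit of the strategies the abstract describes. It is an illustration, not the paper's method: the prompt wording, the share_decision helper, and the call_llm callable are all assumptions standing in for whatever model API is in use.

```python
# Minimal, hypothetical sketch of CI-based reasoning for form filling.
# The prompt text, `share_decision`, and `call_llm` are illustrative
# assumptions; the paper's actual prompts and benchmark are not shown here.

CI_PROMPT = """You are a privacy-conscious assistant filling a form on the
user's behalf. Contextual integrity (CI) holds that an information flow is
appropriate only if it conforms to the norms of its context. For the flow
below, identify the CI parameters (sender, recipient, subject, information
type, transmission principle) and judge whether sharing is appropriate.

Context: {context}
Requested field: {field}
Candidate value from user data: {value}

Reply APPROPRIATE or INAPPROPRIATE, then a one-sentence justification."""


def share_decision(context: str, field: str, value: str, call_llm) -> bool:
    """Fill the field only if the model judges the flow CI-appropriate."""
    reply = call_llm(CI_PROMPT.format(context=context, field=field, value=value))
    return reply.strip().upper().startswith("APPROPRIATE")


# Example: a medical-intake form may appropriately receive a diagnosis,
# while a newsletter signup form generally should not:
#   share_decision("newsletter signup", "medical history", "asthma", call_llm)
```

Gating each field separately reflects the form-filling setting, where every candidate flow of user data is judged against the norms of the surrounding context rather than shared wholesale.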