Macro-level Inference in Collaborative Learning

Rudolf Mayer, Andreas Ekelhart
{"title":"Macro-level Inference in Collaborative Learning","authors":"Rudolf Mayer, Andreas Ekelhart","doi":"10.1145/3508398.3519361","DOIUrl":null,"url":null,"abstract":"With increasing data collection, also efforts to extract the underlying knowledge increase. Among these, collaborative learning efforts become more important, where multiple organisations want to jointly learn a common predictive model, e.g. to detect anomalies or learn how to improve a production process. Instead of learning only from their own data, a collaborative approach enables the participants to learn a more generalising model, also capable to predict settings not yet encountered by their own organisation, but some of the others. However, in many cases, the participants would not want to directly share and disclose their data, for regulatory reasons, or because the data constitute a business asset. Approaches such as federated learning allow to train a collaborative model without exposing the data itself. However, federated learning still requires exchanging intermediate models from each participant. Information that can be inferred from these models is thus a concern. Threats to individual data points and defences have been studied e.g. in membership inference attacks. However, we argue that in many use cases, also global properties are of interest -- not only to outsiders, but specifically also to the other participants, which might be competitors. In a production process, e.g. 
knowing which types of steps a company performs frequently, or obtaining information on quantities of a specific product or material a company processes, could reveal business secrets, without needing to know details of individual data points.","PeriodicalId":102306,"journal":{"name":"Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Twelfth ACM Conference on Data and Application Security and Privacy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3508398.3519361","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

With increasing data collection, efforts to extract the underlying knowledge also increase. Among these, collaborative learning is becoming more important: multiple organisations want to jointly learn a common predictive model, e.g. to detect anomalies or to improve a production process. Instead of learning only from their own data, a collaborative approach enables each participant to obtain a more generalising model, capable of predicting settings not yet encountered by their own organisation but already seen by some of the others. However, in many cases the participants do not want to directly share and disclose their data, for regulatory reasons or because the data constitute a business asset. Approaches such as federated learning allow training a collaborative model without exposing the data itself. However, federated learning still requires exchanging intermediate models from each participant, so information that can be inferred from these models is a concern. Threats to individual data points, and defences against them, have been studied e.g. in membership inference attacks. However, we argue that in many use cases global properties are also of interest, not only to outsiders, but specifically also to the other participants, who might be competitors. In a production process, e.g., knowing which types of steps a company performs frequently, or obtaining information on the quantities of a specific product or material a company processes, could reveal business secrets without requiring knowledge of individual data points.
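The risk the abstract describes can be illustrated with a minimal sketch. It is not the authors' protocol or experiment; all names, the logistic-regression setup, and the two synthetic participants are illustrative assumptions. The point it shows: with a shared initial model, the gradient update each participant sends already encodes a global property of its data (here, the fraction of positive labels), without exposing any individual record.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_party(n, pos_frac):
    """Synthetic participant whose labels are positive with probability pos_frac."""
    y = (rng.random(n) < pos_frac).astype(float)
    X = rng.normal(size=(n, 2))
    X = np.hstack([np.ones((n, 1)), X])  # column 0 is a bias/intercept feature
    return X, y

def local_grad(w, X, y):
    """What a participant shares in one round: its logistic-regression gradient."""
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (preds - y) / len(y)

# Two participants with very different label distributions (a macro-level property).
X_a, y_a = make_party(200, 0.9)   # party A: mostly positives
X_b, y_b = make_party(200, 0.2)   # party B: mostly negatives

w0 = np.zeros(3)                  # shared initial model at round 1
g_a = local_grad(w0, X_a, y_a)    # the intermediate updates an adversary can observe
g_b = local_grad(w0, X_b, y_b)

# Macro-level inference: at w = 0 every prediction is exactly 0.5, so the bias
# component of the gradient equals 0.5 - (fraction of positive labels).
est_frac_a = 0.5 - g_a[0]
est_frac_b = 0.5 - g_b[0]
```

Here the observer recovers each party's class proportion exactly from a single exchanged update. Mapped to the paper's production-process example, the "positive fraction" could stand for how often a company performs a certain process step, which is precisely the kind of business-relevant global property at stake.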