Reaching Data Confidentiality and Model Accountability on the CalTrain

Zhongshu Gu, H. Jamjoom, D. Su, Heqing Huang, Jialong Zhang, Tengfei Ma, D. Pendarakis, Ian Molloy
{"title":"Reaching Data Confidentiality and Model Accountability on the CalTrain","authors":"Zhongshu Gu, H. Jamjoom, D. Su, Heqing Huang, Jialong Zhang, Tengfei Ma, D. Pendarakis, Ian Molloy","doi":"10.1109/DSN.2019.00044","DOIUrl":null,"url":null,"abstract":"Distributed collaborative learning (DCL) paradigms enable building joint machine learning models from distrusted multi-party participants. Data confidentiality is guaranteed by retaining private training data on each participant's local infrastructure. However, this approach makes today's DCL design fundamentally vulnerable to data poisoning and backdoor attacks. It limits DCL's model accountability, which is key to backtracking problematic training data instances and their responsible contributors. In this paper, we introduce CALTRAIN, a centralized collaborative learning system that simultaneously achieves data confidentiality and model accountability. CALTRAIN enforces isolated computation via secure enclaves on centrally aggregated training data to guarantee data confidentiality. To support building accountable learning models, we securely maintain the links between training instances and their contributors. Our evaluation shows that the models generated by CALTRAIN can achieve the same prediction accuracy when compared to the models trained in non-protected environments. We also demonstrate that when malicious training participants tend to implant backdoors during model training, CALTRAIN can accurately and precisely discover the poisoned or mislabeled training data that lead to the runtime mispredictions.","PeriodicalId":271955,"journal":{"name":"2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSN.2019.00044","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 10

Abstract

Distributed collaborative learning (DCL) paradigms enable building joint machine learning models from mutually distrusting multi-party participants. Data confidentiality is guaranteed by retaining private training data on each participant's local infrastructure. However, this approach makes today's DCL designs fundamentally vulnerable to data poisoning and backdoor attacks. It also limits DCL's model accountability, which is key to tracing problematic training data instances back to their responsible contributors. In this paper, we introduce CALTRAIN, a centralized collaborative learning system that simultaneously achieves data confidentiality and model accountability. CALTRAIN enforces isolated computation via secure enclaves on centrally aggregated training data to guarantee data confidentiality. To support building accountable learning models, we securely maintain the links between training instances and their contributors. Our evaluation shows that models generated by CALTRAIN achieve the same prediction accuracy as models trained in non-protected environments. We also demonstrate that when malicious training participants attempt to implant backdoors during model training, CALTRAIN can accurately and precisely discover the poisoned or mislabeled training data that lead to runtime mispredictions.
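The accountability property hinges on that instance-to-contributor linkage. The sketch below is a minimal illustration of the idea, not the paper's implementation: it assumes a hypothetical ProvenanceLedger that maps a keyed fingerprint of each centrally aggregated training instance to its contributor, so that an instance later flagged as poisoned or mislabeled can be traced back to the party that submitted it. In CALTRAIN this bookkeeping would live inside the secure enclave; here it is plain Python purely to show the data flow.

```python
# Illustrative sketch only; all names (ProvenanceLedger, record, trace) are
# hypothetical and not taken from the CALTRAIN paper.
import hashlib
import hmac


class ProvenanceLedger:
    """Maps a keyed fingerprint of each training instance to its contributor."""

    def __init__(self, secret_key: bytes):
        self._key = secret_key            # assumed sealed inside the enclave
        self._ledger: dict[str, str] = {}

    def _fingerprint(self, instance: bytes) -> str:
        # Keyed hash (HMAC) so contributors cannot forge or enumerate
        # fingerprints without the enclave-held key.
        return hmac.new(self._key, instance, hashlib.sha256).hexdigest()

    def record(self, instance: bytes, contributor_id: str) -> None:
        # Called once per instance when training data is centrally aggregated.
        self._ledger[self._fingerprint(instance)] = contributor_id

    def trace(self, suspicious_instance: bytes) -> str | None:
        # Given a training instance implicated in a runtime misprediction,
        # return the contributor who submitted it (None if unknown).
        return self._ledger.get(self._fingerprint(suspicious_instance))


# Usage: aggregate data, record provenance, later trace a flagged instance.
ledger = ProvenanceLedger(secret_key=b"enclave-sealed-key")
ledger.record(b"image-bytes-with-backdoor-trigger", contributor_id="party-7")
print(ledger.trace(b"image-bytes-with-backdoor-trigger"))  # -> "party-7"
```

Using a keyed rather than a plain hash is one plausible design choice here: it keeps the ledger useful for tracing inside the trusted boundary while preventing participants outside it from testing whether a given instance was contributed by someone else.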