Securing Classifiers Against Both White-Box and Black-Box Attacks using Encrypted-Input Obfuscation

G. D. Crescenzo, B. Coan, L. Bahler, K. Rohloff, Y. Polyakov, D. Cousins
DOI: 10.1145/3411495.3421369
Published in: Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop, 2020-11-09
Citations: 0

Abstract

Machine Learning as a Service (MLaaS) and Smart Grid as a Service (SGaaS) are expected to grow at a significant rate. Like most cloud services, MLaaS and SGaaS can be subject to a number of attacks. In this paper, we focus on white-box attacks (informally, attacks that try to access some or all of the internal data or computation used by the service program) and black-box attacks (informally, attacks that only use input-output access to the attacked service program). We consider a participant model including a setup manager, a cloud server, and one or more data producers. The cloud server runs a machine learning classifier trained on a dataset provided by the setup manager and classifies new input data provided by the data producers. Applications include analytics over data received by distributed sensors, as in a typical SGaaS environment.

We propose a new security notion, encrypted-input classifier obfuscation: a set of algorithms that, in the above participant and algorithm model, aims to protect the cloud server's classifier program from both white-box and black-box attacks. This notion builds on cryptographic obfuscation of programs [1], cryptographic obfuscation of classifiers [2], and encrypted-input obfuscation of programs [3]. We model a classifier as a pair of programs: a training program that, on input a dataset and secret data values, returns classification parameters; and a classification program that, on input classification parameters and a new input data value, returns a classification result. A typical execution goes as follows. During obfuscation generation, the setup manager randomly chooses a key k, sends a k-based obfuscation of the classifier to the cloud server, and sends to the data producers either k or information needed to generate k-based input data encryptions. During obfuscation evaluation, the data producers send k-based encryptions of their input data to the cloud server, which evaluates the obfuscated classifier over the encrypted inputs. The goal is to protect the confidentiality of the dataset, the secret data, and the classification parameters.

One can obtain a general-purpose encrypted-input classifier obfuscator in two steps: 1) transform a suitable composition of the training and classification algorithms into a single Boolean circuit; 2) apply to this circuit the result from [3] that a modification of Yao's protocol [4] is an encrypted-input obfuscation of the gate values in any polynomial-size Boolean circuit. This result is of only theoretical relevance. Towards a practically efficient obfuscation of specific classifiers, we note that techniques from [3] can be used to produce an obfuscator for decision trees. Moreover, in recent work we have produced an obfuscator for image matching (i.e., matching an input image against a secret image).
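To make the two-phase flow concrete, below is a minimal, deliberately insecure sketch of the participant model in Python. The threshold classifier, the additive mask standing in for the k-based obfuscation, and all class names are hypothetical illustrations of the generation and evaluation phases only, not the cryptographic construction from the paper.

```python
# Toy sketch of the three-party model: a setup manager, a cloud server,
# and a data producer. The "classifier" is a threshold rule (output 1
# iff x > t), and the "obfuscation" is a shared additive mask. This
# illustrates the message flow only; it provides no real security.
import secrets

class SetupManager:
    def __init__(self, threshold: int):
        self.k = secrets.randbelow(2**32)  # random key k
        self.threshold = threshold         # secret classification parameter

    def obfuscated_classifier(self) -> int:
        # k-based obfuscation of the classification parameter:
        # the server sees only the masked threshold t + k.
        return self.threshold + self.k

    def producer_key(self) -> int:
        # information the data producers need to encrypt inputs
        return self.k

class CloudServer:
    def __init__(self, masked_threshold: int):
        self.masked_threshold = masked_threshold

    def classify(self, encrypted_x: int) -> int:
        # Comparison is preserved under a shared additive mask:
        # (x + k) > (t + k)  iff  x > t.
        return 1 if encrypted_x > self.masked_threshold else 0

class DataProducer:
    def __init__(self, k: int):
        self.k = k

    def encrypt(self, x: int) -> int:
        # k-based input data encryption
        return x + self.k

# Obfuscation generation
manager = SetupManager(threshold=50)
server = CloudServer(manager.obfuscated_classifier())
producer = DataProducer(manager.producer_key())

# Obfuscation evaluation
print(server.classify(producer.encrypt(70)))  # 70 > 50 -> prints 1
print(server.classify(producer.encrypt(30)))  # 30 < 50 -> prints 0
```

The server never sees the threshold or the inputs in the clear, which mirrors the stated confidentiality goal; a real instantiation would replace the additive mask with the garbled-circuit-based construction of [3].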