GDALR: An Efficient Model Duplication Attack on Black Box Machine Learning Models

Nikhil Joshi, Rewanth Tammana
{"title":"GDALR: An Efficient Model Duplication Attack on Black Box Machine Learning Models","authors":"Nikhil Joshi, Rewanth Tammana","doi":"10.1109/ICSCAN.2019.8878726","DOIUrl":null,"url":null,"abstract":"Trained Machine learning models are core components of proprietary products. Business models are entirely built around these ML powered products. Such products are either delivered as a software package (containing the trained model) or they are deployed on cloud with restricted API access for prediction. In ML-as-a-service, users are charged per-query or per-hour basis, generating revenue for businesses. Models deployed on cloud could be vulnerable to Model Duplication attacks. Researchers found ways to exploit these services and clone the functionalities of black box models hidden in the cloud by continuously querying the provided APIs. After successful execution of attack, the attacker does not require to pay the cloud service provider. Worst case scenario, attackers can also sell the cloned model or use them in their business model.Traditionally attackers use convex optimization algorithm like Gradient Descent with appropriate hyper-parameters to train their models. In our research we propose a modification to traditional approach called as GDALR (Gradient Driven Adaptive Learning Rate) that dynamically updates the learning rate based on the gradient values. This results in stealing the target model in comparatively less number of epochs, decreasing the time and cost, hence increasing the efficiency of the attack. This shows that sophisticated attacks can be launched for stealing the black box machine learning models which increases risk for MLaaS based businesses.","PeriodicalId":363880,"journal":{"name":"2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSCAN.2019.8878726","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Trained machine learning models are core components of proprietary products, and entire business models are built around these ML-powered products. Such products are either delivered as a software package (containing the trained model) or deployed on the cloud with restricted API access for prediction. In ML-as-a-service (MLaaS), users are charged on a per-query or per-hour basis, generating revenue for the business. Models deployed on the cloud can be vulnerable to model duplication attacks: researchers have found ways to exploit these services and clone the functionality of black-box models hidden in the cloud by repeatedly querying the provided APIs. After a successful attack, the attacker no longer needs to pay the cloud service provider; in the worst case, attackers can even sell the cloned model or use it in their own business. Traditionally, attackers train their substitute models with a convex optimization algorithm such as gradient descent, using appropriate hyper-parameters. In our research we propose a modification to this traditional approach, called GDALR (Gradient Driven Adaptive Learning Rate), which dynamically updates the learning rate based on the gradient values. This steals the target model in comparatively fewer epochs, decreasing the time and cost of the attack and hence increasing its efficiency. It shows that sophisticated attacks can be launched to steal black-box machine learning models, increasing the risk for MLaaS-based businesses.
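The abstract does not specify GDALR's exact update rule, so the following is only a minimal illustrative sketch of the attack setting it describes: an attacker labels self-chosen inputs by querying a prediction API, then trains a substitute model whose learning rate is scaled by the gradient norm. The names (query_black_box, gdalr_clone), the hidden linear victim, and the specific gradient-norm rule are all assumptions for illustration, not the authors' implementation.

```python
# Sketch of a model duplication attack with a gradient-driven learning rate.
# Assumptions: the victim is a hidden linear classifier reachable only through
# a prediction API, and GDALR is approximated by lr = base_lr * (1 + ||grad||).
import numpy as np

def query_black_box(X):
    # Stand-in for the paid MLaaS prediction API; the attacker never sees
    # secret_w, only the returned labels.
    secret_w = np.array([1.5, -2.0, 0.5])
    return (X @ secret_w > 0).astype(float)

def gdalr_clone(X, y, epochs=200, base_lr=0.1):
    """Train a logistic-regression substitute on API-labelled data,
    scaling the step size with the gradient norm (assumed GDALR rule)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))           # substitute's predictions
        grad = X.T @ (p - y) / len(y)                # cross-entropy gradient
        lr = base_lr * (1.0 + np.linalg.norm(grad))  # gradient-driven LR
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # attacker-chosen query inputs
y = query_black_box(X)           # labels bought from the victim API
w_clone = gdalr_clone(X, y)

# Agreement between the clone and the victim on the queried inputs.
clone_pred = 1.0 / (1.0 + np.exp(-(X @ w_clone))) > 0.5
print(f"clone agrees with victim on {np.mean(clone_pred == y):.1%} of queries")
```

The point of scaling the learning rate with the gradient is that large early gradients produce large steps, so the substitute converges in fewer epochs and the attacker issues fewer paid queries; any monotone function of the gradient magnitude could play the same role in this sketch.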