Stochastic Coordinate Descent for 01 Loss and Its Sensitivity to Adversarial Attacks

Meiyan Xie, Yunzhe Xue, Usman Roshan
{"title":"Stochastic Coordinate Descent for 01 Loss and Its Sensitivity to Adversarial Attacks","authors":"Meiyan Xie, Yunzhe Xue, Usman Roshan","doi":"10.1109/ICMLA.2019.00056","DOIUrl":null,"url":null,"abstract":"The 01 loss while hard to optimize is least sensitive to outliers compared to its continuous differentiable counterparts, namely hinge and logistic loss. Recently the 01 loss has been shown to be most robust compared to surrogate losses against corrupted labels which can be interpreted as adversarial attacks. Here we propose a stochastic coordinate descent heuristic for linear 01 loss classification. We implement and study our heuristic on real datasets from the UCI machine learning archive and find our method to be comparable to the support vector machine in accuracy and tractable in training time. We conjecture that the 01 loss may be harder to attack in a black box setting due to its non-continuity and infinite solution space. We train our linear classifier in a one-vs-one multi-class strategy on CIFAR10 and STL10 image benchmark datasets. In both cases we find our classifier to have the same accuracy as the linear support vector machine but more resilient to black box attacks. On CIFAR10 the linear support vector machine has 0% on adversarial examples while the 01 loss classifier hovers about 10%. On STL10 the linear support vector machine has 0% accuracy whereas 01 loss is at 10%. Our work here suggests that 01 loss may be more resilient to adversarial attacks than the hinge loss and further work is required.","PeriodicalId":436714,"journal":{"name":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMLA.2019.00056","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

The 01 loss, while hard to optimize, is less sensitive to outliers than its continuous, differentiable counterparts, namely the hinge and logistic losses. Recently the 01 loss has also been shown to be the most robust among these losses against corrupted labels, which can be interpreted as adversarial attacks. Here we propose a stochastic coordinate descent heuristic for linear 01 loss classification. We implement and study our heuristic on real datasets from the UCI machine learning archive and find our method comparable to the support vector machine in accuracy and tractable in training time. We conjecture that the 01 loss may be harder to attack in a black-box setting due to its non-continuity and infinite solution space. We train our linear classifier with a one-vs-one multi-class strategy on the CIFAR10 and STL10 image benchmark datasets. In both cases we find our classifier has the same accuracy as the linear support vector machine but is more resilient to black-box attacks. On CIFAR10 the linear support vector machine has 0% accuracy on adversarial examples while the 01 loss classifier hovers around 10%. On STL10 the linear support vector machine likewise has 0% accuracy whereas the 01 loss classifier is at 10%. Our work suggests that the 01 loss may be more resilient to adversarial attacks than the hinge loss, and further work is required.
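The core idea is a coordinate-wise search that directly minimizes the number of misclassifications of a linear rule sign(w·x + b). The sketch below is a minimal illustration of that idea and not the authors' exact algorithm: it repeatedly picks a random coordinate of w (or the bias), enumerates the candidate values induced by the training points along that coordinate, and keeps a change only if it lowers the 01 loss. The function names (scd_01_loss, zero_one_loss) and all hyperparameters are assumptions made for illustration.

```python
import numpy as np

def zero_one_loss(w, b, X, y):
    """Number of points misclassified by the linear rule sign(Xw + b); y in {-1, +1}."""
    return np.sum(y * (X @ w + b) <= 0)

def scd_01_loss(X, y, n_iters=1000, seed=0):
    """Minimal stochastic coordinate descent sketch for linear 01 loss (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.standard_normal(d)
    b = 0.0
    best = zero_one_loss(w, b, X, y)
    for _ in range(n_iters):
        j = rng.integers(d + 1)              # coordinate to update; j == d means the bias
        if j < d:
            # decision values with coordinate j removed
            r = X @ w + b - X[:, j] * w[j]
            xj = X[:, j]
            nz = np.abs(xj) > 1e-12
            if not nz.any():
                continue
            # the 01 loss only changes at the thresholds -r_i / x_ij, so try midpoints
            thresh = np.sort(-r[nz] / xj[nz])
            cands = np.concatenate(([thresh[0] - 1.0],
                                    (thresh[:-1] + thresh[1:]) / 2,
                                    [thresh[-1] + 1.0]))
            losses = [np.sum(y * (r + xj * c) <= 0) for c in cands]
            k = int(np.argmin(losses))
            if losses[k] < best:
                w[j], best = cands[k], losses[k]
        else:
            # same idea for the bias: thresholds are -(Xw)_i
            r = X @ w
            thresh = np.sort(-r)
            cands = np.concatenate(([thresh[0] - 1.0],
                                    (thresh[:-1] + thresh[1:]) / 2,
                                    [thresh[-1] + 1.0]))
            losses = [np.sum(y * (r + c) <= 0) for c in cands]
            k = int(np.argmin(losses))
            if losses[k] < best:
                b, best = cands[k], losses[k]
    return w, b

# Toy usage on synthetic data with labels in {-1, +1}
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.2)
    w, b = scd_01_loss(X, y)
    print("training 01 loss:", zero_one_loss(w, b, X, y), "of", len(y))
```

Because the 01 loss is piecewise constant in each coordinate, only the data-induced thresholds can change its value, which is why the search evaluates midpoints between consecutive sorted thresholds rather than a continuous line search. Each coordinate step costs roughly O(n^2) in this naive form; the paper's heuristic is aimed at keeping training tractable on real datasets.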