Practical Convex Formulations of One-hidden-layer Neural Network Adversarial Training

Yatong Bai, Tanmay Gautam, Yujie Gai, S. Sojoudi
{"title":"Practical Convex Formulations of One-hidden-layer Neural Network Adversarial Training","authors":"Yatong Bai, Tanmay Gautam, Yujie Gai, S. Sojoudi","doi":"10.23919/ACC53348.2022.9867244","DOIUrl":null,"url":null,"abstract":"As neural networks become more prevalent in safety-critical systems, ensuring their robustness against adversaries becomes essential. \"Adversarial training\" is one of the most common methods for training robust networks. Current adversarial training algorithms solve highly non-convex bi-level optimization problems. These algorithms suffer from the lack of convergence guarantees and can exhibit unstable behaviors. A recent work has shown that the standard training formulation of a one-hidden-layer, scalar-output fully-connected neural network with rectified linear unit (ReLU) activations can be reformulated as a finite-dimensional convex program, addressing the aforementioned issues for training non-robust networks. In this paper, we leverage this \"convex training\" framework to tackle the problem of adversarial training. Unfortunately, the scale of the convex training program proposed in the literature grows exponentially in the data size. We prove that a stochastic approximation procedure that scales linearly yields high-quality solutions. With the complexity roadblock removed, we derive convex optimization models that train robust neural networks. Our convex methods provably produce an upper bound on the global optimum of the adversarial training objective and can be applied to binary classification and regression. We demonstrate in experiments that the proposed method achieves a superior robustness compared with the existing methods.","PeriodicalId":366299,"journal":{"name":"2022 American Control Conference (ACC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 American Control Conference (ACC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/ACC53348.2022.9867244","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

As neural networks become more prevalent in safety-critical systems, ensuring their robustness against adversaries becomes essential. "Adversarial training" is one of the most common methods for training robust networks. Current adversarial training algorithms solve highly non-convex bi-level optimization problems. These algorithms lack convergence guarantees and can exhibit unstable behavior. Recent work has shown that the standard training formulation of a one-hidden-layer, scalar-output fully-connected neural network with rectified linear unit (ReLU) activations can be reformulated as a finite-dimensional convex program, addressing the aforementioned issues for training non-robust networks. In this paper, we leverage this "convex training" framework to tackle the problem of adversarial training. Unfortunately, the scale of the convex training program proposed in the literature grows exponentially in the data size. We prove that a stochastic approximation procedure that scales linearly yields high-quality solutions. With the complexity roadblock removed, we derive convex optimization models that train robust neural networks. Our convex methods provably produce an upper bound on the global optimum of the adversarial training objective and can be applied to binary classification and regression. We demonstrate in experiments that the proposed method achieves superior robustness compared with existing methods.
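To make the "convex training" idea referenced above concrete, the following is a minimal illustrative sketch (not the authors' code) of the standard, non-robust convex training program for a one-hidden-layer, scalar-output ReLU network, using the stochastic approximation the abstract describes: a tractable number of ReLU activation patterns is sampled instead of enumerating every hyperplane arrangement, whose count grows exponentially in the data size. The function name, parameters, squared loss, and cvxpy usage are assumptions for illustration; the paper's adversarial (robust) formulations add robustness constraints that are not shown here.

```python
# Minimal sketch (assumptions: squared loss, group-l2 regularization, cvxpy solver).
# Not the authors' implementation; it only illustrates the sampled-pattern convex program.
import numpy as np
import cvxpy as cp

def convex_relu_training(X, y, num_patterns=50, beta=1e-3, seed=0):
    """Convex training of a one-hidden-layer, scalar-output ReLU network,
    approximated by sampling ReLU activation patterns rather than enumerating
    all hyperplane arrangements."""
    n, d = X.shape
    rng = np.random.default_rng(seed)

    # Sample activation patterns D_i = 1[X u_i >= 0] from random directions u_i.
    U = rng.standard_normal((d, num_patterns))
    patterns = np.unique((X @ U >= 0).astype(float), axis=1)  # shape (n, P)
    P = patterns.shape[1]

    V = cp.Variable((d, P))  # weights of the "positive" branch for each pattern
    W = cp.Variable((d, P))  # weights of the "negative" branch for each pattern

    preds = 0
    constraints = []
    for i in range(P):
        Di = patterns[:, i]  # diagonal of D_i as a 0/1 vector
        preds = preds + cp.multiply(Di, X @ (V[:, i] - W[:, i]))
        # Cone constraints (2 D_i - I) X v_i >= 0 and (2 D_i - I) X w_i >= 0
        sign = 2.0 * Di - 1.0
        constraints += [cp.multiply(sign, X @ V[:, i]) >= 0,
                        cp.multiply(sign, X @ W[:, i]) >= 0]

    # Group-l2 regularization plays the role of weight decay in the original network.
    reg = cp.sum(cp.norm(V, 2, axis=0)) + cp.sum(cp.norm(W, 2, axis=0))
    problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(preds - y) + beta * reg),
                         constraints)
    problem.solve()
    return V.value, W.value, problem.value


# Hypothetical usage on synthetic regression data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((40, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
    V, W, obj = convex_relu_training(X, y, num_patterns=30)
    print("optimal objective:", obj)
```

The sampling step is what the abstract's linear-scaling claim refers to: only a modest number of activation patterns enter the convex program, in place of the exponentially many arrangements required for an exact reformulation.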