Theory II: Deep learning and optimization

Bulletin of the Polish Academy of Sciences: Technical Sciences · IF 1.2 · JCR Q3 (Engineering, Multidisciplinary) · Pub Date: 2023-11-06 · DOI: 10.24425/BPAS.2018.125925
T. Poggio, Q. Liao
Citations: 8

Abstract

Bull. Pol. Ac.: Tech. 66(6) 2018. The landscape of the empirical risk of overparametrized deep convolutional neural networks (DCNNs) is characterized with a mix of theory and experiments. In part A we show the existence of a large number of global minimizers with zero empirical error (modulo inconsistent equations). The argument, which relies on Bézout's theorem, is rigorous when the ReLUs are replaced by a polynomial nonlinearity; we show with simulations that the corresponding polynomial network is indistinguishable from the ReLU network. By Bézout's theorem, the global minimizers are degenerate, unlike the local minima, which in general should be non-degenerate. Further, we experimentally analyze and visualize the landscape of the empirical risk of DCNNs on the CIFAR-10 dataset. Based on these theoretical and experimental observations, we propose a simple model of the landscape of the empirical risk. In part B, we characterize the optimization properties of stochastic gradient descent (SGD) applied to deep networks. The main claim consists of theoretical and experimental evidence for the following property of SGD: like the classical Langevin equation, SGD concentrates in probability on large-volume, "flat" minima, selecting with high probability degenerate minimizers, which are typically global minimizers.
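The part-B claim — that SGD, like classical Langevin dynamics, concentrates in probability on large-volume "flat" minima — can be illustrated on a toy one-dimensional loss. The sketch below is not the paper's experiment; the loss function, temperature, and step count are illustrative choices. Noisy gradient descent (overdamped Langevin dynamics) started from uniformly random points ends far more often in a wide basin than in an equally deep but sharp one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D loss: a sharp minimum near x = -1 and a wide ("flat") minimum
# near x = +1, of roughly equal depth, plus a weak quartic confining term.
def loss(x):
    return (-np.exp(-(x + 1) ** 2 / 0.05)
            - np.exp(-(x - 1) ** 2 / 1.0)
            + 0.05 * x ** 4)

def grad(x):
    return ((2 * (x + 1) / 0.05) * np.exp(-(x + 1) ** 2 / 0.05)
            + (2 * (x - 1) / 1.0) * np.exp(-(x - 1) ** 2 / 1.0)
            + 0.2 * x ** 3)

# Overdamped Langevin dynamics: gradient descent plus Gaussian noise,
# run as many independent chains in parallel.
lr, temperature, steps, chains = 0.01, 0.25, 20000, 400
x = rng.uniform(-2.0, 2.0, size=chains)   # random initializations
for _ in range(steps):
    noise = rng.standard_normal(chains)
    x = x - lr * grad(x) + np.sqrt(2 * lr * temperature) * noise

# Fraction of chains that ended in the wide ("flat") basin around x = +1.
flat_fraction = float(np.mean(x > 0.0))
print(f"fraction in the wide (flat) basin: {flat_fraction:.2f}")
```

The mechanism is that the stationary distribution of Langevin dynamics is the Gibbs measure, proportional to exp(-loss/T), so the probability mass of a basin scales with its width (volume), not only its depth — which is why most chains settle in the flat minimum even though the sharp one is just as deep.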
Source journal: Bulletin of the Polish Academy of Sciences-Technical Sciences
CiteScore: 2.80
Self-citation rate: 16.70%
Review time: 6-12 weeks
Journal description: The Bulletin of the Polish Academy of Sciences: Technical Sciences has been published bimonthly by Division IV: Engineering Sciences of the Polish Academy of Sciences since the founding of the PAS in 1952. The journal is peer-reviewed and is published in both printed and electronic form. It publishes original, high-quality papers across the multidisciplinary engineering sciences, with the following topics preferred: Artificial and Computational Intelligence; Biomedical Engineering and Biotechnology; Civil Engineering; Control, Informatics and Robotics; Electronics, Telecommunication and Optoelectronics; Mechanical and Aeronautical Engineering; Thermodynamics; Material Science and Nanotechnology; Power Systems and Power Electronics.