A deterministic gradient-based approach to avoid saddle points

IF 2.3 · CAS Tier 4 (Mathematics) · JCR Q1 (Mathematics, Applied) · European Journal of Applied Mathematics · Pub Date: 2022-11-09 · DOI: 10.1017/s0956792522000316
L. M. Kreusser, S. J. Osher, B. Wang
{"title":"基于确定性梯度的避鞍点方法","authors":"L. M. Kreusser, S. J. Osher, B. Wang","doi":"10.1017/s0956792522000316","DOIUrl":null,"url":null,"abstract":"<p>Loss functions with a large number of saddle points are one of the major obstacles for training modern machine learning (ML) models efficiently. First-order methods such as gradient descent (GD) are usually the methods of choice for training ML models. However, these methods converge to saddle points for certain choices of initial guesses. In this paper, we propose a modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317], called modified LSGD (mLSGD), and demonstrate its potential to avoid saddle points without sacrificing the convergence rate. Our analysis is based on the attraction region, formed by all starting points for which the considered numerical scheme converges to a saddle point. We investigate the attraction region’s dimension both analytically and numerically. For a canonical class of quadratic functions, we show that the dimension of the attraction region for mLSGD is <span>\n<span>\n<img data-mimesubtype=\"png\" data-type=\"\" src=\"https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20230705121909977-0335:S0956792522000316:S0956792522000316_inline1.png\"/>\n<span data-mathjax-type=\"texmath\"><span>\n$\\lfloor (n-1)/2\\rfloor$\n</span></span>\n</span>\n</span>, and hence it is significantly smaller than that of GD whose dimension is <span>\n<span>\n<img data-mimesubtype=\"png\" data-type=\"\" src=\"https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20230705121909977-0335:S0956792522000316:S0956792522000316_inline2.png\"/>\n<span data-mathjax-type=\"texmath\"><span>\n$n-1$\n</span></span>\n</span>\n</span>.</p>","PeriodicalId":51046,"journal":{"name":"European Journal of Applied Mathematics","volume":"22 1","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2022-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A deterministic gradient-based approach to avoid saddle points\",\"authors\":\"L. M. Kreusser, S. J. Osher, B. Wang\",\"doi\":\"10.1017/s0956792522000316\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Loss functions with a large number of saddle points are one of the major obstacles for training modern machine learning (ML) models efficiently. First-order methods such as gradient descent (GD) are usually the methods of choice for training ML models. However, these methods converge to saddle points for certain choices of initial guesses. In this paper, we propose a modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317], called modified LSGD (mLSGD), and demonstrate its potential to avoid saddle points without sacrificing the convergence rate. Our analysis is based on the attraction region, formed by all starting points for which the considered numerical scheme converges to a saddle point. We investigate the attraction region’s dimension both analytically and numerically. 
For a canonical class of quadratic functions, we show that the dimension of the attraction region for mLSGD is <span>\\n<span>\\n<img data-mimesubtype=\\\"png\\\" data-type=\\\"\\\" src=\\\"https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20230705121909977-0335:S0956792522000316:S0956792522000316_inline1.png\\\"/>\\n<span data-mathjax-type=\\\"texmath\\\"><span>\\n$\\\\lfloor (n-1)/2\\\\rfloor$\\n</span></span>\\n</span>\\n</span>, and hence it is significantly smaller than that of GD whose dimension is <span>\\n<span>\\n<img data-mimesubtype=\\\"png\\\" data-type=\\\"\\\" src=\\\"https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20230705121909977-0335:S0956792522000316:S0956792522000316_inline2.png\\\"/>\\n<span data-mathjax-type=\\\"texmath\\\"><span>\\n$n-1$\\n</span></span>\\n</span>\\n</span>.</p>\",\"PeriodicalId\":51046,\"journal\":{\"name\":\"European Journal of Applied Mathematics\",\"volume\":\"22 1\",\"pages\":\"\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2022-11-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Journal of Applied Mathematics\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1017/s0956792522000316\",\"RegionNum\":4,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"MATHEMATICS, APPLIED\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Applied Mathematics","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1017/s0956792522000316","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 0

Abstract

Loss functions with a large number of saddle points are one of the major obstacles for training modern machine learning (ML) models efficiently. First-order methods such as gradient descent (GD) are usually the methods of choice for training ML models. However, these methods converge to saddle points for certain choices of initial guesses. In this paper, we propose a modification of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317], called modified LSGD (mLSGD), and demonstrate its potential to avoid saddle points without sacrificing the convergence rate. Our analysis is based on the attraction region, formed by all starting points for which the considered numerical scheme converges to a saddle point. We investigate the attraction region's dimension both analytically and numerically. For a canonical class of quadratic functions, we show that the dimension of the attraction region for mLSGD is $\lfloor (n-1)/2\rfloor$, and hence it is significantly smaller than that of GD, whose dimension is $n-1$.
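The sketch below is not taken from the paper; it is a minimal NumPy illustration of the phenomenon the abstract describes. It runs plain GD and an LSGD-style smoothed update on a diagonal quadratic with a single negative eigenvalue (a saddle at the origin), starting from a point inside GD's attraction region. The quadratic, step size, the value of sigma, and the periodic discrete Laplacian used to build the smoothing matrix are choices made for this sketch, loosely following the LSGD reference [Osher et al., arXiv:1806.06317]; the mLSGD modification analysed in this paper is not reproduced here.

```python
# Illustration only (not from the paper): GD vs. an LSGD-style smoothed update
# on f(x) = 0.5 * x^T D x, where D has one negative eigenvalue, so the origin
# is a saddle point. The smoothing matrix A = I - sigma * Lap (periodic
# discrete Laplacian) follows the LSGD reference; sigma, eta and D are
# arbitrary choices made for this sketch.
import numpy as np

n = 6
eigs = np.array([1.0, 0.8, 0.6, 0.4, 0.2, -0.5])   # one negative eigenvalue
D = np.diag(eigs)

def grad(x):
    # Gradient of the quadratic f(x) = 0.5 * x^T D x.
    return D @ x

# Periodic discrete Laplacian and the smoothing matrix A = I - sigma * Lap.
Lap = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
Lap[0, -1] = Lap[-1, 0] = 1.0
sigma = 1.0
A = np.eye(n) - sigma * Lap                          # symmetric positive definite

def run(x0, smoothed, eta=0.1, iters=2000):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        step = np.linalg.solve(A, g) if smoothed else g
        x = x - eta * step
    return x

# Start inside GD's attraction region of the saddle: zero component along the
# negative-eigenvalue direction (last coordinate).
x0 = [1.0, -0.7, 0.3, 0.5, -0.2, 0.0]
print("GD   distance to saddle:", np.linalg.norm(run(x0, smoothed=False)))  # stays ~0 (trapped)
print("LSGD distance to saddle:", np.linalg.norm(run(x0, smoothed=True)))   # typically grows (escapes)
```

With this start, plain GD decays to the saddle because its update is diagonal and never excites the unstable direction, whereas the smoothed preconditioner mixes coordinates, so the iterate generically picks up a component along the unstable direction and moves away. An LSGD-style update has a trapped set of its own; the example only shows that a start trapped by GD need not be trapped by the smoothed scheme, and the paper's dimension counts ($\lfloor (n-1)/2\rfloor$ versus $n-1$) are proved for mLSGD, not for this sketch.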

Source journal metrics
CiteScore: 4.70
Self-citation rate: 0.00%
Articles published: 31
Review time: >12 weeks
About the journal
Since 2008, EJAM surveys have been expanded to cover Applied and Industrial Mathematics. Coverage of the journal has been strengthened in probabilistic applications, while still focusing on those areas of applied mathematics inspired by real-world applications, and at the same time fostering the development of theoretical methods with a broad range of applicability. Survey papers contain reviews of emerging areas of mathematics, either in core areas or with relevance to users in industry and other disciplines. Research papers may be in any area of applied mathematics, with special emphasis on new mathematical ideas relevant to modelling and analysis in modern science and technology, and the development of interesting mathematical methods of wide applicability.
Latest articles from this journal
Exact recovery of community detection in k-community Gaussian mixture models
Local geometric properties of conductive transmission eigenfunctions and applications
Non-linear biphasic mixture model: Existence and uniqueness results
Optimal transport through a toll station
Stabilization in a chemotaxis system modelling T-cell dynamics with simultaneous production and consumption of signals