DAPID: A Differential-adaptive PID Optimization Strategy for Neural Network Training

Yulin Cai, Haoqian Wang
DOI: 10.1109/IJCNN55064.2022.9892746
Published in: 2022 International Joint Conference on Neural Networks (IJCNN)
Publication date: 2022-07-18
Citations: 0

Abstract

Derived from automatic control theory, the PID optimizer for neural network training can effectively inhibit the overshoot phenomenon of conventional optimization algorithms such as SGD-Momentum. However, its differential term may unexpectedly grow to a relatively large scale during iteration, which can amplify the inherent noise of the input samples and deteriorate the training process. In this paper, we adopt a self-adaptive iterating rule for the PID optimizer's differential term, which uses both first-order and second-order moment estimation to approximate the differential's unbiased statistical value. Such a strategy prevents the differential term from diverging and accelerates the iteration without adding much computational cost. Empirical results on several popular machine learning datasets demonstrate that the proposed optimization strategy achieves favorable acceleration of convergence as well as competitive accuracy compared with other stochastic optimization approaches.
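The exact DAPID update rule is given in the paper itself; the sketch below is only a plausible NumPy illustration of the idea the abstract describes. It assumes a PID-style update θ ← θ − lr·(kp·g + I + kd·D), where the D-term (the gradient difference) is rescaled by Adam-style bias-corrected first/second moment estimates so its magnitude stays bounded. The function names, gain values, and the precise normalization are assumptions for illustration, not the authors' formulas.

```python
import numpy as np

def init_state(shape):
    """Optimizer state: step count, I-term buffer, previous gradient,
    and first/second moment estimates of the differential signal."""
    return {"t": 0, "I": np.zeros(shape), "g_prev": np.zeros(shape),
            "m": np.zeros(shape), "v": np.zeros(shape)}

def dapid_step(theta, grad, state, lr=0.01, kp=1.0, ki=0.9, kd=0.1,
               beta1=0.9, beta2=0.999, eps=1e-8):
    """One PID-style update with an adaptively rescaled differential term."""
    state["t"] += 1
    t = state["t"]

    # I-term: exponentially accumulated gradient history (momentum).
    state["I"] = ki * state["I"] + grad

    # Raw D-signal: change of the gradient between consecutive iterations.
    diff = grad - state["g_prev"]
    state["g_prev"] = grad.copy()

    # Adam-style bias-corrected moment estimates of the D-signal.
    # Dividing by sqrt(v_hat) keeps the D-term's scale bounded instead
    # of letting it blow up when the gradients are noisy.
    state["m"] = beta1 * state["m"] + (1 - beta1) * diff
    state["v"] = beta2 * state["v"] + (1 - beta2) * diff ** 2
    m_hat = state["m"] / (1 - beta1 ** t)
    v_hat = state["v"] / (1 - beta2 ** t)
    d_term = m_hat / (np.sqrt(v_hat) + eps)

    return theta - lr * (kp * grad + state["I"] + kd * d_term)

# Demo: minimize f(theta) = 0.5 * theta**2, whose gradient is theta.
theta = np.array([5.0])
state = init_state(theta.shape)
for _ in range(300):
    theta = dapid_step(theta, theta.copy(), state)
# theta has converged close to the minimum at 0.
```

Note the design choice this illustrates: a plain D-term kd·(g_t − g_{t−1}) has no intrinsic scale, whereas the normalized ratio m̂/√v̂ is O(1) by construction, which is what keeps the differential contribution from dominating the update.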