A novel neural-network gradient optimization algorithm based on reinforcement learning

Lei Lv, Ziming Chen, Zhenyu Lu
{"title":"一种基于强化学习的神经网络梯度优化算法","authors":"Lei Lv, Ziming Chen, Zhenyu Lu","doi":"10.1109/SPAC49953.2019.237884","DOIUrl":null,"url":null,"abstract":"Searching appropriate step size and hyperparameter is the key to getting a robust convergence for gradient descent optimization algorithm. This study comes up with a novel gradient descent strategy based on reinforce learning, in which the gradient information of each time step is expressed as the state information of markov decision process in iterative optimization of neural network. We design a variable-view distance planner with a markov decision process as its recursive core for neural-network gradient descent. It combines the advantages of model-free learning and model-based learning, and fully utilizes the state transition information of the optimized neural-network objective function at each step. Experimental results show that the proposed method not only retains the merits of the model-free asymptotic optimal strategy but also enhances the utilization rate of samples compared with manually designed optimization algorithms.","PeriodicalId":410003,"journal":{"name":"2019 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A novel neural-network gradient optimization algorithm based on reinforcement learning\",\"authors\":\"Lei Lv, Ziming Chen, Zhenyu Lu\",\"doi\":\"10.1109/SPAC49953.2019.237884\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Searching appropriate step size and hyperparameter is the key to getting a robust convergence for gradient descent optimization algorithm. This study comes up with a novel gradient descent strategy based on reinforce learning, in which the gradient information of each time step is expressed as the state information of markov decision process in iterative optimization of neural network. We design a variable-view distance planner with a markov decision process as its recursive core for neural-network gradient descent. It combines the advantages of model-free learning and model-based learning, and fully utilizes the state transition information of the optimized neural-network objective function at each step. 
Experimental results show that the proposed method not only retains the merits of the model-free asymptotic optimal strategy but also enhances the utilization rate of samples compared with manually designed optimization algorithms.\",\"PeriodicalId\":410003,\"journal\":{\"name\":\"2019 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SPAC49953.2019.237884\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPAC49953.2019.237884","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Searching for an appropriate step size and hyperparameters is key to achieving robust convergence in gradient descent optimization algorithms. This study proposes a novel gradient descent strategy based on reinforcement learning, in which the gradient information at each time step is expressed as the state of a Markov decision process during the iterative optimization of a neural network. We design a variable-view-distance planner, with a Markov decision process as its recursive core, for neural-network gradient descent. The planner combines the advantages of model-free and model-based learning, and fully utilizes the state-transition information of the optimized neural-network objective function at each step. Experimental results show that the proposed method not only retains the merits of a model-free asymptotically optimal strategy but also achieves higher sample efficiency than manually designed optimization algorithms.
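To make the formulation concrete, below is a minimal sketch, not the authors' implementation, of casting gradient descent as a Markov decision process: the state is derived from the current gradient information, the action is the choice of step size, and the reward is the resulting decrease in the objective. The toy quadratic objective, the state discretization, and the tabular Q-learning update are all illustrative assumptions; the paper's variable-view-distance planner and its model-based component are not reproduced here.

```python
import numpy as np

# Hypothetical sketch: a Q-learning agent that picks the step size for
# gradient descent. State = binned log gradient norm; action = step size;
# reward = decrease in the objective after the update.

STEP_SIZES = np.array([1e-3, 1e-2, 1e-1, 1.0])  # discrete action set
N_BINS = 12                                      # state discretization
q_table = np.zeros((N_BINS, len(STEP_SIZES)))
lr, gamma, eps = 0.2, 0.9, 0.1                   # Q-learning hyperparameters

def loss(w):        # toy quadratic objective standing in for a network loss
    return 0.5 * np.sum(w ** 2)

def grad(w):        # gradient of the toy objective
    return w

def to_state(g):
    # Bin the log gradient norm (assumed range ~[1e-6, 1e2]) into N_BINS buckets.
    log_norm = np.log10(np.linalg.norm(g) + 1e-12)
    return int(np.clip((log_norm + 6.0) / 8.0 * N_BINS, 0, N_BINS - 1))

w = np.random.randn(5)
s = to_state(grad(w))
for t in range(200):
    # Epsilon-greedy choice of step size (the MDP action).
    a = (np.random.randint(len(STEP_SIZES)) if np.random.rand() < eps
         else int(np.argmax(q_table[s])))
    w_next = w - STEP_SIZES[a] * grad(w)
    r = loss(w) - loss(w_next)                   # reward: loss decrease
    s_next = to_state(grad(w_next))
    # One-step Q-learning backup on the observed (state, action) transition.
    q_table[s, a] += lr * (r + gamma * q_table[s_next].max() - q_table[s, a])
    w, s = w_next, s_next

print("final loss:", loss(w))
```

A full implementation would replace the toy quadratic with the neural network's training loss and, per the abstract, combine this model-free update with a model-based planning component that exploits the observed state transitions.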