Application Research of Model-Free Reinforcement Learning under the Condition of Conditional Transfer Function with Coupling Factors

Xiaoya Yang, Youtian Guo, Rui Wang, Xiaohui Hu
{"title":"耦合因子条件传递函数条件下无模型强化学习的应用研究","authors":"Xiaoya Yang, Youtian Guo, Rui Wang, Xiaohui Hu","doi":"10.1145/3430199.3430210","DOIUrl":null,"url":null,"abstract":"Dynamic systems are ubiquitous in nature. The analysis of the stability and performance of dynamic systems has been a research hotspot in control science and operations research for a long time. In this paper, we construct and analyze an actual sequential decision-making problem of dynamic system. The Model-Free reinforcement learning algorithms are used to optimize this problem. The problem is analyzed in detail through adaptive control theory and information theory, also the extreme performance of the algorithm is pointed out. In this paper, we select three classic Model-Free reinforcement learning algorithms, DQN, DQN-PER, and PPO, to compare and analyze their performance on the timing series decision problem we construct.","PeriodicalId":371055,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","volume":"215 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Application Research of Model-Free Reinforcement Learning under the Condition of Conditional Transfer Function with Coupling Factors\",\"authors\":\"Xiaoya Yang, Youtian Guo, Rui Wang, Xiaohui Hu\",\"doi\":\"10.1145/3430199.3430210\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Dynamic systems are ubiquitous in nature. The analysis of the stability and performance of dynamic systems has been a research hotspot in control science and operations research for a long time. In this paper, we construct and analyze an actual sequential decision-making problem of dynamic system. The Model-Free reinforcement learning algorithms are used to optimize this problem. The problem is analyzed in detail through adaptive control theory and information theory, also the extreme performance of the algorithm is pointed out. 
In this paper, we select three classic Model-Free reinforcement learning algorithms, DQN, DQN-PER, and PPO, to compare and analyze their performance on the timing series decision problem we construct.\",\"PeriodicalId\":371055,\"journal\":{\"name\":\"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition\",\"volume\":\"215 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3430199.3430210\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 3rd International Conference on Artificial Intelligence and Pattern Recognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3430199.3430210","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Dynamic systems are ubiquitous in nature, and the analysis of their stability and performance has long been a research hotspot in control science and operations research. In this paper, we construct and analyze a practical sequential decision-making problem for a dynamic system and use model-free reinforcement learning algorithms to optimize it. The problem is analyzed in detail through adaptive control theory and information theory, and the performance limits of the algorithms are identified. We select three classic model-free reinforcement learning algorithms, DQN, DQN-PER, and PPO, and compare and analyze their performance on the time-series decision problem we construct.
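For illustration only (not the authors' code): the sketch below shows one way such a comparison of model-free agents could be set up using the Stable-Baselines3 library. The paper's environment (a conditional transfer function with coupling factors) is not publicly specified, so the standard Gymnasium environment CartPole-v1 is used here purely as a placeholder, and DQN-PER is omitted because prioritized experience replay is not part of the core library; the environment choice, hyperparameters, and helper names are assumptions.

    # Minimal sketch: compare two model-free RL algorithms (DQN, PPO) on a
    # stand-in environment. This is NOT the paper's setup; CartPole-v1 is a
    # placeholder for the authors' coupled transfer-function environment.
    import gymnasium as gym
    from stable_baselines3 import DQN, PPO
    from stable_baselines3.common.evaluation import evaluate_policy

    def train_and_evaluate(algo_cls, env_id="CartPole-v1", timesteps=50_000):
        """Train one agent and return its mean and std of episodic return."""
        env = gym.make(env_id)
        model = algo_cls("MlpPolicy", env, verbose=0)
        model.learn(total_timesteps=timesteps)
        mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=20)
        env.close()
        return mean_reward, std_reward

    if __name__ == "__main__":
        for algo in (DQN, PPO):
            mean_r, std_r = train_and_evaluate(algo)
            print(f"{algo.__name__}: {mean_r:.1f} +/- {std_r:.1f}")

A DQN-PER comparison would additionally require swapping in a prioritized replay buffer, which would have to come from a third-party implementation or be written by hand.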