Novel adaptive parameter fractional-order gradient descent learning for stock selection decision support systems

European Journal of Operational Research | Impact Factor 6.0 | JCR Q1 (Operations Research & Management Science) | CAS Tier 2 (Management Science)
Pub Date: 2025-07-01 | Epub Date: 2025-01-18 | DOI: 10.1016/j.ejor.2025.01.013
Mingjie Ma, Siyuan Chen, Lunan Zheng
{"title":"Novel adaptive parameter fractional-order gradient descent learning for stock selection decision support systems","authors":"Mingjie Ma ,&nbsp;Siyuan Chen ,&nbsp;Lunan Zheng","doi":"10.1016/j.ejor.2025.01.013","DOIUrl":null,"url":null,"abstract":"<div><div>Gradient descent methods are widely used as optimization algorithms for updating neural network weights. With advancements in fractional-order calculus, fractional-order gradient descent algorithms have demonstrated superior optimization performance. Nevertheless, existing fractional-order gradient descent algorithms have shortcomings in terms of structural design and theoretical derivation. Specifically, the convergence of fractional-order algorithms in the existing literature relies on the assumed boundedness of network weights. This assumption leads to uncertainty in the optimization results. To address this issue, this paper proposes several adaptive parameter fractional-order gradient descent learning (AP-FOGDL) algorithms based on the Caputo and Riemann–Liouville derivatives. To fully leverage the convergence theorem, an adaptive learning rate is designed by introducing computable upper bounds. The convergence property is then theoretically proven for both derivatives, with and without the adaptive learning rate. Moreover, to enhance prediction accuracy, an amplification factor is employed to increase the adaptive learning rate. Finally, practical applications on a stock selection dataset and a bankruptcy dataset substantiate the feasibility, high accuracy, and strong generalization performance of the proposed algorithms. A comparative study between the proposed methods and other relevant gradient descent methods demonstrates the superiority of the AP-FOGDL algorithms.</div></div>","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"324 1","pages":"Pages 276-289"},"PeriodicalIF":6.0000,"publicationDate":"2025-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Operational Research","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0377221725000384","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/18 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"OPERATIONS RESEARCH & MANAGEMENT SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

Gradient descent methods are widely used as optimization algorithms for updating neural network weights. With advancements in fractional-order calculus, fractional-order gradient descent algorithms have demonstrated superior optimization performance. Nevertheless, existing fractional-order gradient descent algorithms have shortcomings in terms of structural design and theoretical derivation. Specifically, the convergence of fractional-order algorithms in the existing literature relies on the assumed boundedness of network weights. This assumption leads to uncertainty in the optimization results. To address this issue, this paper proposes several adaptive parameter fractional-order gradient descent learning (AP-FOGDL) algorithms based on the Caputo and Riemann–Liouville derivatives. To fully leverage the convergence theorem, an adaptive learning rate is designed by introducing computable upper bounds. The convergence property is then theoretically proven for both derivatives, with and without the adaptive learning rate. Moreover, to enhance prediction accuracy, an amplification factor is employed to increase the adaptive learning rate. Finally, practical applications on a stock selection dataset and a bankruptcy dataset substantiate the feasibility, high accuracy, and strong generalization performance of the proposed algorithms. A comparative study between the proposed methods and other relevant gradient descent methods demonstrates the superiority of the AP-FOGDL algorithms.
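For readers unfamiliar with the fractional-order updates referenced above, the sketch below shows how a Caputo-type fractional gradient step is commonly written in prior work, applied to a toy quadratic loss. It is a minimal illustration under assumed parameter choices: the fractional factor, the adaptive learning-rate heuristic, and every name and constant in the code are assumptions, not the paper's AP-FOGDL formulation or its computable upper bounds.

```python
# Minimal illustrative sketch (NOT the paper's AP-FOGDL algorithm): a
# Caputo-type fractional-order gradient step on a toy quadratic loss.
# The fractional term, the adaptive learning-rate heuristic, and all
# parameter values below are assumptions made purely for illustration.
import numpy as np
from math import gamma

alpha = 0.9      # fractional order in (0, 1), assumed
base_lr = 0.1    # nominal learning rate, assumed
eps = 1e-8       # guard so the fractional factor never collapses to zero

target = np.array([1.0, -2.0])

def loss_grad(w):
    # gradient of the toy loss L(w) = 0.5 * ||w - target||^2
    return w - target

w_prev = np.zeros(2)
w = np.array([5.0, 5.0])

for _ in range(500):
    g = loss_grad(w)
    # Caputo-style fractional factor |w_k - w_{k-1}|^(1 - alpha) / Gamma(2 - alpha),
    # a form that appears in earlier fractional-order gradient descent work.
    frac = np.abs(w - w_prev) ** (1.0 - alpha) / gamma(2.0 - alpha) + eps
    # Crude "adaptive" learning rate: damp the step when the gradient is large.
    # This only mimics the idea of a bounded step; it is not the paper's
    # computable-upper-bound schedule.
    lr = base_lr / (1.0 + np.linalg.norm(g))
    w_prev, w = w, w - lr * g * frac

print("final weights:", w)  # should end up close to target = [1.0, -2.0]
```

In the actual AP-FOGDL algorithms, the learning rate is instead derived from computable upper bounds so that convergence can be proven without assuming bounded weights, and an amplification factor is applied to that rate; the heuristic above merely gestures at that structure.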
Source journal
European Journal of Operational Research (Management Science – Operations Research & Management Science)
CiteScore: 11.90
Self-citation rate: 9.40%
Articles published: 786
Review time: 8.2 months
Journal description: The European Journal of Operational Research (EJOR) publishes high quality, original papers that contribute to the methodology of operational research (OR) and to the practice of decision making.
Latest articles in this journal
Measuring consensus and voter influence in ternary preferences
Graph model for conflict resolution of large-scale group with overlapping coalitions based on meta-clustering under big data
Feature-guided metaheuristic with diversity management for solving the capacitated vehicle routing problem
Portfolio optimization with robust stochastic dominance testing: A genetic algorithm approach
Energy management for electric vehicles in facility logistics: A survey from an operational research perspective