Fractional-order gradient approach for optimizing neural networks: A theoretical and empirical analysis

IF 5.3 · CAS Tier 1 (Mathematics) · Q1 MATHEMATICS, INTERDISCIPLINARY APPLICATIONS · Chaos, Solitons & Fractals · Pub Date: 2025-01-18 · DOI: 10.1016/j.chaos.2025.116009
Priyanka Harjule, Rinki Sharma, Rajesh Kumar
{"title":"Fractional-order gradient approach for optimizing neural networks: A theoretical and empirical analysis","authors":"Priyanka Harjule, Rinki Sharma, Rajesh Kumar","doi":"10.1016/j.chaos.2025.116009","DOIUrl":null,"url":null,"abstract":"This article proposes a modified fractional gradient descent algorithm to enhance the learning capabilities of neural networks, comprising the benefits of a metaheuristic optimizer. The use of fractional derivatives, which possess memory properties, offers an additional degree of adaptability to the network. The convergence of the fractional gradient descent algorithm, incorporating the Caputo derivative in the neural network’s backpropagation process, is thoroughly examined, and a detailed convergence analysis is provided which indicates that it enables a more gradual and controlled adaptation of the network to the data. Additionally, the optimal fractional order has been found for each dataset, a contribution that has not been previously explored in the literature, which has a significant impact on the training of neural networks with fractional gradient backpropagation. In the experiments, four classification datasets and one regression dataset were used, and the results consistently show that the proposed hybrid algorithm achieves faster convergence across all cases. The empirical results with the proposed algorithm are supported by theoretical convergence analysis. Empirical results demonstrate that the proposed optimizer with optimal order yields more accurate results compared to existing optimizers.","PeriodicalId":9764,"journal":{"name":"Chaos Solitons & Fractals","volume":"12 1","pages":""},"PeriodicalIF":5.3000,"publicationDate":"2025-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Chaos Solitons & Fractals","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1016/j.chaos.2025.116009","RegionNum":1,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

This article proposes a modified fractional gradient descent algorithm to enhance the learning capabilities of neural networks, combining the benefits of a metaheuristic optimizer. The use of fractional derivatives, which possess memory properties, gives the network an additional degree of adaptability. The convergence of the fractional gradient descent algorithm, which incorporates the Caputo derivative into the neural network's backpropagation process, is thoroughly examined, and a detailed convergence analysis is provided indicating that it enables a more gradual and controlled adaptation of the network to the data. Additionally, the optimal fractional order is identified for each dataset, a contribution not previously explored in the literature and one that has a significant impact on training neural networks with fractional gradient backpropagation. In the experiments, four classification datasets and one regression dataset were used, and the results consistently show that the proposed hybrid algorithm achieves faster convergence in all cases. The empirical results are supported by the theoretical convergence analysis and demonstrate that the proposed optimizer with the optimal order yields more accurate results than existing optimizers.
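The abstract does not state the update rule itself. As a rough illustration of how a Caputo-type fractional gradient step differs from ordinary gradient descent, the sketch below uses the common first-order Caputo approximation D^α f(w) ≈ f'(w)·|w − w_prev|^(1−α) / Γ(2−α) applied to a toy quadratic objective. The function names, learning rate, and objective are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a Caputo-type fractional gradient step (illustrative only;
# not the authors' exact formulation). Uses the common first-order approximation
#   D^alpha f(w) ~= f'(w) * |w - w_prev|^(1 - alpha) / Gamma(2 - alpha),
# which reduces to the ordinary gradient when alpha = 1.
import numpy as np
from math import gamma

def caputo_fractional_grad(grad, w, w_prev, alpha, eps=1e-8):
    """Approximate Caputo fractional gradient of order alpha in (0, 2)."""
    return grad * np.abs(w - w_prev + eps) ** (1.0 - alpha) / gamma(2.0 - alpha)

def fractional_gradient_descent(grad_fn, w0, alpha=0.9, lr=0.1, steps=200):
    """Fractional-order gradient descent on a differentiable objective."""
    w_prev = np.copy(w0)
    w = np.copy(w0)
    for _ in range(steps):
        g = grad_fn(w)
        frac_g = caputo_fractional_grad(g, w, w_prev, alpha)
        w_prev, w = w, w - lr * frac_g
    return w

# Toy example (assumed objective): minimize f(w) = 0.5 * ||w - target||^2.
target = np.array([3.0, -2.0])
grad_fn = lambda w: w - target
w_star = fractional_gradient_descent(grad_fn, w0=np.zeros(2), alpha=0.9)
print(w_star)  # approaches [3.0, -2.0]
```

Setting alpha = 1 collapses the extra factor to 1 and recovers plain gradient descent, which is what makes the fractional order a tunable knob; the paper's stated contribution is identifying the optimal order for each dataset.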
Source Journal
Chaos, Solitons & Fractals (Physics: Mathematics, Interdisciplinary Applications)
CiteScore: 13.20
Self-citation rate: 10.30%
Annual publications: 1087
Review time: 9 months
Journal Description: Chaos, Solitons & Fractals strives to establish itself as a premier journal in the interdisciplinary realm of Nonlinear Science, Non-equilibrium, and Complex Phenomena. It welcomes submissions covering a broad spectrum of topics within this field, including dynamics, non-equilibrium processes in physics, chemistry, and geophysics, complex matter and networks, mathematical models, computational biology, applications to quantum and mesoscopic phenomena, fluctuations and random processes, self-organization, and social phenomena.