Persistent Excitation of Improved RBF Neural Networks: Neuron Dynamic-Growing Strategy

IEEE Transactions on Neural Networks and Learning Systems · Impact Factor 8.9 · JCR Q1, Region 1 (Computer Science, Artificial Intelligence) · Pub Date: 2025-02-10 · DOI: 10.1109/TNNLS.2025.3528118
Min Wang;Mingyu Wang;Chenguang Yang
Volume 36, Issue 8, pp. 15553–15560.
Citations: 0

Abstract

This brief proposes a novel neuron dynamic-growing (NDG) strategy for radial basis function neural networks (RBF NNs). Only one neuron is selected in advance, based on the system's initial states; the remaining neurons are generated dynamically whenever the distance between the current NN input and the closest existing neuron exceeds a designed threshold. Compared with an RBF NN using the neuron fixed evenly spaced strategy (NFES), the improved RBF NN has two major advantages: it drastically reduces the number of neurons, especially for high-dimensional NN inputs, and it provides a theoretical criterion for choosing the NN structure parameters, including the neuron centers and the size of the compact set. To guarantee the dynamic learning ability of the improved RBF NN, the persistent excitation (PE) condition is verified rigorously by carefully constructing the threshold and the centers of newly added neurons. Simulation and experimental results illustrate that integrating the improved RBF NN into the existing dynamic learning control effectively enhances transient control performance, reduces the computational burden, and saves data storage space.
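The growth rule described above — start from a single neuron and add a new one only when the current input lies farther than a threshold from every existing center — can be sketched as follows. This is a minimal illustration of the general idea, not the paper's method: the class name, the Gaussian width, and the choice to place the new center exactly at the input are assumptions for illustration (the paper constructs the threshold and new centers specifically so that persistent excitation holds).

```python
import numpy as np

class DynamicRBF:
    """Hypothetical sketch of a neuron dynamic-growing (NDG) rule:
    begin with one neuron and add another whenever the current input
    is farther than a threshold from every existing center."""

    def __init__(self, initial_center, threshold, width=1.0):
        # The first neuron is fixed in advance from the initial state.
        self.centers = [np.asarray(initial_center, dtype=float)]
        self.threshold = float(threshold)
        self.width = float(width)

    def maybe_grow(self, x):
        """Add a neuron centered at x if the closest existing center
        is beyond the threshold; return True when a neuron was added."""
        x = np.asarray(x, dtype=float)
        closest = min(np.linalg.norm(x - c) for c in self.centers)
        if closest > self.threshold:
            self.centers.append(x)
            return True
        return False

    def features(self, x):
        """Gaussian RBF activations for the current set of neurons."""
        x = np.asarray(x, dtype=float)
        return np.array([np.exp(-np.linalg.norm(x - c) ** 2 / self.width ** 2)
                         for c in self.centers])

net = DynamicRBF(initial_center=[0.0, 0.0], threshold=0.5)
for x in [[0.1, 0.0], [0.8, 0.0], [0.85, 0.05], [1.6, 0.0]]:
    net.maybe_grow(x)
print(len(net.centers))  # prints 3: neurons appear only where the trajectory visits new regions
```

Because neurons are created only along the visited trajectory, the network size scales with the region the input actually covers rather than with a grid over the whole input space, which is the source of the claimed savings for high-dimensional inputs.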
Source journal: IEEE Transactions on Neural Networks and Learning Systems (Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture)
CiteScore: 23.80
Self-citation rate: 9.60%
Articles published: 2102
Review time: 3–8 weeks
About the journal: IEEE Transactions on Neural Networks and Learning Systems presents scholarly articles on the theory, design, and applications of neural networks and other learning systems, with an emphasis on technical and scientific research in this domain.
Latest articles in this journal:
- When Optimal Transport Meets Photo-Realistic Image Dehazing With Unpaired Training
- Multistage PCA Whitening: A Robust Method to Dimensionality Reduction in Image Retrieval
- S2FS: Spatially-Aware Separability-Driven Feature Selection in Fuzzy Decision Systems
- Neural Architecture Search With Spatial-Spectral Attention for Higher-Order Nonlinear Hyperspectral Unmixing
- Spatial Meta-Learning-Based Representation for Unseen Geographic Entities