Generalized M-sparse algorithms for constructing fault tolerant RBF networks

IF 6.0 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Neural Networks | Pub Date: 2024-08-14 | DOI: 10.1016/j.neunet.2024.106633
{"title":"构建容错 RBF 网络的广义 M-稀疏算法","authors":"","doi":"10.1016/j.neunet.2024.106633","DOIUrl":null,"url":null,"abstract":"<div><p>In the construction process of radial basis function (RBF) networks, two common crucial issues arise: the selection of RBF centers and the effective utilization of the given source without encountering the overfitting problem. Another important issue is the fault tolerant capability. That is, when noise or faults exist in a trained network, it is crucial that the network’s performance does not undergo significant deterioration or decrease. However, without employing a fault tolerant procedure, a trained RBF network may exhibit significantly poor performance. Unfortunately, most existing algorithms are unable to simultaneously address all of the aforementioned issues. This paper proposes fault tolerant training algorithms that can simultaneously select RBF nodes and train RBF output weights. Additionally, our algorithms can directly control the number of RBF nodes in an explicit manner, eliminating the need for a time-consuming procedure to tune the regularization parameter and achieve the target RBF network size. Based on simulation results, our algorithms demonstrate improved test set performance when more RBF nodes are used, effectively utilizing the given source without encountering the overfitting problem. This paper first defines a fault tolerant objective function, which includes a term to suppress the effects of weight faults and weight noise. This term also prevents the issue of overfitting, resulting in better test set performance when more RBF nodes are utilized. With the defined objective function, the training process is designed to solve a generalized <span><math><mi>M</mi></math></span>-sparse problem by incorporating an <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>0</mn></mrow></msub></math></span>-norm constraint. The <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>0</mn></mrow></msub></math></span>-norm constraint allows us to directly and explicitly control the number of RBF nodes. To address the generalized <span><math><mi>M</mi></math></span>-sparse problem, we introduce the noise-resistant iterative hard thresholding (NR-IHT) algorithm. The convergence properties of the NR-IHT algorithm are subsequently discussed theoretically. To further enhance performance, we incorporate the momentum concept into the NR-IHT algorithm, referring to the modified version as “NR-IHT-Mom”. Simulation results show that both the NR-IHT algorithm and the NR-IHT-Mom algorithm outperform several state-of-the-art comparison algorithms.</p></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":null,"pages":null},"PeriodicalIF":6.0000,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Generalized M-sparse algorithms for constructing fault tolerant RBF networks\",\"authors\":\"\",\"doi\":\"10.1016/j.neunet.2024.106633\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>In the construction process of radial basis function (RBF) networks, two common crucial issues arise: the selection of RBF centers and the effective utilization of the given source without encountering the overfitting problem. Another important issue is the fault tolerant capability. That is, when noise or faults exist in a trained network, it is crucial that the network’s performance does not undergo significant deterioration or decrease. 
However, without employing a fault tolerant procedure, a trained RBF network may exhibit significantly poor performance. Unfortunately, most existing algorithms are unable to simultaneously address all of the aforementioned issues. This paper proposes fault tolerant training algorithms that can simultaneously select RBF nodes and train RBF output weights. Additionally, our algorithms can directly control the number of RBF nodes in an explicit manner, eliminating the need for a time-consuming procedure to tune the regularization parameter and achieve the target RBF network size. Based on simulation results, our algorithms demonstrate improved test set performance when more RBF nodes are used, effectively utilizing the given source without encountering the overfitting problem. This paper first defines a fault tolerant objective function, which includes a term to suppress the effects of weight faults and weight noise. This term also prevents the issue of overfitting, resulting in better test set performance when more RBF nodes are utilized. With the defined objective function, the training process is designed to solve a generalized <span><math><mi>M</mi></math></span>-sparse problem by incorporating an <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>0</mn></mrow></msub></math></span>-norm constraint. The <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>0</mn></mrow></msub></math></span>-norm constraint allows us to directly and explicitly control the number of RBF nodes. To address the generalized <span><math><mi>M</mi></math></span>-sparse problem, we introduce the noise-resistant iterative hard thresholding (NR-IHT) algorithm. The convergence properties of the NR-IHT algorithm are subsequently discussed theoretically. To further enhance performance, we incorporate the momentum concept into the NR-IHT algorithm, referring to the modified version as “NR-IHT-Mom”. Simulation results show that both the NR-IHT algorithm and the NR-IHT-Mom algorithm outperform several state-of-the-art comparison algorithms.</p></div>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2024-08-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0893608024005574\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0893608024005574","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In the construction of radial basis function (RBF) networks, two crucial issues commonly arise: the selection of RBF centers, and the effective use of the available resources without running into overfitting. Another important issue is fault tolerance: when noise or faults exist in a trained network, it is crucial that the network's performance does not deteriorate significantly. However, without a fault-tolerant training procedure, a trained RBF network may perform very poorly. Unfortunately, most existing algorithms cannot address all of these issues simultaneously. This paper proposes fault-tolerant training algorithms that select RBF nodes and train RBF output weights at the same time. In addition, our algorithms control the number of RBF nodes directly and explicitly, eliminating the time-consuming procedure of tuning a regularization parameter to reach the target network size. Simulation results show that our algorithms achieve better test set performance as more RBF nodes are used, making effective use of the available resources without overfitting. The paper first defines a fault-tolerant objective function that includes a term to suppress the effects of weight faults and weight noise. This term also prevents overfitting, yielding better test set performance when more RBF nodes are used. With this objective function, training is formulated as a generalized M-sparse problem with an ℓ0-norm constraint; the ℓ0-norm constraint lets us control the number of RBF nodes directly and explicitly. To solve the generalized M-sparse problem, we introduce the noise-resistant iterative hard thresholding (NR-IHT) algorithm and discuss its convergence properties theoretically. To further improve performance, we incorporate momentum into NR-IHT, referring to the modified version as "NR-IHT-Mom". Simulations show that both NR-IHT and NR-IHT-Mom outperform several state-of-the-art comparison algorithms.
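For a concrete picture of what the abstract describes, the training problem can be written in the following representative form. This is a hedged reconstruction, not the paper's exact objective: the regularization term shown here is a standard fault-tolerant penalty for multiplicative weight noise of variance σ_b² from the RBF literature; the ℓ0 constraint, however, is exactly what makes the node count M directly controllable.

```latex
% Sketch (assumption): a representative generalized M-sparse problem.
% \Phi is the N x K matrix of candidate RBF node outputs, w the output
% weights, y the targets, \sigma_b^2 an assumed weight-noise variance.
\min_{\mathbf{w}}\; F(\mathbf{w}) =
  \frac{1}{N}\bigl\lVert \mathbf{y}-\boldsymbol{\Phi}\mathbf{w}\bigr\rVert_2^2
  + \sigma_b^2\,\mathbf{w}^{\top}
    \operatorname{diag}\!\Bigl(\tfrac{1}{N}\boldsymbol{\Phi}^{\top}\boldsymbol{\Phi}\Bigr)\mathbf{w}
\quad\text{s.t.}\quad \lVert \mathbf{w}\rVert_0 \le M .
```

The NR-IHT iteration itself is not given in the abstract. The sketch below assumes the usual iterative-hard-thresholding template (a gradient step on F followed by projection onto the M largest-magnitude weights), with a heavy-ball momentum buffer standing in for the NR-IHT-Mom variant. All names (`nr_iht`, `hard_threshold`, `sigma_b2`, `momentum`) and the synthetic data are illustrative, not taken from the paper.

```python
import numpy as np

def hard_threshold(w, M):
    """Project w onto {x : ||x||_0 <= M} by keeping the M largest-magnitude
    entries and zeroing the rest (assumes 1 <= M <= len(w))."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-M:]          # indices of the M largest |w_i|
    out[keep] = w[keep]
    return out

def nr_iht(Phi, y, M, sigma_b2=0.01, eta=None, momentum=0.0, n_iters=500):
    """Hedged sketch of a noise-resistant IHT loop for RBF output weights.

    Phi: (N, K) responses of K candidate RBF nodes on N samples.
    y:   (N,) training targets.
    M:   target number of active RBF nodes (the l0 budget).
    sigma_b2: assumed multiplicative weight-noise variance (illustrative).
    momentum: 0.0 gives plain IHT; > 0 mimics an NR-IHT-Mom-style update.
    """
    N, K = Phi.shape
    d = np.sum(Phi * Phi, axis=0) / N          # diag(Phi^T Phi) / N, as a vector
    if eta is None:
        # conservative step size from a Lipschitz bound on grad F
        L = 2.0 * (np.linalg.norm(Phi, ord=2) ** 2 / N + sigma_b2 * d.max())
        eta = 1.0 / L
    w = np.zeros(K)
    v = np.zeros(K)                            # momentum buffer
    for _ in range(n_iters):
        grad = (2.0 / N) * (Phi.T @ (Phi @ w - y)) + 2.0 * sigma_b2 * d * w
        v = momentum * v + grad                # heavy-ball accumulation
        w = hard_threshold(w - eta * v, M)     # gradient step, then l0 projection
    return w

# Toy usage on synthetic data: select 10 of 50 candidate nodes.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((200, 50))
y = Phi[:, :10] @ rng.standard_normal(10)
w = nr_iht(Phi, y, M=10, momentum=0.9)
print(np.count_nonzero(w))                     # at most 10 active nodes
```

Applying the projection to the momentum-smoothed step, as above, is one common way to accelerate IHT; whether this matches the paper's NR-IHT-Mom update exactly cannot be inferred from the abstract.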

Source journal
Neural Networks (Engineering & Technology: Computer Science, Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles per year: 425
Review time: 67 days
About the journal: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.
Latest articles in this journal
Multi-focus image fusion with parameter adaptive dual channel dynamic threshold neural P systems.
Joint computation offloading and resource allocation for end-edge collaboration in internet of vehicles via multi-agent reinforcement learning.
An information-theoretic perspective of physical adversarial patches.
Contrastive fine-grained domain adaptation network for EEG-based vigilance estimation.
Decoupling visual and identity features for adversarial palm-vein image attack.