
Latest Publications: IEEE Transactions on Neural Networks and Learning Systems

FDSRM: A Feature-Driven Style-Agnostic Foundation Model for Sketch-Less Facial Image Retrieval
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1109/tnnls.2025.3633075
Yingge Liu, Dawei Dai, Shuyin Xia, Guoyin Wang
{"title":"FDSRM: A Feature-Driven Style-Agnostic Foundation Model for Sketch-Less Facial Image Retrieval","authors":"Yingge Liu, Dawei Dai, Shuyin Xia, Guoyin Wang","doi":"10.1109/tnnls.2025.3633075","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3633075","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"1 1","pages":"1-15"},"PeriodicalIF":10.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145599044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Model Fusion: A Survey
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1109/tnnls.2025.3628666
Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, Li Shen
{"title":"Deep Model Fusion: A Survey","authors":"Weishi Li, Yong Peng, Miao Zhang, Liang Ding, Han Hu, Li Shen","doi":"10.1109/tnnls.2025.3628666","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3628666","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"199 1","pages":"1-17"},"PeriodicalIF":10.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145599041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reservoir Kernels and Volterra Series
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1109/tnnls.2025.3630143
Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega
{"title":"Reservoir Kernels and Volterra Series","authors":"Lukas Gonon, Lyudmila Grigoryeva, Juan-Pablo Ortega","doi":"10.1109/tnnls.2025.3630143","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3630143","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"203 1","pages":"1-12"},"PeriodicalIF":10.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145599043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
General Network Learning Rules Based on DNA Strand Displacement for Thyroid Disease Prediction
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1109/tnnls.2025.3633665
Junwei Sun, Jiaming Li, Yanfeng Wang, Yan Wang
{"title":"General Network Learning Rules Based on DNA Strand Displacement for Thyroid Disease Prediction","authors":"Junwei Sun, Jiaming Li, Yanfeng Wang, Yan Wang","doi":"10.1109/tnnls.2025.3633665","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3633665","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"18 1","pages":"1-14"},"PeriodicalIF":10.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145599038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Genetic Algorithm-Based Two-Step Optimization for Precise Latent Factor Analysis
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1109/tnnls.2025.3631465
Chao Lyu, Jingna Cheng, Xin Luo, Yuhui Shi
{"title":"Genetic Algorithm-Based Two-Step Optimization for Precise Latent Factor Analysis","authors":"Chao Lyu, Jingna Cheng, Xin Luo, Yuhui Shi","doi":"10.1109/tnnls.2025.3631465","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3631465","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"149 1","pages":"1-13"},"PeriodicalIF":10.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145599040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Simple Unified Uncertainty-Guided Framework for Offline-to-Online Reinforcement Learning
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1109/tnnls.2025.3633997
Siyuan Guo, Yanchao Sun, Jifeng Hu, Sili Huang, Hechang Chen, Haiyin Piao, Lichao Sun, Yi Chang
{"title":"A Simple Unified Uncertainty-Guided Framework for Offline-to-Online Reinforcement Learning","authors":"Siyuan Guo, Yanchao Sun, Jifeng Hu, Sili Huang, Hechang Chen, Haiyin Piao, Lichao Sun, Yi Chang","doi":"10.1109/tnnls.2025.3633997","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3633997","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"8 1","pages":"1-13"},"PeriodicalIF":10.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145599045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Context-Aware Learning and Pattern Decomposition for Temporal Knowledge Graph Reasoning
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1109/tnnls.2025.3631478
Longquan Liao, Linjiang Zheng, Jiaxing Shang, Xu Li, Jiang Zhong, Kaiwen Wei, Yi Tang
{"title":"Context-Aware Learning and Pattern Decomposition for Temporal Knowledge Graph Reasoning","authors":"Longquan Liao, Linjiang Zheng, Jiaxing Shang, Xu Li, Jiang Zhong, Kaiwen Wei, Yi Tang","doi":"10.1109/tnnls.2025.3631478","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3631478","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"418 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145599046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimal-Flip-Based Segmented Reinforcement Learning for Detectability Synthesis of Probabilistic Boolean Networks
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1109/tnnls.2025.3633687
Zhipeng Zhang, Chenyang Bian, Chengyi Xia, Zengqiang Chen
{"title":"Optimal-Flip-Based Segmented Reinforcement Learning for Detectability Synthesis of Probabilistic Boolean Networks","authors":"Zhipeng Zhang, Chenyang Bian, Chengyi Xia, Zengqiang Chen","doi":"10.1109/tnnls.2025.3633687","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3633687","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"63 1","pages":"1-13"},"PeriodicalIF":10.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145599042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Graph Probabilistic Pooling: From Bernoulli to Poisson Distribution
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-25 | DOI: 10.1109/tnnls.2025.3631730
Guangbu Liu, Yi Lei, Miao Sun, Tong Zhang, Xudong Wang, Chuanwei Zhou, Cheng Long, Zhen Cui
{"title":"Graph Probabilistic Pooling: From Bernoulli to Poisson Distribution","authors":"Guangbu Liu, Yi Lei, Miao Sun, Tong Zhang, Xudong Wang, Chuanwei Zhou, Cheng Long, Zhen Cui","doi":"10.1109/tnnls.2025.3631730","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3631730","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"117 1","pages":"1-12"},"PeriodicalIF":10.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145599039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sparse Bayesian Broad Learning System via Adaptive Lasso Priors for Robust Regression
IF 10.4 | CAS Zone 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-20 | DOI: 10.1109/tnnls.2025.3630247
Tao Chen, Lijie Wang, C. L. Philip Chen
Broad learning system (BLS), as an innovative type of neural network, has demonstrated exceptional performance in regression tasks. Nonetheless, most BLS methods rely on the least squares criterion and are therefore highly sensitive to outliers and noisy data, which reduces prediction accuracy. To improve the robustness of broad networks, this article proposes a sparse Bayesian BLS via adaptive Lasso priors (AL-SBBLS) for regression tasks on data contaminated by outliers and noise. Specifically, adaptive Lasso constraints are first applied to enhance the adaptive sparsity of the output weights, which facilitates the automatic selection of highly correlated features. Subsequently, a multilayer Bayesian framework is constructed to place an adaptive Lasso prior on the output weights, allowing the model to adaptively learn the regularization factors and estimate probability distributions for the output values while further sparsifying the network. By selecting highly correlated features and estimating the probability distributions of the output values, the impact of outliers and noise can be effectively mitigated. To train the networks effectively, corresponding optimization algorithms are designed for AL-SBLS and AL-SBBLS using the alternating direction method of multipliers (ADMM) and variational Bayesian inference, respectively. The effectiveness and robustness of the proposed methods are validated through robust regression experiments on 14 real-world datasets and complex nonlinear data. Quantitative results show that the proposed AL-SBBLS achieves the best performance on most datasets, attaining the lowest average rank of 1.44 in Friedman tests against 11 state-of-the-art BLS variants, confirming its superior predictive accuracy and robustness. The source code of AL-SBBLS is available at: https://github.com/taocheny/AL-SBBLS.
{"title":"Sparse Bayesian Broad Learning System via Adaptive Lasso Priors for Robust Regression.","authors":"Tao Chen,Lijie Wang,C L Philip Chen","doi":"10.1109/tnnls.2025.3630247","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3630247","url":null,"abstract":"Broad learning system (BLS), as an innovative type of neural network, has demonstrated exceptional performance in regression tasks. Nonetheless, the majority of BLS methods, which rely on the least squares criterion, are highly sensitive to outliers and noisy data, resulting in reduced prediction accuracy. To improve the robustness of broad networks, a sparse Bayesian BLS via adaptive Lasso priors (AL-SBBLS) is proposed in this article to handle regression tasks with data contaminated by outliers and noise. Specifically, adaptive Lasso constraints are first applied to enhance the adaptive sparsity of output weights, which facilitates the automatic selection of highly correlated features. Subsequently, a multilayer Bayesian framework is constructed to provide an adaptive Lasso prior to the output weights, allowing the model for the adaptive learning of regularization factors and the estimation of probability distributions for output values, while further sparsifying the network. By selecting highly correlated features and estimating the probability distributions of output values, the impact of outliers and noise can be effectively mitigated. To effectively train the networks, corresponding optimization algorithms are designed for AL-SBLS and AL-SBBLS using the alternating direction method of multipliers (ADMMs) and variational Bayesian inference methods, respectively. The effectiveness and robustness of the proposed methods are validated through robust regression experiments on 14 real-world datasets and complex nonlinear data. Quantitative results demonstrate that the proposed AL-SBBLS achieves the best performance on most datasets, attaining the lowest average ranking of 1.44 in Friedman tests compared with 11 state-of-the-art BLS variants, which confirms its superior predictive accuracy and robustness. The resource code of AL-SBBLS proposed in this article is available at: https://github.com/taocheny/AL-SBBLS.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"1 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145559031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
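The AL-SBBLS abstract above rests on the basic BLS idea of concatenating random feature nodes and nonlinear enhancement nodes and then fitting sparse output weights. The snippet below is a minimal, hypothetical Python sketch of that idea only: it builds random feature/enhancement nodes and fits the output weights with scikit-learn's plain Lasso. It is not the authors' AL-SBBLS, which instead uses adaptive Lasso priors, a multilayer Bayesian framework, and ADMM/variational Bayesian inference; all function names and parameter values here are illustrative assumptions.

```python
# Minimal sketch: broad-learning-style features + L1-penalized output weights.
# NOT the AL-SBBLS algorithm from the paper; for illustration only.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def broad_features(X, n_feature_nodes=40, n_enhance_nodes=60):
    """Map inputs to random feature nodes plus tanh enhancement nodes."""
    d = X.shape[1]
    Wf = rng.standard_normal((d, n_feature_nodes))    # random feature-node weights
    Z = X @ Wf                                         # feature nodes (linear mapping here)
    We = rng.standard_normal((n_feature_nodes, n_enhance_nodes))
    H = np.tanh(Z @ We)                                # nonlinear enhancement nodes
    return np.hstack([Z, H])                           # concatenated broad layer

# Toy regression data with a few injected outliers in the targets.
X = rng.standard_normal((200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + 0.1 * rng.standard_normal(200)
y[:5] += 10.0                                          # simulate outliers

A = broad_features(X)
model = Lasso(alpha=0.05)                              # L1 penalty on output weights -> sparsity
model.fit(A, y)
print("nonzero output weights:", np.count_nonzero(model.coef_), "of", A.shape[1])
```

With the L1 penalty, most of the concatenated node weights are driven to zero, which is the sparsity behavior the abstract attributes to the adaptive Lasso constraints; the Bayesian treatment of the priors and the robustness mechanisms are omitted in this sketch.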