Pub Date : 2025-12-04 DOI: 10.1109/tnnls.2025.3633573
Akhil S. Anand, Shambhuraj Sawant, Dirk Peter Reinhardt, Sebastien Gros
{"title":"Predicting What Matters: Training AI Models for Better Decisions","authors":"Akhil S. Anand, Shambhuraj Sawant, Dirk Peter Reinhardt, Sebastien Gros","doi":"10.1109/tnnls.2025.3633573","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3633573","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"41 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interlayer Sparse Compression-Based Deep Echo State Network Model and Its Application in Time-Series Forecasting","authors":"Yuxuan Wang, Mingwen Zheng, Yaru Shang, Manman Yuan, Hui Zhao","doi":"10.1109/tnnls.2025.3634741","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3634741","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"14 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-04 DOI: 10.1109/tnnls.2025.3632965
G. Manjunath, A. de Clercq, M. J. Steynberg
{"title":"Universal Set of Observables for Forecasting Physical Systems Through Causal Embedding","authors":"G. Manjunath, A. de Clercq, M. J. Steynberg","doi":"10.1109/tnnls.2025.3632965","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3632965","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"169 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-04 DOI: 10.1109/tnnls.2025.3638370
Minghao Zhou, Hong Wang, Yefeng Zheng, Deyu Meng
{"title":"A Refreshed Similarity-Based Upsampler for Direct High-Ratio Feature Upsampling","authors":"Minghao Zhou, Hong Wang, Yefeng Zheng, Deyu Meng","doi":"10.1109/tnnls.2025.3638370","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3638370","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"48 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145673799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Industrial multiprocess collaborative optimization presents significant challenges due to the intricate spatiotemporal dependencies inherent in modern process industries. Traditional optimization and reinforcement learning methods often treat subprocesses as independent entities, neglecting the fine-grained interdependencies among operational variables across different subprocesses. To address this limitation, we introduce a novel spatiotemporal topology-informed multiprocess collaborative optimization (STI-MCO) framework that pioneers action-level interdependency modeling through an innovative spatiotemporal graph architecture. Rather than treating subprocesses as monolithic entities, STI-MCO operates at the level of individual operational variables, enabling precise representation of both interprocess relationships and intraprocess dependencies through a hierarchical two-stage decision framework. This approach yields more precise coordination through fine-grained variable interactions, better temporal consistency via dynamic graph structures, and enhanced scalability compared with conventional agent-level methods. Extensive simulations and experiments across three benchmark environments with progressively complex topologies demonstrate that this shift from subprocess-level to variable-level collaboration, combined with dynamic graph-based coordination, lets STI-MCO consistently outperform baseline methods, achieving up to 38.9% improvement over centralized methods and 171.9% improvement over existing multiagent strategies. In addition, STI-MCO exhibits superior convergence efficiency, requiring significantly fewer training steps to reach high performance. Its practical applicability is further validated through deployment in a real-world Salt Lake chemical process.
By fundamentally shifting the optimization paradigm from holistic subprocess control to fine-grained variable-level collaboration, this work establishes a new framework for more effective optimization of complex industrial processes, particularly those with strong interunit coupling.
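The variable-level, graph-based coordination described in the abstract above can be illustrated with a minimal toy sketch. Everything here (the variable names, the neighbour-averaging rule, the `alpha` weight) is an illustrative assumption for exposition, not the paper's actual algorithm:

```python
# Hypothetical sketch of variable-level coordination in the spirit of STI-MCO:
# operational variables are graph nodes; edges capture inter- and intraprocess
# dependencies. All names and the blending rule are illustrative assumptions.

def coordinate_actions(raw_actions, edges, alpha=0.5):
    """Blend each variable's raw action with the mean action of its graph
    neighbours -- a stand-in for graph-based, fine-grained coordination."""
    neighbours = {v: [] for v in raw_actions}
    for u, v in edges:
        neighbours[u].append(v)
        neighbours[v].append(u)
    coordinated = {}
    for var, action in raw_actions.items():
        nbrs = neighbours[var]
        if nbrs:
            mean_nbr = sum(raw_actions[n] for n in nbrs) / len(nbrs)
            coordinated[var] = (1 - alpha) * action + alpha * mean_nbr
        else:
            coordinated[var] = action  # isolated variable: keep raw action
    return coordinated

# Two subprocesses, three operational variables; one interprocess edge
# (p1.flow -- p2.flow) and one intraprocess edge (p1.flow -- p1.temp).
actions = {"p1.flow": 1.0, "p1.temp": 0.0, "p2.flow": 2.0}
edges = [("p1.flow", "p1.temp"), ("p1.flow", "p2.flow")]
print(coordinate_actions(actions, edges))
```

The point of the sketch is only the granularity: coordination happens between individual operational variables through graph edges, not between whole subprocess agents.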
{"title":"Spatiotemporal Topology-Informed Multiagent Reinforcement Learning Framework for Structured Multiprocess Collaborative Optimization.","authors":"Diju Liu,Yalin Wang,Chenliang Liu,Biao Luo,Biao Huang","doi":"10.1109/tnnls.2025.3633880","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3633880","url":null,"abstract":"Industrial multiprocess collaborative optimization presents significant challenges due to the intricate spatiotemporal dependencies inherent in modern process industries. Traditional optimization and reinforcement learning methods often treat subprocesses as independent entities, neglecting the fine-grained interdependencies among operational variables across different subprocesses. To address this limitation, we introduce a novel spatiotemporal topology-informed multiprocess collaborative optimization (STI-MCO) framework that pioneers action-level interdependency modeling through an innovative spatiotemporal graph architecture. Rather than treating subprocesses as monolithic entities, STI-MCO operates at the level of individual operational variables, enabling precise representation of both interprocess relationships and intraprocess dependencies through a hierarchical two-stage decision framework. This approach yields more precise coordination through fine-grained variable interactions, better temporal consistency via dynamic graph structures, and enhanced scalability compared with conventional agent-level methods. Extensive simulations and experiments across three benchmark environments with progressively complex topologies demonstrate that this shift from subprocess-level to variable-level collaboration, combined with dynamic graph-based coordination, lets STI-MCO consistently outperform baseline methods, achieving up to 38.9% improvement over centralized methods and 171.9% improvement over existing multiagent strategies. In addition, STI-MCO exhibits superior convergence efficiency, requiring significantly fewer training steps to reach high performance. Its practical applicability is further validated through deployment in a real-world Salt Lake chemical process. By fundamentally shifting the optimization paradigm from holistic subprocess control to fine-grained variable-level collaboration, this work establishes a new framework for more effective optimization of complex industrial processes, particularly those with strong interunit coupling.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"29 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145657028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-02 DOI: 10.1109/tnnls.2025.3637391
Kunjie Yu,Hao Tang,Jing Liang,Chao Li,Mingyuan Yu
Neural architecture search (NAS) has achieved significant success in automating neural network design, particularly through evolutionary NAS. To address the need for efficient architecture discovery across diverse scenarios, such as computer vision and natural language processing, multitask NAS (MT-NAS) methods have emerged. Nevertheless, existing MT-NAS approaches still face critical challenges, including redundant search arising from insufficient exploitation of population historical information across generations and negative transfer caused by unguided interactions between tasks. To address these limitations, a population historical information-driven evolutionary multitask neural architecture search (HIMT-NAS) algorithm is proposed. In each generation, population historical information is recorded, comprising both operation information and topology information. During the search, this historical information is systematically utilized to guide evolutionary search directions, preventing redundant search. Furthermore, the proposed method adjusts the cross-task knowledge transfer probability by measuring task similarity through patterns in the population historical information, and updates the transfer probabilities when the information proves useful across multiple tasks. Extensive experiments on MedMNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate consistent advantages of the proposed method over both single-task NAS methods and recent MT-NAS methods.
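The similarity-guided transfer idea in the abstract above can be sketched minimally: summarize each task's population history as operation-usage counts, compare tasks by cosine similarity, and scale the transfer-probability update by that similarity. The histogram representation, the cosine measure, and the update rule are all illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch of similarity-guided cross-task transfer in the spirit
# of HIMT-NAS. Operation names and the update rule are illustrative.
import math

def task_similarity(hist_a, hist_b):
    """Cosine similarity between two tasks' operation-usage histograms."""
    ops = set(hist_a) | set(hist_b)
    a = [hist_a.get(o, 0) for o in ops]
    b = [hist_b.get(o, 0) for o in ops]
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def update_transfer_prob(p, similarity, transfer_helped, lr=0.1):
    """Raise the transfer probability when transferred architectures improved
    fitness, lower it otherwise; similarity scales the step size."""
    step = lr * similarity
    p = p + step if transfer_helped else p - step
    return min(max(p, 0.05), 0.95)  # floor/ceiling keeps some exploration

# Two tasks whose populations favor similar operations -> high similarity,
# so a successful transfer increases the transfer probability noticeably.
hist_t1 = {"conv3x3": 12, "sep_conv5x5": 4, "skip": 8}
hist_t2 = {"conv3x3": 10, "skip": 6, "max_pool": 2}
sim = task_similarity(hist_t1, hist_t2)
p = update_transfer_prob(0.5, sim, transfer_helped=True)
```

Dissimilar histograms would shrink the step, which is one simple way unguided (and potentially negative) transfer between unrelated tasks can be damped.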
{"title":"Population Historical Information-Driven Evolutionary Multitask Neural Architecture Search.","authors":"Kunjie Yu,Hao Tang,Jing Liang,Chao Li,Mingyuan Yu","doi":"10.1109/tnnls.2025.3637391","DOIUrl":"https://doi.org/10.1109/tnnls.2025.3637391","url":null,"abstract":"Neural architecture search (NAS) has achieved significant success in automating neural network design, particularly through evolutionary NAS. To address the need for efficient architecture discovery across diverse scenarios, such as computer vision and natural language processing, multitask NAS (MT-NAS) methods have emerged. Nevertheless, existing MT-NAS approaches still face critical challenges, including redundant search arising from insufficient exploitation of population historical information across generations and negative transfer caused by unguided interactions between tasks. To address these limitations, a population historical information-driven evolutionary multitask neural architecture search (HIMT-NAS) algorithm is proposed. In each generation, population historical information is recorded, comprising both operation information and topology information. During the search, this historical information is systematically utilized to guide evolutionary search directions, preventing redundant search. Furthermore, the proposed method adjusts the cross-task knowledge transfer probability by measuring task similarity through patterns in the population historical information, and updates the transfer probabilities when the information proves useful across multiple tasks. Extensive experiments on MedMNIST, CIFAR-10, CIFAR-100, and Tiny-ImageNet demonstrate consistent advantages of the proposed method over both single-task NAS methods and recent MT-NAS methods.","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"33 1","pages":""},"PeriodicalIF":10.4,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145656997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-02 DOI: 10.1109/TNNLS.2025.3629911
{"title":"IEEE Transactions on Neural Networks and Learning Systems Information for Authors","authors":"","doi":"10.1109/TNNLS.2025.3629911","DOIUrl":"https://doi.org/10.1109/TNNLS.2025.3629911","url":null,"abstract":"","PeriodicalId":13303,"journal":{"name":"IEEE transactions on neural networks and learning systems","volume":"36 12","pages":"C4-C4"},"PeriodicalIF":8.9,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11272993","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145652144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}