
Latest publications in IEEE Transactions on Neural Networks and Learning Systems

Adaptive Frequency-Based Constructive Wavelet Neural Network for Nonlinear Dynamic Systems
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-17 | DOI: 10.1109/tnnls.2025.3642820
Dunsheng Huang, Dong Shen, Lei Lu, Ying Tan
Citations: 0
Spurious Local Minima Provably Exist for Deep CNNs: Theory and Application
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-17 | DOI: 10.1109/tnnls.2025.3640733
Bo Liu, Keyi Fu, Tongtong Yuan, Shen Geng
Citations: 0
Fine-Grained Visual Classification via Adaptive Attention Quantization Transformer
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-17 | DOI: 10.1109/tnnls.2025.3643809
Shishi Qiao, Shixian Li, Haiyong Zheng
Citations: 0
AFoCo: Ambiguous Focus and Correction for Semi-Supervised Medical Image Segmentation
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-17 | DOI: 10.1109/tnnls.2025.3642162
Gang Hu, Feng Zhao, Essam H. Houssein
Citations: 0
Diverse Semantic Image Editing With Style Codes
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-12 | DOI: 10.1109/tnnls.2025.3636483
Hakan Sivuk, Aysegul Dundar
Citations: 0
Fractional Gradient Descent With Matrix Stepsizes for Non-Convex Optimization
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-12 | DOI: 10.1109/tnnls.2025.3637535
Alokendu Mazumder, Keshav Vyas, Punit Rathore
Citations: 0
Improved Knowledge Distillation Based on Global Latent Workspace With Multimodal Knowledge Fusion for Understanding Topological Guidance on Wearable Sensor Data
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-12 | DOI: 10.1109/tnnls.2025.3640274
Jinyung Hong, Eun Som Jeon, Matthew P. Buman, Pavan Turaga, Theodore P. Pavlic
Citations: 0
Beyond Implicit Mapping: Advancing Generative Models Through Smoothed Optimal Transport.
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-11 | DOI: 10.1109/tnnls.2025.3638632
Shenghao Li, Lianbao Jin, Zhanpeng Wang, Zebin Xu, Na Lei, Zhongxuan Luo
Optimal transport (OT) has gained significant attention in deep learning as a powerful mathematical tool for transforming distributions. Specifically, in deep generative models, the incorporation of OT helps address issues such as training instability, vanishing gradients, and mode collapse. However, in these models, most of the OT mappings learned by neural networks are typically implicit, making it difficult to explicitly model the relationship between the source and target domains. This limitation reduces the interpretability of the model and hinders its applicability in conditional generation tasks. To address this issue, we introduce Nesterov's smoothing technique to smooth the Brenier potential, enabling the derivation of an explicit OT mapping that serves as the foundation for constructing an advanced generative model. The proposed model offers the following advantages. First, it explicitly captures the mapping between the source and target domains, thereby enhancing the interpretability of the generative process and enabling a novel pathway for conditional sample generation based on a smoothed approximation of OT mapping. Second, the model can generate new samples directly through an explicit OT mapping, eliminating the need for interpolation and rejection sampling commonly seen in traditional methods, thereby improving generation efficiency. Moreover, extensive experiments show that our proposed model achieves superior performance in both unconditional and conditional generation tasks.
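As a rough illustration of the smoothing idea in this abstract (a minimal sketch, not the paper's actual construction): in the semi-discrete case the Brenier potential is a maximum of affine functions, u(x) = max_i (<x, y_i> + h_i), and Nesterov smoothing replaces the max with a scaled log-sum-exp whose gradient is an explicit, differentiable map onto a softmax-weighted combination of the targets. The function and variable names below (smoothed_brenier_map, heights, eps) are illustrative, and in practice the heights h_i would be optimized so that each cell carries its prescribed target mass.

```python
import numpy as np

def smoothed_brenier_map(x, targets, heights, eps=0.1):
    """Explicit map from Nesterov (log-sum-exp) smoothing of a semi-discrete
    Brenier potential u(x) = max_i (<x, y_i> + h_i).

    x        : (d,) source sample
    targets  : (n, d) discrete target support points y_i
    heights  : (n,) potential heights h_i (zeros here; normally optimized)
    eps      : smoothing parameter; smaller eps approaches the hard max
    """
    scores = (targets @ x + heights) / eps      # (n,)
    scores -= scores.max()                      # numerical stability
    w = np.exp(scores)
    w /= w.sum()                                # softmax weights
    # Gradient of eps * logsumexp((<x, y_i> + h_i) / eps) with respect to x:
    # a convex combination of the targets, i.e., a smooth, explicit OT-style map.
    return w @ targets

# Toy usage: map a 2-D source sample toward three target points.
rng = np.random.default_rng(0)
targets = rng.normal(size=(3, 2))
x = rng.normal(size=2)
print(smoothed_brenier_map(x, targets, heights=np.zeros(3), eps=0.05))
```

As eps goes to 0, the softmax weights collapse onto the best-scoring target, recovering the hard, piecewise-constant Brenier map that the smoothing is meant to relax.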
Citations: 0
DuaDiff: Dual-Conditional Diffusion Model for Guided Thermal Image Super-Resolution.
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-11 | DOI: 10.1109/tnnls.2025.3640168
Linrui Shi, Gaochang Wu, Yingqian Wang, Yebin Liu, Tianyou Chai
Thermal imaging offers valuable properties but suffers from inherently low spatial resolution, which can be enhanced using a high-resolution (HR) visible image as guidance. However, the substantial modality differences between thermal and visible images, coupled with significant resolution gaps, pose challenges to existing guided super-resolution (SR) approaches. In this article, we present dual-conditional diffusion (DuaDiff), an innovative diffusion model featuring a dual-conditioning mechanism to enhance guided thermal image SR. Unlike typical conditional diffusion models, DuaDiff integrates a learnable Laplacian pyramid to extract high-frequency details from the visible image, serving as one of the conditioning inputs. By capturing multiscale high-frequency components, DuaDiff effectively focuses on intricate textures and edges in the HR visible images, significantly enhancing thermal image fidelity. Furthermore, we project both thermal and visible images into a semantic latent space, constructing another conditioning input. Leveraging these complementary conditions, DuaDiff employs a multimodal latent feature cross-attention module to facilitate effective interaction between noise, thermal, and visible latent representations. Extensive experiments on the FLIR-ADAS and CATS datasets for $4\times$ and $8\times$ guided SR demonstrate that combining learnable Laplacian conditioning with semantic latent conditioning enables DuaDiff to surpass state-of-the-art methods in both visual quality and metric evaluation, particularly in scenarios with a large resolution gap. In addition, applications to downstream tasks further confirm the capability of DuaDiff to recover high-fidelity semantic information. The code will be released.
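To make the high-frequency conditioning branch concrete, here is a minimal sketch of a learnable Laplacian pyramid in PyTorch; the class and variable names are assumptions chosen for illustration and do not come from the DuaDiff implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableLaplacianPyramid(nn.Module):
    """Trainable Laplacian pyramid that returns multiscale high-frequency
    bands of an image, usable as one conditioning input (illustrative sketch)."""

    def __init__(self, channels=3, levels=3):
        super().__init__()
        self.levels = levels
        # Depthwise low-pass filter, initialized to a 5x5 binomial kernel but
        # left trainable, which is what makes the pyramid "learnable".
        k1d = torch.tensor([1.0, 4.0, 6.0, 4.0, 1.0])
        k2d = torch.outer(k1d, k1d)
        k2d = k2d / k2d.sum()
        self.blur = nn.Conv2d(channels, channels, kernel_size=5,
                              padding=2, groups=channels, bias=False)
        self.blur.weight.data.copy_(k2d.expand(channels, 1, 5, 5))

    def forward(self, x):
        """Return high-frequency residuals, finest scale first."""
        bands = []
        current = x
        for _ in range(self.levels):
            low = self.blur(current)
            down = F.avg_pool2d(low, kernel_size=2)
            up = F.interpolate(down, size=current.shape[-2:],
                               mode="bilinear", align_corners=False)
            bands.append(current - up)  # edges and textures at this scale
            current = down
        return bands

# Usage on a dummy HR visible image.
pyramid = LearnableLaplacianPyramid(channels=3, levels=3)
visible = torch.randn(1, 3, 128, 128)
print([b.shape for b in pyramid(visible)])
```

Each band is the residual between a scale and its blurred, downsampled, and re-upsampled version, so the list isolates edges and textures at progressively coarser scales; in a dual-conditioning setup these bands would serve as one condition alongside the semantic latent described above.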
Citations: 0
Robust Traffic Forecasting With Disentangled Spatiotemporal Graph Neural Networks.
IF 10.4 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-12-11 | DOI: 10.1109/tnnls.2025.3635636
Ting Wang, Rui Luo, Daqian Shi, Hao Deng, Shengjie Zhao
Traffic prediction is a cornerstone of intelligent transportation systems (ITSs). The effectiveness of existing spatiotemporal graph neural networks (STGNNs) heavily relies on the independent identically distributed (i.i.d.) assumption of traffic data, which is frequently violated in practice because of distribution shifts owing to exogenous factors. While learning features that remain stable across all environments is promising for modeling robust frameworks, the fundamental challenge involves the decomposition of invariant features from the dynamic nature of spatiotemporal dependencies. In this article, we propose the disentangled spatiotemporal (DIST) graph neural networks, a novel framework for robust traffic forecasting considering distribution shifts. In DIST, latent invariant variables are explicitly decoupled from dynamically evolving spatiotemporal dependencies, enabling the learning of topology-agnostic representations resilient to distribution shifts. Specifically, we formulate a causality-driven learning objective that guides the separation of invariant variables from various exogenous factors. We then propose a spatiotemporal graph modeling module that can adaptively capture spatiotemporal dependencies in evolving traffic systems. Furthermore, we present a graph perturbation module to simulate topology variations during training, thereby encouraging the model to identify perturbation-sensitive dependencies and infer invariant and variant features for prediction and intervention tasks. The prediction risk and its variance on multiple interventional distributions are minimized in our learning strategy, allowing the model to identify invariant features, thus improving its robustness. The results of comprehensive real-world experiments demonstrate the superiority of our approach. The source code is available: https://github.com/tingwang25/DIST.
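The graph perturbation and risk-variance ideas from this abstract can be sketched as follows; this is a minimal PyTorch illustration under the assumption of an edge-list graph and a generic forecasting model, and every name here (drop_edges, invariant_risk, the penalty weight) is a placeholder rather than part of the released DIST code.

```python
import torch

def drop_edges(edge_index, drop_prob=0.2):
    """Randomly remove edges from a (2, E) edge list to simulate one
    perturbed topology, i.e., one 'interventional environment'."""
    keep = torch.rand(edge_index.size(1)) > drop_prob
    return edge_index[:, keep]

def invariant_risk(model, x, edge_index, y, criterion,
                   n_envs=4, drop_prob=0.2, penalty=1.0):
    """Mean forecasting risk plus the variance of risks across perturbed
    graph views: one simple reading of 'minimize the prediction risk and
    its variance over interventional distributions'. `model(x, edge_index)`
    and `criterion` stand in for a spatiotemporal forecaster and its loss."""
    risks = []
    for _ in range(n_envs):
        perturbed = drop_edges(edge_index, drop_prob)
        risks.append(criterion(model(x, perturbed), y))
    risks = torch.stack(risks)
    return risks.mean() + penalty * risks.var()
```

Penalizing the variance term discourages the model from leaning on edges that only help in some perturbed topologies, which is one simple way to push the learned representation toward the topology-agnostic, invariant features the abstract describes.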
Citations: 0