
Neural Networks: Latest Publications

Incremental model-based reinforcement learning with model constraint
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-08 | DOI: 10.1016/j.neunet.2025.107245
Zhiyou Yang, Mingsheng Fu, Hong Qu, Fan Li, Shuqing Shi, Wang Hu
In model-based reinforcement learning (RL), a model of the real environment is estimated from limited data and then used for policy optimization. The policy optimization process in model-based RL is therefore influenced by updates to both the policy and the estimated model. In practice, previous model-based RL methods apply an incremental constraint only to policy updates, which cannot guarantee fully incremental updates and thereby limits performance. To address this issue, we analyze the policy optimization procedure of model-based RL and propose an incremental model-based RL update scheme. The scheme includes an incremental model constraint that guarantees incremental updates to the estimated model, and an incremental policy constraint that ensures incremental updates to the policy. Further, we establish a performance bound between the real environment and the estimated model under this update scheme, which guarantees non-decreasing policy performance in the real environment. To implement the scheme, we develop a simple and efficient model-based RL algorithm known as IMPO (Incremental Model-based Policy Optimization), which leverages previous knowledge to enhance stability during learning. Experimental results across various control benchmarks demonstrate that IMPO significantly outperforms previous state-of-the-art model-based RL methods in both overall performance and sample efficiency.
{"title":"Incremental model-based reinforcement learning with model constraint","authors":"Zhiyou Yang ,&nbsp;Mingsheng Fu ,&nbsp;Hong Qu ,&nbsp;Fan Li ,&nbsp;Shuqing Shi ,&nbsp;Wang Hu","doi":"10.1016/j.neunet.2025.107245","DOIUrl":"10.1016/j.neunet.2025.107245","url":null,"abstract":"<div><div>In model-based reinforcement learning (RL) approaches, the estimated model of a real environment is learned with limited data and then utilized for policy optimization. As a result, the policy optimization process in model-based RL is influenced by both policy and estimated model updates. In practice, previous model-based RL methods only perform incremental policy constraint to policy updates, which cannot assure the complete incremental updates, thereby limiting the algorithm’s performance. To address this issue, we propose an incremental model-based RL update scheme by analyzing the policy optimization procedure of model-based RL. This scheme includes both an incremental model constraint that guarantees incremental updates to the estimated model, and an incremental policy constraint that ensures incremental updates to the policy. Further, we establish a performance bound incorporating the incremental model-based RL update scheme between the real environment and the estimated model, which can assure non-decreasing policy performance improvement in the real environment. To implement the incremental model-based RL update scheme, we develop a simple and efficient model-based RL algorithm known as <strong>IMPO</strong> (<strong>I</strong>ncremental <strong>M</strong>odel-based <strong>P</strong>olicy <strong>O</strong>ptimization), which leverages previous knowledge to enhance stability during the learning process. Experimental results across various control benchmarks demonstrate that IMPO significantly outperforms previous state-of-the-art model-based RL methods in terms of overall performance and sample efficiency.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107245"},"PeriodicalIF":6.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
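The scheme above constrains both updates rather than the policy update alone. As a rough illustration of that idea (a sketch only, not the authors' algorithm, with purely hypothetical names, gradients, and radii), the snippet below applies a trust-region-style projection to each gradient step so that neither the estimated model nor the policy moves more than a radius `epsilon` per update:

```python
import numpy as np

def incremental_update(params, grad, lr, epsilon):
    """Gradient step followed by a projection back into a ball of radius
    epsilon around the previous parameters: a crude stand-in for the
    incremental (KL-style) constraints described in the abstract."""
    new = params - lr * grad
    shift = new - params
    norm = np.linalg.norm(shift)
    if norm > epsilon:                     # update too large: scale it back
        new = params + shift * (epsilon / norm)
    return new

# Hypothetical loop mirroring the two constraints: the estimated model
# and the policy are each updated incrementally at every iteration.
rng = np.random.default_rng(0)
model_params, policy_params = rng.normal(size=8), rng.normal(size=8)
for _ in range(100):
    model_grad = rng.normal(size=8)        # placeholder model-fitting gradient
    policy_grad = rng.normal(size=8)       # placeholder policy gradient
    model_params = incremental_update(model_params, model_grad, 0.05, 0.1)
    policy_params = incremental_update(policy_params, policy_grad, 0.05, 0.1)
```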
S3H: Long-tailed classification via spatial constraint sampling, scalable network, and hybrid task
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-08 | DOI: 10.1016/j.neunet.2025.107247
Wenyi Zhao, Wei Li, Yongqin Tian, Enwen Hu, Wentao Liu, Bin Zhang, Weidong Zhang, Huihua Yang
Long-tailed classification is a significant yet challenging vision task that aims to form clear decision boundaries by integrating semantic consistency with texture characteristics. Unlike prior methods, we design a spatial constraint sampling strategy and a scalable network to bolster the extraction of well-balanced features during training. Simultaneously, we propose a hybrid task to optimize the model, combining single-model classification with complementary cross-model contrastive learning to capture comprehensive features. Concretely, the sampling strategy furnishes the model with spatially constrained samples, encouraging it to integrate high-level semantic and low-level texture features. The scalable network and the hybrid task allow the learned features to be dynamically adjusted so that they remain consistent with the true data distribution. This design dismantles the constraints associated with multi-stage optimization, opening up end-to-end training for long-tailed classification. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the CIFAR10-LT, CIFAR100-LT, ImageNet-LT, and iNaturalist 2018 datasets. The code and model weights will be available at https://github.com/WilyZhao8/S3H
{"title":"S3H: Long-tailed classification via spatial constraint sampling, scalable network, and hybrid task","authors":"Wenyi Zhao ,&nbsp;Wei Li ,&nbsp;Yongqin Tian ,&nbsp;Enwen Hu ,&nbsp;Wentao Liu ,&nbsp;Bin Zhang ,&nbsp;Weidong Zhang ,&nbsp;Huihua Yang","doi":"10.1016/j.neunet.2025.107247","DOIUrl":"10.1016/j.neunet.2025.107247","url":null,"abstract":"<div><div>Long-tailed classification is a significant yet challenging vision task that aims to making the clearest decision boundaries via integrating semantic consistency and texture characteristics. Unlike prior methods, we design spatial constraint sampling and scalable network to bolster the extraction of well-balanced features during training process. Simultaneously, we propose hybrid task to optimize models, which integrates single-model classification and cross-model contrastive learning complementarity to capture comprehensive features. Concretely, the sampling strategy meticulously furnishes the model with spatial constraint samples, encouraging the model to integrate high-level semantic and low-level texture representative features. The scalable network and hybrid task enable the features learned by the model to be dynamically adjusted and consistent with the true data distribution. Such manners effectively dismantle the constraints associated with multi-stage optimization, thereby ushering in innovative possibilities for the end-to-end training of long-tailed classification tasks. Extensive experiments demonstrate that our method achieves state-of-the-art performance on CIFAR10-LT, CIFAR100-LT, ImageNet-LT, and iNaturalist 2018 datasets. The codes and model weights will be available at <span><span>https://github.com/WilyZhao8/S3H</span><svg><path></path></svg></span></div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107247"},"PeriodicalIF":6.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
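The hybrid task pairs single-model classification with cross-model contrastive learning. A minimal PyTorch sketch of such an objective (the function name, the weight `alpha`, and the temperature are illustrative assumptions, not values from the paper) treats the two models' embeddings of the same sample as positives in an InfoNCE-style term:

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, labels, emb_a, emb_b, temperature=0.1, alpha=0.5):
    """Single-model classification loss plus a cross-model InfoNCE term:
    the two models' embeddings of the same sample are positives."""
    ce = F.cross_entropy(logits, labels)
    za = F.normalize(emb_a, dim=1)
    zb = F.normalize(emb_b, dim=1)
    sim = za @ zb.t() / temperature        # (B, B) cross-model similarities
    targets = torch.arange(za.size(0))     # positives sit on the diagonal
    nce = F.cross_entropy(sim, targets)
    return ce + alpha * nce

# Toy call with random tensors standing in for two networks' outputs.
B, C, D = 16, 10, 128
loss = hybrid_loss(torch.randn(B, C), torch.randint(0, C, (B,)),
                   torch.randn(B, D), torch.randn(B, D))
print(loss.item())
```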
PrediRep: Modeling hierarchical predictive coding with an unsupervised deep learning network
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-08 | DOI: 10.1016/j.neunet.2025.107246
Ibrahim C. Hashim, Mario Senden, Rainer Goebel
Hierarchical predictive coding (hPC) provides a compelling framework for understanding how the cortex predicts future sensory inputs by minimizing prediction errors through an internal generative model of the external world. Existing deep learning models inspired by hPC incorporate architectural choices that deviate from core hPC principles, potentially limiting their utility for neuroscientific investigations. We introduce PrediRep (Predicting Representations), a novel deep learning network that adheres more closely to architectural principles of hPC. We validate PrediRep by comparing its functional alignment with hPC to that of existing models after being trained on a next-frame prediction task. Our findings demonstrate that PrediRep, particularly when trained with an all-level loss function (PrediRepAll), exhibits high functional alignment with hPC. In contrast to other contemporary deep learning networks inspired by hPC, it consistently processes input-relevant information at higher hierarchical levels and maintains active representations and accurate predictions across all hierarchical levels. Although PrediRep was designed primarily to serve as a model suitable for neuroscientific research rather than to optimize performance, it nevertheless achieves competitive performance in next-frame prediction while utilizing significantly fewer trainable parameters than alternative models. Our results underscore that even minor architectural deviations from neuroscientific theories like hPC can lead to significant functional discrepancies. By faithfully adhering to hPC principles, PrediRep provides a more accurate tool for in silico exploration of cortical phenomena. PrediRep’s lightweight and biologically plausible design makes it well-suited for future studies aiming to investigate the neural underpinnings of predictive coding and to derive empirically testable predictions.
{"title":"PrediRep: Modeling hierarchical predictive coding with an unsupervised deep learning network","authors":"Ibrahim C. Hashim,&nbsp;Mario Senden,&nbsp;Rainer Goebel","doi":"10.1016/j.neunet.2025.107246","DOIUrl":"10.1016/j.neunet.2025.107246","url":null,"abstract":"<div><div>Hierarchical predictive coding (hPC) provides a compelling framework for understanding how the cortex predicts future sensory inputs by minimizing prediction errors through an internal generative model of the external world. Existing deep learning models inspired by hPC incorporate architectural choices that deviate from core hPC principles, potentially limiting their utility for neuroscientific investigations. We introduce PrediRep (Predicting Representations), a novel deep learning network that adheres more closely to architectural principles of hPC. We validate PrediRep by comparing its functional alignment with hPC to that of existing models after being trained on a next-frame prediction task. Our findings demonstrate that PrediRep, particularly when trained with an all-level loss function (PrediRepAll), exhibits high functional alignment with hPC. In contrast to other contemporary deep learning networks inspired by hPC, it consistently processes input-relevant information at higher hierarchical levels and maintains active representations and accurate predictions across all hierarchical levels. Although PrediRep was designed primarily to serve as a model suitable for neuroscientific research rather than to optimize performance, it nevertheless achieves competitive performance in next-frame prediction while utilizing significantly fewer trainable parameters than alternative models. Our results underscore that even minor architectural deviations from neuroscientific theories like hPC can lead to significant functional discrepancies. By faithfully adhering to hPC principles, PrediRep provides a more accurate tool for in silico exploration of cortical phenomena. PrediRep’s lightweight and biologically plausible design makes it well-suited for future studies aiming to investigate the neural underpinnings of predictive coding and to derive empirically testable predictions.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107246"},"PeriodicalIF":6.0,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
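PrediRep's architecture is not reproduced here, but the shape of an all-level loss, in which every hierarchical level contributes its own prediction error, can be sketched with a toy two-level predictive-coding module (layer sizes and wiring are assumptions):

```python
import torch
import torch.nn as nn

class TwoLevelPC(nn.Module):
    """Toy two-level predictive-coding model: the higher level predicts
    both the next input (level 0 error) and its own next representation
    (level 1 error); the all-level loss sums every error."""
    def __init__(self, d_in=64, d_hid=32):
        super().__init__()
        self.encode = nn.Linear(d_in, d_hid)     # bottom-up encoder
        self.predict0 = nn.Linear(d_hid, d_in)   # top-down prediction of input
        self.predict1 = nn.Linear(d_hid, d_hid)  # prediction of next representation

    def forward(self, x_t, x_next):
        r_t = torch.relu(self.encode(x_t))
        err0 = (self.predict0(r_t) - x_next).pow(2).mean()   # level 0 error
        r_next = torch.relu(self.encode(x_next))
        err1 = (self.predict1(r_t) - r_next).pow(2).mean()   # level 1 error
        return err0 + err1                                   # "all-level" loss

loss = TwoLevelPC()(torch.randn(8, 64), torch.randn(8, 64))
loss.backward()
```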
Bio-inspired two-stage network for efficient RGB-D salient object detection
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-07 | DOI: 10.1016/j.neunet.2025.107244
Peng Ren, Tian Bai, Fuming Sun
Recently, with the development of Convolutional Neural Networks and Vision Transformers, the detection accuracy of RGB-D salient object detection (SOD) models has greatly improved. However, most existing methods cannot balance computational efficiency against performance. In this paper, inspired by the P and M visual pathways in the primate biological visual system, we propose a Bio-inspired Two-stage Network for efficient RGB-D SOD, named BTNet, which simulates the visual information processing of these two pathways. Specifically, BTNet comprises two stages: region locking and object refinement. The region locking stage simulates the visual information processing of the M visual pathway to obtain a coarse-grained visual representation; the object refinement stage simulates that of the P visual pathway to obtain a fine-grained visual representation. Experimental results show that BTNet outperforms other state-of-the-art methods on six mainstream benchmark datasets with a significant parameter reduction, processing 384 × 384 images at 175.4 Frames Per Second (FPS). Compared with the cutting-edge method CPNet, BTNet reduces parameters by 93.6% and is nearly 7.2 times faster. The source code is available at https://github.com/ROC-Star/BTNet.
{"title":"Bio-inspired two-stage network for efficient RGB-D salient object detection","authors":"Peng Ren ,&nbsp;Tian Bai ,&nbsp;Fuming Sun","doi":"10.1016/j.neunet.2025.107244","DOIUrl":"10.1016/j.neunet.2025.107244","url":null,"abstract":"<div><div>Recently, with the development of the Convolutional Neural Network and Vision Transformer, the detection accuracy of the RGB-D salient object detection (SOD) model has been greatly improved. However, most of the existing methods cannot balance computational efficiency and performance well. In this paper, inspired by the P visual pathway and the M visual pathway in the primate biological visual system, we propose a Bio-inspired Two-stage Network for Efficient RGB-D SOD, named BTNet. It simulates the visual information processing of the P visual pathway and the M visual pathway. Specifically, BTNet contains two stages: region locking and object refinement. Among them, the region locking stage simulates the visual information processing process of the M visual pathway to obtain coarse-grained visual representation. The object refinement stage simulates the visual information processing process of the P visual pathway to obtain fine-grained visual representation. Experimental results show that BTNet outperforms other state-of-the-art methods on six mainstream benchmark datasets, achieving significant parameter reduction and processing 384 × 384 resolution images at a speed of 175.4 Frames Per Second (FPS). Compared with the cutting-edge method CPNet, BTNet reduces parameters by 93.6% and is nearly 7.2 times faster. The source codes are available at <span><span>https://github.com/ROC-Star/BTNet</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107244"},"PeriodicalIF":6.0,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
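The two-stage idea, a cheap low-resolution pass that locks candidate regions followed by a full-resolution refinement, can be caricatured as follows; the single-convolution "stages" are placeholders for BTNet's actual sub-networks, and the 4-channel input stands for RGB plus depth:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStageSOD(nn.Module):
    """Illustrative two-stage pipeline (hypothetical layers, not BTNet's):
    a low-resolution stage locks candidate regions (M-pathway analogue),
    a full-resolution stage refines object detail (P-pathway analogue)."""
    def __init__(self):
        super().__init__()
        self.coarse = nn.Conv2d(4, 1, 3, padding=1)   # RGB-D input: 3 + 1 channels
        self.fine = nn.Conv2d(5, 1, 3, padding=1)     # RGB-D + coarse map

    def forward(self, rgbd):
        small = F.interpolate(rgbd, scale_factor=0.25, mode="bilinear",
                              align_corners=False)
        coarse = torch.sigmoid(self.coarse(small))            # region locking
        coarse_up = F.interpolate(coarse, size=rgbd.shape[-2:],
                                  mode="bilinear", align_corners=False)
        fine = torch.sigmoid(self.fine(torch.cat([rgbd, coarse_up], dim=1)))
        return coarse_up, fine                                # object refinement

sal_coarse, sal_fine = TwoStageSOD()(torch.randn(1, 4, 384, 384))
```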
Robust deep learning from weakly dependent data
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-07 | DOI: 10.1016/j.neunet.2025.107227
William Kengne, Modou Wade
Recent developments in deep learning have established theoretical properties of deep neural network estimators. However, most existing works on this topic are restricted to bounded loss functions or to (sub-)Gaussian or bounded variables. This paper considers robust deep learning from weakly dependent observations, with an unbounded loss function and unbounded output. It is only assumed that the output variable has a finite moment of order r, with r > 1. Non-asymptotic bounds on the expected excess risk of the deep neural network estimator are established under strong mixing and ψ-weak dependence assumptions on the observations. We derive a relationship between these bounds and r; when the data have moments of every order, the convergence rate is close to some well-known results. When the target predictor belongs to the class of Hölder-smooth functions with a sufficiently large smoothness index, the rate of the expected excess risk for exponentially strongly mixing data is close to that obtained with i.i.d. samples. Applications to robust nonparametric regression and robust nonparametric autoregression are considered. A simulation study on models with heavy-tailed errors shows that robust estimators with the absolute loss and the Huber loss outperform the least squares method.
{"title":"Robust deep learning from weakly dependent data","authors":"William Kengne ,&nbsp;Modou Wade","doi":"10.1016/j.neunet.2025.107227","DOIUrl":"10.1016/j.neunet.2025.107227","url":null,"abstract":"<div><div>Recent developments on deep learning established some theoretical properties of deep neural networks estimators. However, most of the existing works on this topic are restricted to bounded loss functions or (sub)-Gaussian or bounded variables. This paper considers robust deep learning from weakly dependent observations, with unbounded loss function and unbounded output. It is only assumed that the output variable has a finite <span><math><mi>r</mi></math></span> order moment, with <span><math><mrow><mi>r</mi><mo>&gt;</mo><mn>1</mn></mrow></math></span>. Non asymptotic bounds for the expected excess risk of the deep neural network estimator are established under strong mixing, and <span><math><mi>ψ</mi></math></span>-weak dependence assumptions on the observations. We derive a relationship between these bounds and <span><math><mi>r</mi></math></span>, and when the data have moments of any order, the convergence rate is close to some well-known results. When the target predictor belongs to the class of Hölder smooth functions with sufficiently large smoothness index, the rate of the expected excess risk for exponentially strongly mixing data is close to that obtained with i.i.d. samples. Application to robust nonparametric regression and robust nonparametric autoregression are considered. The simulation study for models with heavy-tailed errors shows that, robust estimators with absolute loss and Huber loss function outperform the least squares method.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107227"},"PeriodicalIF":6.0,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
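The robustness of the absolute and Huber losses against heavy-tailed errors is easy to see numerically: a large residual is penalized linearly rather than quadratically. A small self-contained check:

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails, so a single
    heavy-tailed outlier contributes O(|r|) rather than O(r^2)."""
    a = np.abs(residual)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))

residuals = np.array([0.1, -0.5, 8.0])   # 8.0 mimics a heavy-tailed error
print(huber(residuals))                  # outlier penalized linearly
print(0.5 * residuals**2)                # squared loss blows up on the outlier
print(np.abs(residuals))                 # absolute loss, also linear in the tail
```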
Generative and contrastive graph representation learning with message passing
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-06 | DOI: 10.1016/j.neunet.2025.107224
Ying Tang, Yining Yang, Guodao Sun
Self-supervised graph representation learning (SSGRL) has emerged as a promising approach for graph embeddings because it does not rely on manual labels. SSGRL methods are generally divided into generative and contrastive approaches. Generative methods often suffer from poor graph quality, while contrastive methods, which compare augmented views, are more resistant to noise. However, the performance of contrastive methods depends heavily on well-designed data augmentation and high-quality negative samples. Pure generative or contrastive methods alone cannot balance both robustness and performance. To address these issues, we propose a self-supervised graph representation learning method that integrates generative and contrastive ideas, namely Contrastive Generative Message Passing Graph Learning (CGMP-GL). CGMP-GL incorporates the concept of contrast into the generative model and message aggregation module, enhancing the discriminability of node representations by aligning positive samples and separating negative samples. On one hand, CGMP-GL integrates multi-granularity topology and feature information through cross-view multi-level contrast while reconstructing masked node features. On the other hand, CGMP-GL optimizes node representations through self-supervised contrastive message passing, thereby enhancing model performance in various downstream tasks. Extensive experiments over multiple datasets and downstream tasks demonstrate the effectiveness and robustness of CGMP-GL.
{"title":"Generative and contrastive graph representation learning with message passing","authors":"Ying Tang,&nbsp;Yining Yang,&nbsp;Guodao Sun","doi":"10.1016/j.neunet.2025.107224","DOIUrl":"10.1016/j.neunet.2025.107224","url":null,"abstract":"<div><div>Self-supervised graph representation learning (SSGRL) has emerged as a promising approach for graph embeddings because it does not rely on manual labels. SSGRL methods are generally divided into generative and contrastive approaches. Generative methods often suffer from poor graph quality, while contrastive methods, which compare augmented views, are more resistant to noise. However, the performance of contrastive methods depends heavily on well-designed data augmentation and high-quality negative samples. Pure generative or contrastive methods alone cannot balance both robustness and performance. To address these issues, we propose a self-supervised graph representation learning method that integrates generative and contrastive ideas, namely Contrastive Generative Message Passing Graph Learning (CGMP-GL). CGMP-GL incorporates the concept of contrast into the generative model and message aggregation module, enhancing the discriminability of node representations by aligning positive samples and separating negative samples. On one hand, CGMP-GL integrates multi-granularity topology and feature information through cross-view multi-level contrast while reconstructing masked node features. On the other hand, CGMP-GL optimizes node representations through self-supervised contrastive message passing, thereby enhancing model performance in various downstream tasks. Extensive experiments over multiple datasets and downstream tasks demonstrate the effectiveness and robustness of CGMP-GL.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107224"},"PeriodicalIF":6.0,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
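As a hedged sketch of combining the two objectives described above, a generative masked-feature reconstruction term plus a cross-view contrastive term over node embeddings (tensor shapes, the weight `lam`, and the temperature `tau` are hypothetical, not CGMP-GL's values):

```python
import torch
import torch.nn.functional as F

def generative_contrastive_loss(x, x_recon, mask, z1, z2, tau=0.2, lam=1.0):
    """Generative term: reconstruct the masked node features.
    Contrastive term: align the same node's embeddings across two views
    (InfoNCE with in-batch negatives)."""
    recon = F.mse_loss(x_recon[mask], x[mask])        # masked-feature reconstruction
    p1, p2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = p1 @ p2.t() / tau
    labels = torch.arange(z1.size(0))                 # same node across views = positive
    contrast = F.cross_entropy(logits, labels)
    return recon + lam * contrast

torch.manual_seed(0)
N, D = 32, 16
mask = torch.rand(N) < 0.3                            # ~30% of nodes masked
loss = generative_contrastive_loss(torch.randn(N, D), torch.randn(N, D),
                                   mask, torch.randn(N, 8), torch.randn(N, 8))
print(loss.item())
```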
GARNN: An interpretable graph attentive recurrent neural network for predicting blood glucose levels via multivariate time series
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-05 | DOI: 10.1016/j.neunet.2025.107229
Chengzhe Piao, Taiyu Zhu, Stephanie E. Baldeweg, Paul Taylor, Pantelis Georgiou, Jiahao Sun, Jun Wang, Kezhi Li
Accurate prediction of future blood glucose (BG) levels can effectively improve BG management for people living with type 1 or type 2 diabetes, thereby reducing complications and improving quality of life. The state of the art in BG prediction has been achieved by leveraging advanced deep learning methods to model multimodal data, i.e., sensor data and self-reported event data, organized as multivariate time series (MTS). However, these methods are mostly regarded as "black boxes" and not entirely trusted by clinicians and patients. In this paper, we propose interpretable graph attentive recurrent neural networks (GARNNs) to model MTS, explaining variable contributions by summarizing variable importance and generating feature maps with graph attention mechanisms rather than post-hoc analysis. We evaluate GARNNs on four datasets representing diverse clinical scenarios. Compared with fifteen well-established baseline methods, GARNNs not only achieve the best prediction accuracy but also provide high-quality temporal interpretability, in particular for postprandial glucose levels resulting from meal intake and insulin injection. These findings underline the potential of GARNN as a robust tool for improving diabetes care, bridging the gap between deep learning technology and real-world healthcare solutions.
{"title":"GARNN: An interpretable graph attentive recurrent neural network for predicting blood glucose levels via multivariate time series","authors":"Chengzhe Piao ,&nbsp;Taiyu Zhu ,&nbsp;Stephanie E. Baldeweg ,&nbsp;Paul Taylor ,&nbsp;Pantelis Georgiou ,&nbsp;Jiahao Sun ,&nbsp;Jun Wang ,&nbsp;Kezhi Li","doi":"10.1016/j.neunet.2025.107229","DOIUrl":"10.1016/j.neunet.2025.107229","url":null,"abstract":"<div><div>Accurate prediction of future blood glucose (BG) levels can effectively improve BG management for people living with type 1 or 2 diabetes, thereby reducing complications and improving quality of life. The state of the art of BG prediction has been achieved by leveraging advanced deep learning methods to model multimodal data, i.e., sensor data and self-reported event data, organized as multi-variate time series (MTS). However, these methods are mostly regarded as “black boxes” and not entirely trusted by clinicians and patients. In this paper, we propose interpretable graph attentive recurrent neural networks (GARNNs) to model MTS, explaining variable contributions via summarizing variable importance and generating feature maps by graph attention mechanisms instead of post-hoc analysis. We evaluate GARNNs on four datasets, representing diverse clinical scenarios. Upon comparison with fifteen well-established baseline methods, GARNNs not only achieve the best prediction accuracy but also provide high-quality temporal interpretability, in particular for postprandial glucose levels as a result of corresponding meal intake and insulin injection. These findings underline the potential of GARNN as a robust tool for improving diabetes care, bridging the gap between deep learning technology and real-world healthcare solutions.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107229"},"PeriodicalIF":6.0,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143372942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
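The interpretability mechanism, attention scores that double as variable importance, can be illustrated with a GAT-style score computation over a handful of MTS variables; every dimension and parameter below is made up for the example, and a trained GARNN would learn these weights rather than sample them:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
V, D = 4, 8                       # 4 variables (e.g. glucose, meal, insulin, ...), 8-dim features
h = torch.randn(V, D)             # node features of the variables at one time step
W = torch.randn(D, D)             # hypothetical shared projection
a = torch.randn(2 * D)            # hypothetical attention vector

hw = h @ W
pairs = torch.cat([hw.repeat_interleave(V, 0), hw.repeat(V, 1)], dim=1)
scores = F.leaky_relu(pairs @ a, 0.2).view(V, V)
alpha = F.softmax(scores, dim=1)  # attention each variable pays to the others
importance = alpha.mean(dim=0)    # summarize: average attention each variable receives
print(importance)                 # an interpretable per-variable importance profile
```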
Multi-level social network alignment via adversarial learning and graphlet modeling
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-05 | DOI: 10.1016/j.neunet.2025.107230
Jingyuan Duan, Zhao Kang, Ling Tian, Yichen Xin
Social network alignment, which aims to identify corresponding users across different networks, is significant for numerous downstream applications. Most existing models apply consistency assumptions to undirected networks, ignoring both the platform disparity caused by diverse functionalities and universal directed relations such as follower–followee. Because nodes and relations are indistinguishable, subgraph isomorphism is also unavoidable in neighborhoods. To precisely align directed, attributed social networks, we propose Multi-level Adversarial and Graphlet-based Social Network Alignment (MAGSNA), which treats networks as a whole at the individual level and simultaneously learns discriminative graphlet-based features at the partition level, thereby alleviating both platform disparity and subgraph isomorphism. Specifically, at the individual level, we relieve topology disparity with a random walk with restart, while developing directed weight-sharing network embeddings and a bidirectional optimizer on Wasserstein graph adversarial networks for attribute disparity. At the partition level, we extract overlapping partitions from graphlet orbits, then design weight-sharing partition embeddings and a hubness-aware refinement to derive discriminative features. Fusing the similarities of these two levels yields a precise and thorough alignment. Experiments on real-world and synthetic datasets demonstrate that MAGSNA outperforms state-of-the-art methods, exhibiting competitive efficiency and superior robustness.
{"title":"Multi-level social network alignment via adversarial learning and graphlet modeling","authors":"Jingyuan Duan ,&nbsp;Zhao Kang ,&nbsp;Ling Tian ,&nbsp;Yichen Xin","doi":"10.1016/j.neunet.2025.107230","DOIUrl":"10.1016/j.neunet.2025.107230","url":null,"abstract":"<div><div>Aiming to identify corresponding users in different networks, social network alignment is significant for numerous subsequent applications. Most existing models apply consistency assumptions on undirected networks, ignoring platform disparity caused by diverse functionalities and universal directed relations like follower–followee. Due to indistinguishable nodes and relations, subgraph isomorphism is also unavoidable in neighborhoods. In order to precisely align directed and attributed social networks, we propose the Multi-level Adversarial and Graphlet-based Social Network Alignment (MAGSNA), which unifies networks as a whole at individual-level and learns discriminative graphlet-based features at partition-level simultaneously, thereby alleviating both platform disparity and subgraph isomorphism. Specifically, at individual-level, we relieve topology disparity by the random walk with restart, while developing directed weight-sharing network embeddings and a bidirectional optimizer on Wasserstein graph adversarial networks for attribute disparity. At partition-level, we extract overlapped partitions from graphlet orbits, then design weight-sharing partition embeddings and a hubness-aware refinement to derive discriminative features. By fusing the similarities of these two levels, we obtain a precise and thorough alignment. Experiments on real-world and synthetic datasets demonstrate that MAGSNA outperforms state-of-the-art methods, exhibiting competitive efficiency and superior robustness.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107230"},"PeriodicalIF":6.0,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
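The random walk with restart used at the individual level is a standard procedure; a compact numpy version of the usual fixed-point formulation (the toy graph is illustrative only):

```python
import numpy as np

def random_walk_with_restart(A, seed, restart=0.15, tol=1e-8):
    """Iterate r = (1 - c) * P^T r + c * e to convergence, where P is the
    row-normalized adjacency and e is the one-hot restart distribution
    of the seed node; r scores every node's proximity to the seed."""
    P = A / A.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    e = np.zeros(A.shape[0]); e[seed] = 1.0
    r = e.copy()
    while True:
        r_new = (1 - restart) * P.T @ r + restart * e
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Toy 4-node directed graph (follower-followee style edges).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(random_walk_with_restart(A, seed=0))
```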
Fuzzy spatiotemporal event-triggered control for the synchronization of IT2 T–S fuzzy CVRDNNs with mini-batch machine learning supervision
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-05 | DOI: 10.1016/j.neunet.2025.107220
Shuoting Wang, Kaibo Shi, Jinde Cao, Shiping Wen
This paper is centered on the development of a fuzzy memory-based spatiotemporal event-triggered mechanism (FMSETM) for the synchronization of the drive-response interval type-2 (IT2) Takagi–Sugeno (T–S) fuzzy complex-valued reaction–diffusion neural networks (CVRDNNs). CVRDNNs have a higher processing capability and can perform better than multilayered real-valued RDNNs. Firstly, a general IT2 T–S fuzzy neural network model is constructed by considering complex-valued parameters and the reaction–diffusion terms. Secondly, a mini-batch semi-stochastic machine learning technique is proposed to optimize the maximum sampling period in an FMSETM. Furthermore, by constructing an asymmetric Lyapunov functional (LF) dependent on the membership function (MF), certain symmetric and positive-definite constraints of matrices are removed. The synchronization criteria are derived via linear matrix inequalities (LMIs) for the IT2 T–S fuzzy CVRDNNs. Finally, two numerical examples are utilized to corroborate the feasibility of the developed approach. From the simulation results, it can be seen that introducing machine learning techniques into the synchronization problem of CVRDNNs can improve the efficiency of convergence.
{"title":"Fuzzy spatiotemporal event-triggered control for the synchronization of IT2 T–S fuzzy CVRDNNs with mini-batch machine learning supervision","authors":"Shuoting Wang ,&nbsp;Kaibo Shi ,&nbsp;Jinde Cao ,&nbsp;Shiping Wen","doi":"10.1016/j.neunet.2025.107220","DOIUrl":"10.1016/j.neunet.2025.107220","url":null,"abstract":"<div><div>This paper is centered on the development of a fuzzy memory-based spatiotemporal event-triggered mechanism (FMSETM) for the synchronization of the drive-response interval type-2 (IT2) Takagi–Sugeno (T–S) fuzzy complex-valued reaction–diffusion neural networks (CVRDNNs). CVRDNNs have a higher processing capability and can perform better than multilayered real-valued RDNNs. Firstly, a general IT2 T–S fuzzy neural network model is constructed by considering complex-valued parameters and the reaction–diffusion terms. Secondly, a mini-batch semi-stochastic machine learning technique is proposed to optimize the maximum sampling period in an FMSETM. Furthermore, by constructing an asymmetric Lyapunov functional (LF) dependent on the membership function (MF), certain symmetric and positive-definite constraints of matrices are removed. The synchronization criteria are derived via linear matrix inequalities (LMIs) for the IT2 T–S fuzzy CVRDNNs. Finally, two numerical examples are utilized to corroborate the feasibility of the developed approach. From the simulation results, it can be seen that introducing machine learning techniques into the synchronization problem of CVRDNNs can improve the efficiency of convergence.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107220"},"PeriodicalIF":6.0,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
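Setting the fuzzy and spatiotemporal machinery aside, the core of any event-triggered mechanism is a transmission test. The toy simulation below (not the paper's FMSETM; the dynamics and threshold are invented) re-samples the state only when its deviation from the last transmitted value exceeds a threshold proportional to the state norm, which is how such mechanisms reduce communication:

```python
import numpy as np

sigma = 0.2                   # hypothetical trigger threshold
x = np.array([1.0, -0.5])     # state of a toy system
last_sent = x.copy()          # last transmitted state
events = 0
for k in range(200):
    # Some stable toy dynamics with a small periodic disturbance.
    x = x + 0.01 * (-0.5 * x + 0.05 * np.sin(0.1 * k))
    # Event-triggered condition: transmit only when the error is large.
    if np.linalg.norm(x - last_sent) > sigma * np.linalg.norm(x):
        last_sent = x.copy()  # trigger: sample and transmit the state
        events += 1
print(f"{events} events out of 200 steps")   # far fewer than 200 transmissions
```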
RotInv-PCT: Rotation-Invariant Point Cloud Transformer via feature separation and aggregation
IF 6 | CAS Region 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-02-04 | DOI: 10.1016/j.neunet.2025.107223
Cheng He, Zhenjie Zhao, Xuebo Zhang, Hang Yu, Runhua Wang
The widespread use of point clouds has spurred the rapid development of neural networks for point cloud processing. A crucial property of these networks is that they maintain consistent outputs under random rotations of the input point cloud, namely, rotation invariance. The dominant approach to achieving rotation invariance is to construct local coordinate systems in which invariant local point cloud coordinates are computed. However, this neglects the relative pose relationships between local point cloud structures, degrading network performance. To address this limitation, we propose a novel Rotation-Invariant Point Cloud Transformer (RotInv-PCT). The method extracts abstract local shape features of the point cloud using Local Reference Frames (LRFs) and explicitly computes the spatial relative pose features between local point clouds, both of which are proven to be rotation-invariant. Furthermore, to capture long-range pose dependencies between points, we introduce an innovative Feature Aggregation Transformer (FAT) model, which seamlessly fuses the pose features with the shape features to obtain a globally rotation-invariant representation. Moreover, to manage large-scale point clouds, we use hierarchical random downsampling to gradually reduce the point cloud scale, followed by feature aggregation through the FAT. To demonstrate the effectiveness of RotInv-PCT, we conducted comparative experiments across tasks and datasets: point cloud classification on ScanObjectNN and ModelNet40, part segmentation on ShapeNet, and semantic segmentation on S3DIS and KITTI. Thanks to our provably rotation-invariant features and the FAT, our method generally outperforms state-of-the-art networks. In particular, RotInv-PCT achieved a 2% improvement in real-world point cloud classification over the strongest baseline. Furthermore, in semantic segmentation, we improved performance on the S3DIS dataset by 10% and, for the first time, realized rotation-invariant point cloud semantic segmentation on the KITTI dataset.
{"title":"RotInv-PCT: Rotation-Invariant Point Cloud Transformer via feature separation and aggregation","authors":"Cheng He,&nbsp;Zhenjie Zhao,&nbsp;Xuebo Zhang,&nbsp;Hang Yu,&nbsp;Runhua Wang","doi":"10.1016/j.neunet.2025.107223","DOIUrl":"10.1016/j.neunet.2025.107223","url":null,"abstract":"<div><div>The widespread use of point clouds has spurred the rapid development of neural networks for point cloud processing. A crucial property of these networks is maintaining consistent output results under random rotations of the input point cloud, namely, rotation invariance. The dominant approach achieves rotation invariance is to construct local coordinate systems for computing invariant local point cloud coordinates. However, this method neglects the relative pose relationships between local point cloud structures, leading to a decline in network performance. To address this limitation, we propose a novel Rotation-Invariant Point Cloud Transformer (RotInv-PCT). This method extracts the local abstract shape features of the point cloud using Local Reference Frames (LRFs) and explicitly computes the spatial relative pose features between local point clouds, both of which are proven to be rotation-invariant. Furthermore, to capture the long-range pose dependencies between points, we introduce an innovative Feature Aggregation Transformer (FAT) model, which seamlessly fuses the pose features with the shape features to obtain a globally rotation-invariant representation. Moreover, to manage large-scale point clouds, we utilize hierarchical random downsampling to gradually decrease the scale of point clouds, followed by feature aggregation through FAT. To demonstrate the effectiveness of RotInv-PCT, we conducted comparative experiments across various tasks and datasets, including point cloud classification on ScanObjectNN and ModelNet40, part segmentation on ShapeNet, and semantic segmentation on S3DIS and KITTI. Thanks to our provable rotation-invariant features and FAT, our method generally outperforms state-of-the-art networks. In particular, we highlight that RotInv-PCT achieved a 2% improvement in real-world point cloud classification tasks compared to the strongest baseline. Furthermore, in the semantic segmentation task, we improved the performance on the S3DIS dataset by 10% and, for the first time, realized rotation-invariant point cloud semantic segmentation on the KITTI dataset.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"185 ","pages":"Article 107223"},"PeriodicalIF":6.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143349464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
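The LRF construction underlying such methods can be checked numerically: build a frame from the eigenvectors of a patch's covariance, express the patch in that frame, and the coordinates come out unchanged under a random rotation (up to axis signs, crudely disambiguated here). This is a generic LRF sketch, not RotInv-PCT's implementation:

```python
import numpy as np

def lrf_coordinates(points):
    """Express a local patch in its own Local Reference Frame: the
    eigenvectors of the centered covariance define the axes, so the
    resulting coordinates are invariant to rotations of the input."""
    centered = points - points.mean(axis=0)
    _, vecs = np.linalg.eigh(centered.T @ centered)
    coords = centered @ vecs
    return coords * np.sign(coords.sum(axis=0))   # crude axis-sign disambiguation

rng = np.random.default_rng(1)
patch = rng.normal(size=(64, 3))
# Random rotation via QR decomposition of a Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
same = np.allclose(lrf_coordinates(patch), lrf_coordinates(patch @ Q.T), atol=1e-6)
print(same)  # True: the LRF coordinates are unchanged by the rotation
```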