
Neurocomputing: Latest Publications

HCKGL: Hyperbolic collaborative knowledge graph learning for recommendation
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-03 | DOI: 10.1016/j.neucom.2025.129808
Huijuan Hu , Chaobo He , Xinran Chen , Quanlong Guan
Recently, the integration of knowledge graphs and recommendation systems has become a hot topic. A popular solution first combines the knowledge graph and the user–item interaction graph into a unified Collaborative Knowledge Graph (CKG), and then learns the representations of users and items by applying graph convolutional networks to aggregate high-order neighbor information between entities in the CKG. However, existing methods mainly learn representations in Euclidean space, which makes it difficult to capture the hierarchical structure and intricate relational logic between users and items. In view of this, we propose a novel hyperbolic CKG learning model, HCKGL, for recommendation, which leverages relation-specific curvature and attention-based geometric transformations to preserve the inherent features of the CKG. Additionally, we address two significant challenges that existing methods have often overlooked. First, to capture the relationship dependencies between neighbors and accurately calculate the contribution of neighbor information, we propose a hyperbolic graph attention network (HGAT) that incorporates relation curvature when assigning attention weights. Second, we present a new graph contrastive learning technique (HMCL) that utilizes hyperbolic embedding propagation and multi-level contrastive learning to improve the representations of users and items. Comprehensive experimental results on two widely used datasets demonstrate that HCKGL outperforms state-of-the-art baselines. The source code for our model is publicly available at: https://github.com/GDM-SCNU/HCKGL.
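The relation-curvature-weighted neighbor aggregation described in the abstract can be illustrated with a minimal Poincaré-ball sketch. This is not the paper's formulation: the curvature values, embedding scale, and the way curvature enters the attention score below are illustrative assumptions.

```python
import numpy as np

def mobius_add(x, y, c):
    """Mobius addition on the Poincare ball of curvature -c (c > 0)."""
    xy = float(np.dot(x, y))
    x2, y2 = float(np.dot(x, x)), float(np.dot(y, y))
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def hyperbolic_distance(x, y, c):
    """Geodesic distance between points x and y on the Poincare ball."""
    diff = mobius_add(-x, y, c)
    return (2.0 / np.sqrt(c)) * np.arctanh(np.sqrt(c) * np.linalg.norm(diff))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hyperbolic_attention(center, neighbors, relation_curvatures):
    """Attention over a CKG entity's neighbors: each neighbor is scored by the
    (negative) hyperbolic distance measured with its relation-specific curvature."""
    scores = np.array([-hyperbolic_distance(center, nb, c_r)
                       for nb, c_r in zip(neighbors, relation_curvatures)])
    return softmax(scores)

rng = np.random.default_rng(0)
center = 0.1 * rng.normal(size=8)             # entity embedding (kept inside the ball)
neighbors = 0.1 * rng.normal(size=(4, 8))     # four neighboring entities
curvatures = [0.5, 1.0, 1.0, 2.0]             # one curvature per relation (assumed learnable)
print(hyperbolic_attention(center, neighbors, curvatures))   # weights sum to 1
```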
{"title":"HCKGL: Hyperbolic collaborative knowledge graph learning for recommendation","authors":"Huijuan Hu ,&nbsp;Chaobo He ,&nbsp;Xinran Chen ,&nbsp;Quanlong Guan","doi":"10.1016/j.neucom.2025.129808","DOIUrl":"10.1016/j.neucom.2025.129808","url":null,"abstract":"<div><div>Recently, the integration of knowledge graph and recommendation system has become a hot topic. Its popular solution is firstly combining the knowledge graph and user–item interaction graph to generate a unified Collaborative Knowledge Graph (CKG), and then learn the representations of users and items by applying graph convolutional networks to aggregate high-order neighbor information between entities in CKG. However, existing related methods mainly focus on learning representations in the Euclidean space, posing challenges in capturing the hierarchical structure and intricate relational logic between users and items. In view of this, we propose a novel hyperbolic CKG learning model HCKGL for recommendation, which leverages relation-specific curvature and attention-based geometric transformations to preserve the inherent features of CKG. Additionally, we address two significant challenges that existing methods have often overlooked. Firstly, in order to capture the relationship dependencies between neighbors and accurately calculate the contribution of neighbor information, we propose a hyperbolic graph attention network (HGAT), which combines the curvature of the relationship to assign weights. Secondly, we present a new graph contrastive learning technique (HMCL) that utilizes the hyperbolic embedding propagation and multi-level contrastive learning to improve the representations of users and items. Comprehensive experimental results on two widely used datasets demonstrate that HCKGL outperforms state-of-the-art baselines. The source code for our model is publicly available at: <span><span>https://github.com/GDM-SCNU/HCKGL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"634 ","pages":"Article 129808"},"PeriodicalIF":5.5,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143548141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Bilinear-experts network with self-adaptive sampler for long-tailed visual recognition
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-03 | DOI: 10.1016/j.neucom.2025.129832
Qin Wang , Sam Kwong , Xizhao Wang
Long-tail distributed data hinders the practical application of state-of-the-art deep models in computer vision. Consequently, dedicated methodologies for handling the long-tailed problem have been proposed, focusing on different hierarchies. At the embedding hierarchy, existing works manually augment the diversity of tail-class features for specific datasets. However, prior knowledge about datasets is not always available in practice, so manually fine-tuned augmentation generalizes poorly under such circumstances. To address this problem, we introduce a novel model named Bilinear-Experts Network (BENet) with a Self-Adaptive Sampler (SAS). The model applies model-driven perturbations to tail-class embeddings while preserving generalization capability on head classes through a designed bilinear experts system. The designed perturbations adaptively augment the tail-class space and shift the class boundary away from the tail-class centers. Moreover, we find that SAS automatically assigns larger perturbations to tail classes with relatively fewer training samples, which indicates that SAS is capable of identifying lower-quality tail classes and enhancing them. Experiments conducted across various long-tailed benchmarks validate the competitive performance of the proposed BENet.
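A minimal sketch of the self-adaptive idea: perturbation strength grows as a class's training count shrinks, so scarcer tail classes receive stronger model-driven augmentation. The inverse-square-root scaling and Gaussian noise below are illustrative assumptions, not the paper's exact sampler.

```python
import numpy as np

def adaptive_perturbation_scale(class_counts, base_sigma=0.1):
    """Larger noise for classes with fewer samples (assumed inverse-sqrt scaling)."""
    counts = np.asarray(class_counts, dtype=float)
    return base_sigma * np.sqrt(counts.max() / counts)

def perturb_embeddings(embeddings, labels, class_counts, seed=0):
    """Add class-dependent Gaussian perturbations to feature embeddings."""
    rng = np.random.default_rng(seed)
    sigma = adaptive_perturbation_scale(class_counts)
    noise = rng.normal(size=embeddings.shape) * sigma[labels][:, None]
    return embeddings + noise

# toy long-tailed setup: head class 0 has 1000 samples, tail class 3 has only 10
class_counts = [1000, 300, 50, 10]
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(8, 16))          # 8 embeddings of dimension 16
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(adaptive_perturbation_scale(class_counts))   # tail classes get the largest sigma
augmented = perturb_embeddings(embeddings, labels, class_counts)
```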
{"title":"Bilinear-experts network with self-adaptive sampler for long-tailed visual recognition","authors":"Qin Wang ,&nbsp;Sam Kwong ,&nbsp;Xizhao Wang","doi":"10.1016/j.neucom.2025.129832","DOIUrl":"10.1016/j.neucom.2025.129832","url":null,"abstract":"<div><div>Long-tail distributed data hinders the practical application of state-of-the-art deep models in computer vision. Consequently, exclusive methodologies for handling the long-tailed problem are proposed, focusing on different hierarchies. For embedding hierarchy, existing works manually augment the diversity of tail-class features for specific datasets. However, prior knowledge about datasets is not always available for practical use, which brings unsatisfactory generalization ability in human fine-turned augmentation under such circumstances. To figure out this problem, we introduce a novel model named Bilinear-Experts Network (BENet) with Self-Adaptive Sampler (SAS). This model leverages model-driven perturbations to tail-class embeddings while preserving generalization capability on head classes through a designed bilinear experts system. The designed perturbations adaptively augment tail-class space and shift the class boundary away from the tail-class centers. Moreover, we find that SAS automatically assigns more significant perturbations to specific tail classes with relatively fewer training samples, which indicates SAS is capable of filtering tail classes with lower quality and enhancing them. Also, experiments conducted across various long-tailed benchmarks validate the comparable performance of the proposed BENet.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"633 ","pages":"Article 129832"},"PeriodicalIF":5.5,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143548957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correlation-based switching mean teacher for semi-supervised medical image segmentation
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-03 | DOI: 10.1016/j.neucom.2025.129818
Guiyuhan Deng , Hao Sun , Wei Xie
The mean teacher framework is one of the mainstream approaches in semi-supervised medical image segmentation. In the traditional mean teacher framework, the teacher model and the student model share the same structure and are trained together, with an Exponential Moving Average (EMA) strategy used to update the teacher model. Although the EMA approach facilitates a smooth training process, it causes model coupling and error accumulation. These issues prevent the model from precisely delineating pathological structures, especially in low-contrast regions of medical images. In this paper, we propose a new semi-supervised segmentation model, namely Correlation-based Switching Mean Teacher (CS-MT), which comprises two teacher models and one student model to alleviate these problems. In particular, the two teacher models adopt a switching training strategy at every epoch to avoid convergence toward, and excessive similarity with, the student model. In addition, we introduce a feature correlation module in each model that leverages similarity information in the feature maps to improve the model's predictions. Furthermore, the stochastic CutMix operation can destroy organ structures in medical images, generating adverse mixed results, so we propose an adaptive CutMix strategy to mitigate the negative effects of these mixed results during training. Extensive experiments validate that CS-MT outperforms state-of-the-art semi-supervised methods on the LA, Pancreas-NIH, ACDC and BraTS 2019 datasets.
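A minimal PyTorch sketch of two ingredients named above: the standard EMA teacher update and a per-epoch switch between two teachers so that neither stays permanently coupled to the student. The backbone, decay value, and switching schedule are placeholders, not the paper's exact training loop.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.99):
    """Exponential moving average: teacher <- decay * teacher + (1 - decay) * student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

# placeholder segmentation backbone; the real model would be a U-Net-style network
student = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 2, 1))
teachers = [copy.deepcopy(student), copy.deepcopy(student)]

for epoch in range(4):
    active_teacher = teachers[epoch % 2]   # switch which teacher is used/updated each epoch
    # ... supervised + consistency training of `student` would happen here ...
    ema_update(active_teacher, student, decay=0.99)
    print(f"epoch {epoch}: updated teacher {epoch % 2}")
```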
{"title":"Correlation-based switching mean teacher for semi-supervised medical image segmentation","authors":"Guiyuhan Deng ,&nbsp;Hao Sun ,&nbsp;Wei Xie","doi":"10.1016/j.neucom.2025.129818","DOIUrl":"10.1016/j.neucom.2025.129818","url":null,"abstract":"<div><div>The mean teacher framework is one of the mainstream approaches in semi-supervised medical image segmentation. While training together in the traditional mean teacher framework, the teacher model and the student model share the same structure. An Exponential Moving Average (EMA) updating strategy is applied to optimize the teacher model. Although the EMA approach facilitates a smooth training process, it causes the model coupling and error accumulation problems. These issues constrain the model from precisely delineating the regions of pathological structures, especially for the low-contrast regions in medical images. In this paper, we propose a new semi-supervised segmentation model, namely Correlation-based Switching Mean Teacher (CS-MT), which comprises two teacher models and one student model to alleviate these problems. Particularly, two teacher models adopt a switching training strategy at every epoch to avoid the convergence and similarity between the teacher models and the student model. In addition, we introduce a feature correlation module in each model to leverage the similarity information in the feature maps to improve the model’s predictions. Furthermore, the stochastic process of CutMix operation destroys the structures of organs in medical images, generating adverse mixed results. We propose an adaptive CutMix manner to mitigate the negative effects of these mixed results in model training. Extensive experiments validate that CS-MT outperforms the state-of-the-art semi-supervised methods on the LA, Pancreas-NIH, ACDC and BraTS 2019 datasets.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"633 ","pages":"Article 129818"},"PeriodicalIF":5.5,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143548956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Zero-shot low-dose CT denoising across variable schemes via strip-scanning diffusion models
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-03 | DOI: 10.1016/j.neucom.2025.129828
Bo Su , Jiabo Xu , Xiangyun Hu , Yunfei Zha , Jun Wan , Jiancheng Li
Artifacts and noise in low-dose CT (LDCT) may degrade image quality, potentially impacting subsequent diagnoses. In recent years, supervised image post-processing methods have been extensively studied for their effectiveness in noise reduction. However, clinical conditions often make it difficult to obtain paired normal-dose and low-dose CT images. Additionally, scanning protocols in clinical settings are diverse, requiring different thickness or dose settings, which further complicates and increases the cost of low-dose data collection. These challenges limit the practical application and widespread adoption of supervised methods. This study introduces a novel end-to-end zero-shot strip-scanning diffusion model (SSDiff) that requires only a single model trained on normal-dose CT (NDCT) images to achieve LDCT image denoising across scanning protocols with different slice thicknesses, doses, or devices. The sampling process employs a strip-scanning strategy that combines overlapping strip information with the input LDCT image to solve a maximum a posteriori problem, producing denoising results strip by strip. We use only simple convolutional and attentional architectures and perform extensive experiments on three datasets involving different doses, thicknesses, and devices; the results show that our method outperforms supervised methods in most cases, and visualization and blinded evaluations indicate that our results are very close to NDCT.
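Independently of the diffusion backbone, the strip-scanning idea can be sketched as overlapping strip processing with blended seams. The strip width, overlap, and the placeholder `denoise_strip` function below are illustrative assumptions; a real implementation would run the reverse diffusion / MAP solve inside that function.

```python
import numpy as np

def denoise_strip(strip):
    """Placeholder for the diffusion-based MAP denoiser applied to one strip."""
    return strip  # a real implementation would run the reverse diffusion here

def strip_scan_denoise(image, strip_width=64, overlap=16):
    """Denoise an image strip by strip, blending overlapping columns by averaging."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    weight = np.zeros_like(image, dtype=float)
    start = 0
    while start < w:
        end = min(start + strip_width, w)
        out[:, start:end] += denoise_strip(image[:, start:end])
        weight[:, start:end] += 1.0
        if end == w:
            break
        start = end - overlap            # the next strip re-uses `overlap` columns
    return out / weight

ldct = np.random.default_rng(0).normal(size=(128, 200))
denoised = strip_scan_denoise(ldct)
print(denoised.shape)                    # (128, 200)
```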
{"title":"Zero-shot low-dose CT denoising across variable schemes via strip-scanning diffusion models","authors":"Bo Su ,&nbsp;Jiabo Xu ,&nbsp;Xiangyun Hu ,&nbsp;Yunfei Zha ,&nbsp;Jun Wan ,&nbsp;Jiancheng Li","doi":"10.1016/j.neucom.2025.129828","DOIUrl":"10.1016/j.neucom.2025.129828","url":null,"abstract":"<div><div>Artifacts and noise in low-dose CT (LDCT) may degrade image quality, potentially impacting subsequent diagnoses. In recent years, supervised image post-processing methods have been extensively studied for their effectiveness in noise reduction. However, clinical conditions often make it difficult to obtain paired normal-dose and low-dose CT images. Additionally, scanning protocols in clinical settings are diverse, necessitating different thickness or dose settings, which further complicates and increases the cost of low-dose data collection. These challenges limit the practical application and widespread adoption of supervised methods. This study introduces a novel end-to-end zero-shot strip-scanning diffusion model (SSDiff) that requires only a single model trained on normal-dose CT (NDCT) images to achieve LDCT image denoising across various scanning protocols with different slice thicknesses, doses, or devices. The sampling process employs a strip scanning strategy that combines overlapping strip information and input LDCT images to solve the maximum a posteriori problem to produce denoising results sequentially. We use only simple convolutional and attentional architectures and perform extensive experiments on three different datasets involving different doses, thicknesses, and devices; the results show that our method outperforms supervised methods in most cases, and visualization and blinded evaluations indicate that our method is very close to NDCT.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"633 ","pages":"Article 129828"},"PeriodicalIF":5.5,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143548892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing sentiment analysis with distributional emotion embeddings
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-03 | DOI: 10.1016/j.neucom.2025.129822
Charalampos M. Liapis , Aikaterini Karanikola , Sotiris Kotsiantis
Sentiment classification tasks, such as emotion detection and sentiment analysis, are essential in modern natural language processing (NLP). Moreover, vector representation frameworks modeling semantic content underlie each state-of-the-art NLP algorithmic scheme. In sentiment classification, traditional methods often rely on such embedding vectors for semantic representation, yet they typically overlook the dynamic and sequential nature of emotions within textual data. In this work, we present a novel methodology that leverages the distributional patterns of emotions. An embedding framework that captures the inherent serial structure of emotional occurrences in text is introduced, modeling the interdependencies between emotion states as they unfold within a document. Our approach treats each sentence as an observation in a multivariate series of emotions, transforming the emotional flow of a text into a sequence of emotion strings. By applying distributional logic, emotion-based embeddings that represent both emotional and semantic information are derived. Through a comprehensive experimental framework, we demonstrate the effectiveness of these embeddings across various sentiment-related tasks, including emotion detection, irony identification, and hate speech classification, evaluated on multiple datasets. The results show that our distributional emotion embeddings significantly enhance the performance of sentiment classification models, offering improved generalization across diverse domains such as financial news and climate change discourse. Hence, this work highlights the potential of using distributional emotion embeddings to advance sentiment analysis, offering a more nuanced understanding of emotional language and its structured, context-dependent manifestations.
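A minimal sketch of the pipeline described above: each sentence is mapped to an emotion label (here by a stand-in keyword rule), each document becomes a serial emotion sequence, and distributional co-occurrence statistics over those sequences yield emotion embeddings. The labeling rule, window size, and PPMI-plus-SVD factorization are illustrative assumptions, not the paper's method.

```python
import numpy as np

EMOTIONS = ["joy", "anger", "fear", "sadness", "neutral"]

def sentence_to_emotion(sentence):
    """Stand-in for a per-sentence emotion classifier (keyword rule, for illustration only)."""
    lexicon = {"great": "joy", "hate": "anger", "scared": "fear", "lost": "sadness"}
    for word, emotion in lexicon.items():
        if word in sentence.lower():
            return emotion
    return "neutral"

def document_to_emotion_sequence(doc):
    """Turn a document into the serial 'emotion string' described in the abstract."""
    return [sentence_to_emotion(s) for s in doc.split(".") if s.strip()]

def emotion_embeddings(docs, window=2, dim=3):
    """Distributional embeddings: windowed co-occurrence counts -> PPMI -> truncated SVD."""
    index = {e: i for i, e in enumerate(EMOTIONS)}
    counts = np.zeros((len(EMOTIONS), len(EMOTIONS)))
    for doc in docs:
        seq = document_to_emotion_sequence(doc)
        for i, emotion in enumerate(seq):
            for j in range(max(0, i - window), min(len(seq), i + window + 1)):
                if i != j:
                    counts[index[emotion], index[seq[j]]] += 1
    p = counts / (counts.sum() or 1.0)
    marginal = p.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p / (marginal @ marginal.T))
    ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
    u, s, _ = np.linalg.svd(ppmi)
    return u[:, :dim] * s[:dim]

docs = ["The launch was great. Investors hate the new fees. Markets lost ground.",
        "Traders are scared of a downturn. The recovery was great."]
for emotion, vector in zip(EMOTIONS, np.round(emotion_embeddings(docs), 2)):
    print(emotion, vector)
```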
{"title":"Enhancing sentiment analysis with distributional emotion embeddings","authors":"Charalampos M. Liapis ,&nbsp;Aikaterini Karanikola ,&nbsp;Sotiris Kotsiantis","doi":"10.1016/j.neucom.2025.129822","DOIUrl":"10.1016/j.neucom.2025.129822","url":null,"abstract":"<div><div>Sentiment classification tasks, such as emotion detection and sentiment analysis, are essential in modern natural language processing (NLP). Moreover, vector representation frameworks modeling semantic content underlie each state-of-the-art NLP algorithmic scheme. In sentiment classification, traditional methods often rely on such embedding vectors for semantic representation, yet they typically overlook the dynamic and sequential nature of emotions within textual data. In this work, we present a novel methodology that leverages the distributional patterns of emotions. An embedding framework that captures the inherent serial structure of emotional occurrences in text is introduced, modeling the interdependencies between emotion states as they unfold within a document. Our approach treats each sentence as an observation in a multivariate series of emotions, transforming the emotional flow of a text into a sequence of emotion strings. By applying distributional logic, emotion-based embeddings that represent both emotional and semantic information are derived. Through a comprehensive experimental framework, we demonstrate the effectiveness of these embeddings across various sentiment-related tasks, including emotion detection, irony identification, and hate speech classification, evaluated on multiple datasets. The results show that our distributional emotion embeddings significantly enhance the performance of sentiment classification models, offering improved generalization across diverse domains such as financial news and climate change discourse. Hence, this work highlights the potential of using distributional emotion embeddings to advance sentiment analysis, offering a more nuanced understanding of emotional language and its structured, context-dependent manifestations.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"634 ","pages":"Article 129822"},"PeriodicalIF":5.5,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143577672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CMGN: Text GNN and RWKV MLP-mixer combined with cross-feature fusion for fake news detection
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-03 | DOI: 10.1016/j.neucom.2025.129811
ShaoDong Cui, Kaibo Duan, Wen Ma, Hiroyuki Shinnou
With the rapid development of social media, the influence and harm of fake news have gradually increased, making accurate detection of fake news particularly important. Current fake news detection methods primarily rely on the main text of the news, neglecting the interrelationships between additional texts. We propose a cross-feature fusion network with additional text graph construction to address this issue and improve fake news detection. Specifically, we utilize a text graph neural network (GNN) to model the graph relationships of additional texts to enhance the model’s perception capabilities. Additionally, we employ the RWKV MLP-mixer to process the news text and design a cross-feature fusion mechanism to achieve mutual fusion of different features, thereby improving fake news detection. Experiments on the LIAR, FA-KES, IFND, and CHEF datasets demonstrate that our proposed model outperforms existing methods in fake news detection.
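A minimal PyTorch sketch of mutual cross-feature fusion between the two streams named above (graph features from the additional texts and sequence features from the main text), using symmetric cross-attention. The dimensions and the use of `nn.MultiheadAttention` are assumptions; the paper's text GNN and RWKV MLP-mixer are represented only by placeholder tensors.

```python
import torch
import torch.nn as nn

class CrossFeatureFusion(nn.Module):
    """Each stream attends to the other; the two attended views are pooled and concatenated."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.text_to_graph = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.graph_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)   # fake vs. real

    def forward(self, text_feats, graph_feats):
        t_attended, _ = self.text_to_graph(text_feats, graph_feats, graph_feats)
        g_attended, _ = self.graph_to_text(graph_feats, text_feats, text_feats)
        fused = torch.cat([t_attended.mean(dim=1), g_attended.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# placeholder features: batch of 2 news items, 10 text tokens / 5 graph nodes, dim 64
text_feats = torch.randn(2, 10, 64)    # would come from the RWKV MLP-mixer
graph_feats = torch.randn(2, 5, 64)    # would come from the text GNN
logits = CrossFeatureFusion()(text_feats, graph_feats)
print(logits.shape)                    # torch.Size([2, 2])
```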
{"title":"CMGN: Text GNN and RWKV MLP-mixer combined with cross-feature fusion for fake news detection","authors":"ShaoDong Cui,&nbsp;Kaibo Duan,&nbsp;Wen Ma,&nbsp;Hiroyuki Shinnou","doi":"10.1016/j.neucom.2025.129811","DOIUrl":"10.1016/j.neucom.2025.129811","url":null,"abstract":"<div><div>With the rapid development of social media, the influence and harm of fake news have gradually increased, making accurate detection of fake news particularly important. Current fake news detection methods primarily rely on the main text of the news, neglecting the interrelationships between additional texts. We propose a cross-feature fusion network with additional text graph construction to address this issue and improve fake news detection. Specifically, we utilize a text graph neural network (GNN) to model the graph relationships of additional texts to enhance the model’s perception capabilities. Additionally, we employ the RWKV MLP-mixer to process the news text and design a cross-feature fusion mechanism to achieve mutual fusion of different features, thereby improving fake news detection. Experiments on the LIAR, FA-KES, IFND, and CHEF datasets demonstrate that our proposed model outperforms existing methods in fake news detection.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"633 ","pages":"Article 129811"},"PeriodicalIF":5.5,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143534633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel multi-scale salient object detection framework utilizing nonlinear spiking neural P systems
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-03 | DOI: 10.1016/j.neucom.2025.129821
Nan Zhou, Minglong He, Hong Peng, Zhicai Liu
Salient object detection (SOD) is fundamental to computer vision applications ranging from autonomous driving and surveillance to medical imaging. Despite significant progress, existing methods struggle to effectively model multi-scale features and their complex interdependencies, particularly in challenging real-world scenarios with complex backgrounds and varying scales. To address these limitations, this paper proposes a novel detection framework that leverages the hierarchical processing capabilities of nonlinear spiking neural P (NSNP) systems. The proposed framework introduces three key innovations: a bio-inspired convolution mechanism that captures fine-grained local features with neural dynamics; a semantic learning module enhanced by Contextual Transformer Attention for comprehensive global context understanding; and an adaptive mixed attention-based fusion strategy that optimizes cross-scale feature integration. The experimental results on four challenging benchmark datasets demonstrate that the proposed method outperforms fourteen other state-of-the-art methods, achieving average improvements of 1.02%, 1.3%, 2.3%, and 0.1% on the four evaluation metrics (S_m, E_ξ^m, F_β^w, and MAE), respectively. These advances validate the potential of spiking neural P systems in salient object detection, while opening new possibilities for bio-inspired approaches in visual computing.
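The NSNP-based convolution is specific to spiking neural P systems and is not reproduced here; as an illustration of the third component only, the sketch below shows a generic adaptive mixed (channel + spatial) attention gate for fusing two feature scales. All module shapes and the gating design are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedAttentionFusion(nn.Module):
    """Fuse a local (fine) and a semantic (coarse) feature map with channel + spatial gates."""
    def __init__(self, channels):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, fine, coarse):
        coarse = F.interpolate(coarse, size=fine.shape[-2:], mode="bilinear",
                               align_corners=False)
        mixed = fine + coarse
        gated = mixed * self.channel_gate(mixed)     # re-weight channels
        return gated * self.spatial_gate(gated)      # re-weight spatial positions

fine = torch.randn(1, 32, 64, 64)      # high-resolution, local features
coarse = torch.randn(1, 32, 32, 32)    # low-resolution, semantic features
fused = MixedAttentionFusion(32)(fine, coarse)
print(fused.shape)                     # torch.Size([1, 32, 64, 64])
```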
{"title":"A novel multi-scale salient object detection framework utilizing nonlinear spiking neural P systems","authors":"Nan Zhou,&nbsp;Minglong He,&nbsp;Hong Peng,&nbsp;Zhicai Liu","doi":"10.1016/j.neucom.2025.129821","DOIUrl":"10.1016/j.neucom.2025.129821","url":null,"abstract":"<div><div>Salient object detection (SOD) is fundamental to computer vision applications ranging from autonomous driving and surveillance to medical imaging. Despite significant progress, existing methods struggle to effectively model multi-scale features and their complex interdependencies, particularly in challenging real-world scenarios with complex backgrounds and varying scales. To address these limitations, this paper proposes a novel detection framework that leverages the hierarchical processing capabilities of nonlinear spiking neural P (NSNP) systems. The proposed framework introduces three key innovations: a bio-inspired convolution mechanism that captures fine-grained local features with neural dynamics; a semantic learning module enhanced by Contextual Transformer Attention for comprehensive global context understanding; and an adaptive mixed attention-based fusion strategy that optimizes cross-scale feature integration. The experimental results on four challenging benchmark datasets demonstrate that the proposed method outperforms fourteen other state-of-the-art methods, achieving average improvements of 1.02%, 1.3%, 2.3%, and 0.1% on the four evaluation metrics (<span><math><msub><mrow><mi>S</mi></mrow><mrow><mi>m</mi></mrow></msub></math></span>, <span><math><msubsup><mrow><mi>E</mi></mrow><mrow><mi>ξ</mi></mrow><mrow><mi>m</mi></mrow></msubsup></math></span>, <span><math><msubsup><mrow><mi>F</mi></mrow><mrow><mi>β</mi></mrow><mrow><mi>w</mi></mrow></msubsup></math></span>, and <span><math><mrow><mi>M</mi><mi>A</mi><mi>E</mi></mrow></math></span>), respectively. These advances validate the potential of spiking neural P systems in salient object detection, while opening new possibilities for bio-inspired approaches in visual computing.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"634 ","pages":"Article 129821"},"PeriodicalIF":5.5,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143561944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A bioinspired model of decision making guided by reward dimensions and a motivational state
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-02 | DOI: 10.1016/j.neucom.2025.129806
Diana G. Gómez-Martínez , Alison Muñoz-Capote , Oscar Hernández , Francisco Robles , Félix Ramos
The decision-making process is a critical component of computational systems; it involves evaluating various alternatives presented as possible solutions to a given problem, depending on the current context. This paper seeks to show how a neuroscience-based decision-making mechanism (DMM) that integrates decision criteria, knowledge of reward stimuli, and motivational information contributes to producing human-like adaptive behavior. To fulfill this objective, a computational model of the DMM is proposed. Alternatives in the proposed model are constructed based on preferences, and the selection of the best alternative is guided by a goal-directed control scheme influenced by a motivational state (MS). The formation of preferences considers several dimensions of the reward, e.g., magnitude, probability of receiving the reward, incentive salience, and affective value. To validate that the model exhibits behavior based on parameters humans use to guide their own behavior, a case study was conducted. Its objective is to gain the maximum reward (food) by choosing among four cards (a variation of the Iowa Gambling Task), each associated with a reward and a contingency probability. The analysis of the results shows that, consistent with previous studies, the model exhibits a short exploitation stage in which it finds the contingency rule and frequently chooses the best option; it was also observed that the utility value of the chosen card influenced the hunger MS, and that other factors play a critical role in the DMM.
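A minimal sketch of the preference computation described above: each card's subjective value combines reward magnitude, contingency probability, and an affective/incentive term, modulated by a hunger-like motivational state, and choices are sampled with a softmax. The weighting rule and all parameter values are illustrative assumptions, not the paper's model.

```python
import numpy as np

def card_preferences(magnitude, probability, affective, hunger, temperature=1.0):
    """Subjective utility per card, scaled by the motivational state, turned into choice probs."""
    utility = hunger * magnitude * probability + affective   # assumed combination rule
    z = (utility - utility.max()) / temperature
    e = np.exp(z)
    return utility, e / e.sum()

rng = np.random.default_rng(0)
magnitude   = np.array([10.0, 5.0, 8.0, 2.0])   # food reward per card
probability = np.array([0.2, 0.9, 0.5, 0.95])   # contingency probability per card
affective   = np.array([0.5, 0.0, 0.2, 0.0])    # learned affective value (assumed)

for hunger in (0.2, 1.0):                       # low vs. high motivational state
    utility, probs = card_preferences(magnitude, probability, affective, hunger)
    choice = rng.choice(4, p=probs)
    print(f"hunger={hunger}: utilities={np.round(utility, 2)}, chose card {choice}")
```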
{"title":"A bioinspired model of decision making guided by reward dimensions and a motivational state","authors":"Diana G. Gómez-Martínez ,&nbsp;Alison Muñoz-Capote ,&nbsp;Oscar Hernández ,&nbsp;Francisco Robles ,&nbsp;Félix Ramos","doi":"10.1016/j.neucom.2025.129806","DOIUrl":"10.1016/j.neucom.2025.129806","url":null,"abstract":"<div><div>The decision-making process is a critical component of computational systems, whose processing involves the evaluation of various alternatives presented as possible solutions to a given problem, depending on the current context. This paper seeks to show how a neuroscience-based decision-making mechanism (DMM) integrating decision criteria, knowledge of reward stimuli, and motivational information helps to contribute to producing human-like adaptive behavior. To fulfill this objective, a computational model on DMM is proposed. The alternatives in this proposed model are constructed based on preferences, and the selection of the best alternative is guided by a goal-directed control scheme influenced by a motivational state (MS). The formation of preferences considers some dimensions of the reward, e.g., magnitude, probability of receiving the reward, incentive salience, and affective value. To validate the model exhibits a behavior considering parameters human being uses to compute its behavior, a case study was proposed. The case study’s objective is to gain the maximum reward (food) from the choice of a 4-choice card (a variation of Iowa Gambling Test), each card has a reward and a contingency probability associated with it. The analysis of the results of the case study shows that the model presents a short exploitation stage to find the contingency rule and choose the best option frequently according to some studies, also observed that the utility value of the card influenced the MS of hunger and other factors play a critical role in the DMM.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"634 ","pages":"Article 129806"},"PeriodicalIF":5.5,"publicationDate":"2025-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143561942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
H-SGANet: Hybrid sparse graph attention network for deformable medical image registration
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-02 | DOI: 10.1016/j.neucom.2025.129810
Yufeng Zhou, Wenming Cao
The integration of Convolutional Neural Networks (ConvNets) and Transformers has become a strong candidate for image registration, combining the strengths of both models and utilizing a large parameter space. However, this hybrid model, which treats brain MRI volumes as grid or sequence structures, struggles to accurately represent anatomical connectivity, diverse brain regions, and critical connections within the brain’s architecture. There are also concerns about the computational expense and GPU memory usage of this model. To address these issues, we propose a lightweight hybrid sparse graph attention network (H-SGANet). The network includes Sparse Graph Attention (SGA), a core mechanism based on Vision Graph Neural Networks (ViG) with predefined anatomical connections. The SGA module expands the model’s receptive field and integrates seamlessly into the network. To further enhance the hybrid network, Separable Self-Attention (SSA) is used as an advanced token mixer, combined with depth-wise convolution to form SSAFormer. This strategic integration is designed to more effectively extract long-range dependencies. As a hybrid ConvNet-ViG-Transformer model, H-SGANet offers three key benefits for volumetric medical image registration. It optimizes fixed and moving images simultaneously through a hybrid feature fusion layer and an end-to-end learning framework. Compared to VoxelMorph, a model with a similar parameter count, H-SGANet demonstrates significant performance enhancements of 3.5% and 1.5% in Dice score on the OASIS dataset and LPBA40 dataset, respectively. The code is publicly available at https://github.com/2250432015/H-SGANet/.
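A minimal PyTorch sketch of sparse graph attention over a predefined adjacency: attention scores are computed only where the (assumed) anatomical-connection mask allows an edge, and are set to -inf elsewhere before the softmax. The adjacency pattern and dimensions are placeholders, not the paper's anatomical graph.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseGraphAttention(nn.Module):
    """Single-head attention restricted to the edges of a predefined adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x, adjacency):
        # x: (nodes, dim); adjacency: (nodes, nodes) boolean mask of allowed edges
        scores = self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5
        scores = scores.masked_fill(~adjacency, float("-inf"))
        return F.softmax(scores, dim=-1) @ self.v(x)

# 6 "anatomical regions" with a hand-written sparse connection pattern (placeholder)
adjacency = torch.tensor([
    [1, 1, 0, 0, 0, 1],
    [1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 1, 1],
    [1, 0, 0, 0, 1, 1],
], dtype=torch.bool)
features = torch.randn(6, 16)
out = SparseGraphAttention(16)(features, adjacency)
print(out.shape)    # torch.Size([6, 16])
```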
{"title":"H-SGANet: Hybrid sparse graph attention network for deformable medical image registration","authors":"Yufeng Zhou,&nbsp;Wenming Cao","doi":"10.1016/j.neucom.2025.129810","DOIUrl":"10.1016/j.neucom.2025.129810","url":null,"abstract":"<div><div>The integration of Convolutional Neural Networks (ConvNets) and Transformers has become a strong candidate for image registration, combining the strengths of both models and utilizing a large parameter space. However, this hybrid model, which treats brain MRI volumes as grid or sequence structures, struggles to accurately represent anatomical connectivity, diverse brain regions, and critical connections within the brain’s architecture. There are also concerns about the computational expense and GPU memory usage of this model. To address these issues, we propose a lightweight hybrid sparse graph attention network (H-SGANet). The network includes Sparse Graph Attention (SGA), a core mechanism based on Vision Graph Neural Networks (ViG) with predefined anatomical connections. The SGA module expands the model’s receptive field and integrates seamlessly into the network. To further enhance the hybrid network, Separable Self-Attention (SSA) is used as an advanced token mixer, combined with depth-wise convolution to form SSAFormer. This strategic integration is designed to more effectively extract long-range dependencies. As a hybrid ConvNet-ViG-Transformer model, H-SGANet offers three key benefits for volumetric medical image registration. It optimizes fixed and moving images simultaneously through a hybrid feature fusion layer and an end-to-end learning framework. Compared to VoxelMorph, a model with a similar parameter count, H-SGANet demonstrates significant performance enhancements of 3.5% and 1.5% in Dice score on the OASIS dataset and LPBA40 dataset, respectively. The code is publicly available at <span><span>https://github.com/2250432015/H-SGANet/</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"633 ","pages":"Article 129810"},"PeriodicalIF":5.5,"publicationDate":"2025-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143548893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fairness in constrained spectral clustering
IF 5.5 | CAS Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-03-01 | DOI: 10.1016/j.neucom.2025.129815
Laxita Agrawal , V. Vijaya Saradhi , Teena Sharma
Semi-supervised clustering methods have gained significant attention in both theoretical research and real-world applications, including economics, finance, marketing, and healthcare. Among these methods, constrained spectral clustering enhances clustering quality by incorporating pairwise constraints, namely, must-link and cannot-link constraints, which guide the clustering process by specifying whether certain data points should or should not belong to the same cluster. However, traditional constrained spectral clustering methods may inadvertently propagate biases present in the data or constraints, leading to unequal representation of sensitive groups, such as different genders or racial groups, across clusters. This imbalance raises concerns about fairness, an issue that remains largely unexplored in constrained spectral clustering. To address this gap, this paper proposes a novel method named fair-constrained Spectral Clustering (fair-cSC). The proposed method integrates fairness into the must-link and cannot-link constraints by defining a fair constraint matrix, ensuring that pairwise relationships do not introduce bias against any particular group. Additionally, a balance constraint is incorporated to enforce fairness across input data points, promoting equal representation of sensitive groups within clusters. Comprehensive experiments on six benchmarked datasets, including ablation studies, demonstrate that the proposed fair-cSC method effectively enhances fairness while preserving clustering quality. Furthermore, the ablation study provides insights into the method’s performance under different settings, reinforcing its robustness and applicability in real-world scenarios.
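A minimal sketch of the group-balance idea only: the spectral embedding is restricted to the nullspace of a group-membership constraint matrix so that clusters inherit roughly the overall group proportions, following the standard balance-constraint construction for fair spectral clustering. The graph construction and the must-link/cannot-link handling of fair-cSC are not reproduced here.

```python
import numpy as np
from scipy.linalg import null_space
from sklearn.cluster import KMeans

def fair_spectral_clustering(W, groups, k):
    """Spectral clustering with a group-balance (fairness) constraint.

    W: (n, n) symmetric affinity matrix; groups: (n,) integer sensitive-group labels.
    """
    L = np.diag(W.sum(axis=1)) - W                      # unnormalized graph Laplacian
    # Constraint matrix: one column per group (minus one redundant group); entries are
    # the deviation of group membership from the group's overall proportion.
    unique = np.unique(groups)
    F = np.stack([(groups == g).astype(float) - np.mean(groups == g)
                  for g in unique[:-1]], axis=1)
    Z = null_space(F.T)                                 # embeddings orthogonal to the constraints
    vals, vecs = np.linalg.eigh(Z.T @ L @ Z)            # smallest eigenvectors of Z^T L Z
    H = Z @ vecs[:, :k]
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(H)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))   # RBF affinities
groups = np.tile([0, 1], 20)                                  # alternating sensitive groups
labels = fair_spectral_clustering(W, groups, k=2)
print(np.bincount(labels))                                    # cluster sizes
```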
{"title":"Fairness in constrained spectral clustering","authors":"Laxita Agrawal ,&nbsp;V. Vijaya Saradhi ,&nbsp;Teena Sharma","doi":"10.1016/j.neucom.2025.129815","DOIUrl":"10.1016/j.neucom.2025.129815","url":null,"abstract":"<div><div>Semi-supervised clustering methods have gained significant attention in both theoretical research and real-world applications, including economics, finance, marketing, and healthcare. Among these methods, constrained spectral clustering enhances clustering quality by incorporating pairwise constraints, namely, must-link and cannot-link constraints, which guide the clustering process by specifying whether certain data points should or should not belong to the same cluster. However, traditional constrained spectral clustering methods may inadvertently propagate biases present in the data or constraints, leading to unequal representation of sensitive groups, such as different genders or racial groups, across clusters. This imbalance raises concerns about fairness, an issue that remains largely unexplored in constrained spectral clustering. To address this gap, this paper proposes a novel method named fair-constrained Spectral Clustering (fair-cSC). The proposed method integrates fairness into the must-link and cannot-link constraints by defining a fair constraint matrix, ensuring that pairwise relationships do not introduce bias against any particular group. Additionally, a balance constraint is incorporated to enforce fairness across input data points, promoting equal representation of sensitive groups within clusters. Comprehensive experiments on six benchmarked datasets, including ablation studies, demonstrate that the proposed fair-cSC method effectively enhances fairness while preserving clustering quality. Furthermore, the ablation study provides insights into the method’s performance under different settings, reinforcing its robustness and applicability in real-world scenarios.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"634 ","pages":"Article 129815"},"PeriodicalIF":5.5,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143548142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0