Pub Date: 2025-11-01 | Epub Date: 2025-06-20 | DOI: 10.1142/S0129065725500455
Wang Li, Meichen Xia, Hong Peng, Zhicai Liu, Jun Guo
Although a variety of deep learning-based methods have been introduced for Salient Object Detection (SOD) in RGB and Depth (RGB-D) images, existing approaches still encounter challenges, including inadequate cross-modal feature fusion, significant errors in saliency estimation due to noise in depth information, and limited model generalization. To tackle these challenges, this paper introduces TranSNP-Net, an RGB-D SOD method that integrates Nonlinear Spiking Neural P (NSNP) systems with Transformer networks. TranSNP-Net effectively fuses RGB and depth features through an enhanced feature fusion module (SNPFusion) and an attention mechanism. Unlike traditional methods, TranSNP-Net uses a fine-tuned Swin (shifted window) Transformer as its backbone network, significantly improving the model's generalization performance. Furthermore, the proposed hierarchical feature decoder (SNP-D) notably enhances accuracy in complex scenes where depth noise is prevalent. In the experiments, the mean scores for the four metrics S-measure, F-measure, E-measure, and MAE over six RGB-D benchmark datasets are 0.9328, 0.9356, 0.9558, and 0.0288, respectively. TranSNP-Net outperforms 14 leading methods on these six benchmarks.
"A Salient Object Detection Network Enhanced by Nonlinear Spiking Neural Systems and Transformer." International Journal of Neural Systems, article 2550045.
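The idea behind attention-gated RGB-D fusion (suppressing unreliable depth channels before combining them with RGB features) can be sketched as follows. This is a minimal NumPy illustration, not the authors' SNPFusion module; the gating function and tensor shapes are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(rgb_feat, depth_feat):
    """Fuse RGB and depth feature maps of shape (C, H, W) by gating
    the depth branch with a channel-attention weight derived from its
    own global statistics, so that noisy depth channels contribute less."""
    desc = depth_feat.mean(axis=(1, 2))      # global average pool -> (C,)
    gate = sigmoid(desc)[:, None, None]      # per-channel weight in (0, 1)
    return rgb_feat + gate * depth_feat

rgb = np.ones((4, 8, 8))
depth = np.full((4, 8, 8), -10.0)   # a "noisy" branch with extreme statistics
fused = gated_fusion(rgb, depth)    # gate is tiny, so RGB dominates
```

With the extreme depth statistics above, the gate is close to zero and the fused features stay near the RGB features, which is the qualitative behavior the fusion module aims for.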
Pub Date: 2025-11-01 | Epub Date: 2025-05-19 | DOI: 10.1142/S0129065725500388
Peng Wang, Minglong He, Hong Peng, Zhicai Liu
Thermal and RGB images exhibit significant differences in information representation, especially in low-light or nighttime environments. Thermal images provide temperature information, complementing RGB images by restoring details and contextual information. However, the spatial discrepancy between modalities in RGB-Thermal (RGB-T) semantic segmentation tasks complicates multimodal feature fusion, leading to a loss of spatial contextual information and limited model performance. This paper proposes a channel-space fusion nonlinear spiking neural P system model network (CSPM-SNPNet) to address these challenges. It introduces a novel color-thermal image fusion module to effectively integrate features from both modalities. During decoding, a nonlinear spiking neural P system enhances multi-channel information extraction through convolutional spiking neural P (ConvSNP) operations, fully restoring the features learned in the encoder. Experimental results on the public MFNet and PST900 datasets demonstrate that CSPM-SNPNet significantly improves segmentation performance. Compared with existing methods, it improves mIoU by 0.5% on MFNet and by 1.8% on PST900, showcasing its effectiveness in complex scenes.
"Nonlinear Spiking Neural Systems for Thermal Image Semantic Segmentation Networks." International Journal of Neural Systems, article 2550038.
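The mIoU metric reported above can be computed from a per-class confusion matrix; a minimal sketch (the two-class matrix is an invented toy example):

```python
def miou(confusion):
    """Mean intersection-over-union from a per-class confusion matrix
    (rows = ground truth, cols = prediction). Classes absent from both
    ground truth and prediction are skipped."""
    n = len(confusion)
    ious = []
    for c in range(n):
        tp = confusion[c][c]
        fp = sum(confusion[r][c] for r in range(n)) - tp  # predicted c, wrongly
        fn = sum(confusion[c]) - tp                       # true c, missed
        denom = tp + fp + fn
        if denom:
            ious.append(tp / denom)
    return sum(ious) / len(ious)

# toy two-class case: 80/100 pixels of class 0 and 60/100 of class 1 correct
cm = [[80, 20],
      [40, 60]]
score = miou(cm)   # IoU_0 = 80/140, IoU_1 = 60/120
```

A reported "0.5% improvement in mIoU" is an absolute change in this averaged score.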
Pub Date: 2025-10-01 | Epub Date: 2025-07-16 | DOI: 10.1142/S0129065725500509
Longfei Qi, Shasha Yuan, Feng Li, Junliang Shang, Juan Wang, Shihan Wang
The models used to predict epileptic seizures based on electroencephalogram (EEG) signals often encounter substantial challenges due to the requirement for large, labeled datasets and the inherent complexity of EEG data, which hinder their robustness and generalization capability. This study proposes CLResNet, a framework for predicting epileptic seizures that combines contrastive self-supervised learning with a modified deep residual neural network to address these challenges. In contrast to traditional models, CLResNet uses unlabeled EEG data for pre-training to extract robust feature representations. It is then fine-tuned on a smaller labeled dataset, significantly reducing its reliance on labeled data while improving its efficiency and predictive accuracy. The contrastive learning (CL) framework enhances the model's ability to distinguish between preictal and interictal states, improving its robustness and generalizability. The architecture of CLResNet contains residual connections that enable it to learn deep features of the data and ensure an efficient gradient flow. Evaluated on the CHB-MIT dataset, the model outperformed prevalent methods in the field, with an accuracy of 92.97%, a sensitivity of 94.18%, and a false-positive rate of 0.043/h. On the Siena dataset, the model also achieved competitive performance, with an accuracy of 92.79%, a sensitivity of 91.47%, and a false-positive rate of 0.041/h. These results confirm the effectiveness of CLResNet in addressing variations in EEG data, and show that contrastive self-supervised learning is a robust and accurate approach for predicting seizures.
"A Contrastive Learning-Enhanced Residual Network for Predicting Epileptic Seizures Using EEG Signals." International Journal of Neural Systems, article 2550050.
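The sensitivity and false-positives-per-hour figures above can be computed at the event level as follows; this is a simplified sketch (real seizure-prediction scoring also involves prediction horizons and refractory periods), and the label vectors are invented:

```python
def seizure_metrics(true_labels, pred_labels, hours_interictal):
    """Sensitivity over preictal events (label 1) and false positives
    per hour of interictal recording (label 0)."""
    pairs = list(zip(true_labels, pred_labels))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)
    fpr_per_hour = fp / hours_interictal
    return sensitivity, fpr_per_hour

# 3 preictal events (2 caught), 1 false alarm over 24 h of interictal EEG
sens, fpr = seizure_metrics([1, 1, 1, 0, 0, 0, 0],
                            [1, 1, 0, 1, 0, 0, 0], 24.0)
```

Note that the false-positive rate is normalized by interictal recording time, not by the number of windows, which is why it is reported in units of /h.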
Pub Date: 2025-10-01 | Epub Date: 2025-07-31 | DOI: 10.1142/S0129065725500510
Yu Xue, Keyu Liu, Ferrante Neri
Neural Architecture Search (NAS) automates the design of deep neural networks but remains computationally expensive, particularly in multi-objective settings. Existing predictor-assisted evolutionary NAS methods suffer from slow convergence and rank disorder, which undermines prediction accuracy. To overcome these limitations, we propose CHENAS: a Classifier-assisted multi-objective Hybrid Evolutionary NAS framework. CHENAS combines the global exploration of evolutionary algorithms with the local refinement of gradient-based optimization to accelerate convergence and enhance solution quality. A novel dominance classifier predicts Pareto dominance relationships among candidate architectures, reframing multi-objective optimization as a classification task and mitigating rank disorder. To further improve efficiency, we employ a contrastive learning-based autoencoder that maps architectures into a continuous, structured latent space tailored for dominance prediction. Experiments on several benchmark datasets demonstrate that CHENAS outperforms state-of-the-art NAS approaches in identifying high-performing architectures across multiple objectives. Future work will focus on improving the computational efficiency of the framework and extending it to other application domains.
"Dominant Classifier-assisted Hybrid Evolutionary Multi-objective Neural Architecture Search." International Journal of Neural Systems, article 2550051.
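The Pareto dominance relation that the proposed classifier learns to predict is itself a simple pairwise check; a sketch with all objectives minimized (the error/latency pairing is a hypothetical example, not the paper's objective set):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# e.g. (validation error, latency) pairs for two candidate architectures
assert dominates((0.10, 5.0), (0.12, 6.0))        # better on both objectives
assert not dominates((0.10, 7.0), (0.12, 6.0))    # a trade-off: incomparable
assert not dominates((0.10, 5.0), (0.10, 5.0))    # equal vectors do not dominate
```

Training a classifier on such pairwise labels turns the multi-objective ranking problem into a supervised classification task, which is the reframing the abstract describes.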
Pub Date: 2025-10-01 | Epub Date: 2025-08-04 | DOI: 10.1142/S0129065725500522
Gabriel Rojas-Albarracín, António Pereira, Antonio Fernández-Caballero, María T López
Research areas such as human activity recognition have accelerated thanks to the rapid development of artificial intelligence (AI). However, the lack of data remains a major obstacle to even faster progress. This is particularly true in computer vision, where training a model typically requires at least tens of thousands of images. Moreover, when the activity of interest is far from the usual, such as falls, it is difficult to assemble a sufficiently large dataset. An example is the identification of people suffering from a heart attack. This work therefore proposes a novel approach that relies on generative models to extend image datasets, adapting them to generate more domain-relevant images. To this end, stable diffusion models were refined using low-rank adaptation. A dataset of 100 images of individuals simulating infarct situations and neutral poses was created, annotated, and used. The images generated with the adapted models were evaluated using learned perceptual image patch similarity to test their closeness to the target scenario. The results demonstrate the potential of synthetic datasets, and in particular of the strategy proposed here, to overcome data sparsity in AI-based applications. This approach can not only be more cost-effective than building a dataset in the traditional way, but also reduces the ethical concerns of its applicability in smart environments, health monitoring, and anomaly detection. In fact, all data are owned by the researcher and can be added and modified at any time without requiring additional permissions, streamlining the research.
"Expanding Domain-Specific Datasets with Stable Diffusion Generative Models for Simulating Myocardial Infarction." International Journal of Neural Systems, article 2550052.
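Low-rank adaptation, the refinement technique mentioned above, adds a trainable product of two thin matrices to a frozen pretrained weight; a minimal NumPy sketch of the idea (the dimensions and zero-initialization of B follow the common LoRA recipe, not the paper's exact setup):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 64, 4                       # full width vs. adapter rank (r << d)
W = rng.standard_normal((d, d))    # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))               # B starts at zero, so the adapter
                                   # initially leaves W unchanged

def adapted(x):
    # only A and B are trained; W stays frozen
    return x @ (W + B @ A).T

x = rng.standard_normal((1, d))
assert np.allclose(adapted(x), x @ W.T)   # identity at initialization
```

Here the adapter trains 2·d·r = 512 parameters instead of the d² = 4096 in W, which is why the approach is practical for refining large diffusion models on a 100-image dataset.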
Pub Date: 2025-10-01 | Epub Date: 2025-07-29 | DOI: 10.1142/S0129065725300013
Francisco Portal, Javier De Lope, Manuel Graña
Speech Emotion Recognition (SER) is becoming a key element of speech-based human-computer interfaces, endowing them with some form of empathy towards the emotional status of the human. Transformers have become a central Deep Learning (DL) architecture in natural language processing and signal processing, recently including audio signals for Automatic Speech Recognition (ASR) and SER. A central question addressed in this paper is the achievement of speaker-independent SER systems, i.e. systems that perform independently of a specific training set, enabling their deployment in real-world situations by overcoming the typical limitations of laboratory environments. This paper presents a comprehensive performance evaluation review of transformer architectures that have been proposed to deal with the SER task, carrying out an independent validation at different levels over the most relevant publicly available datasets for validation of SER models. The comprehensive experimental design implemented in this paper provides an accurate picture of the performance achieved by current state-of-the-art transformer models in speaker-independent SER. We have found that most experimental instances reach accuracies below 40% when a model is trained on a dataset and tested on a different one. A speaker-independent evaluation combining up to five datasets and testing on a different one achieves up to 58.85% accuracy. In conclusion, the SER results improved with the aggregation of datasets, indicating that model generalization can be enhanced by extracting data from diverse datasets.
"A Performance Benchmarking Review of Transformers for Speaker-Independent Speech Emotion Recognition." International Journal of Neural Systems, article 2530001.
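The evaluation protocol described above (train on several corpora, test on a held-out one) amounts to a leave-one-corpus-out split; a sketch using common SER corpus names as placeholders (the names are illustrative, not the paper's exact dataset list):

```python
def leave_one_corpus_out(corpora):
    """Yield (train, test) splits in which the model never sees any
    speaker from the held-out corpus, approximating real-world,
    speaker-independent deployment conditions."""
    for held_out in corpora:
        train = [c for c in corpora if c != held_out]
        yield train, held_out

names = ["IEMOCAP", "RAVDESS", "CREMA-D", "EMO-DB", "TESS", "SAVEE"]
splits = list(leave_one_corpus_out(names))   # 6 splits, 5 training corpora each
```

Accuracies reported under this protocol are typically far below within-corpus results, consistent with the sub-40% cross-corpus figures quoted in the review.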
Pub Date: 2025-10-01 | Epub Date: 2025-06-27 | DOI: 10.1142/S0129065725500479
Hussain Ahmad Madni, Hafsa Shujat, Axel De Nardin, Silvia Zottin, Gian Luca Foresti
Accurate anomaly detection in brain Magnetic Resonance Imaging (MRI) is crucial for early diagnosis of neurological disorders, yet remains a significant challenge due to the high heterogeneity of brain abnormalities and the scarcity of annotated data. Traditional one-class classification models require extensive training on normal samples, limiting their adaptability to diverse clinical cases. In this work, we introduce MadIRC, an unsupervised anomaly detection framework that leverages Inter-Realization Channels (IRC) to construct a robust nominal model without any reliance on labeled data. We extensively evaluate MadIRC on brain MRI as the primary application domain, achieving a localization AUROC of 0.96, outperforming state-of-the-art supervised anomaly detection methods. We further validate our approach on liver CT and retinal images to assess its generalizability across medical imaging modalities. Our results demonstrate that MadIRC provides a scalable, label-free solution for brain MRI anomaly detection, offering a promising avenue for integration into real-world clinical workflows.
"Unsupervised Brain MRI Anomaly Detection via Inter-Realization Channels." International Journal of Neural Systems, article 2550047.
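The AUROC figure above can be computed directly from anomaly scores via the rank-sum (Mann-Whitney) formulation; a minimal sketch with invented scores:

```python
def auroc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    anomalous sample (label 1) scores higher than a randomly chosen
    normal one (label 0), counting ties as half."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# one anomaly is ranked below one normal sample -> 5 of 6 pairs correct
score = auroc([0.9, 0.8, 0.7, 0.3, 0.2], [1, 0, 1, 0, 0])
```

This pairwise view makes clear why AUROC is threshold-free: it depends only on the ranking of scores, not on any fixed decision cutoff.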
Pub Date: 2025-09-01 | Epub Date: 2025-06-28 | DOI: 10.1142/S0129065725500480
María Paula Bonomini, Eduardo Ghiglioni, Noelia Belén Ríos
Graph theory has proven useful in studying brain dysfunction in Alzheimer's disease using MagnetoEncephaloGraphy (MEG) and fMRI signals. However, it has not yet been tested sufficiently with reduced electrode sets, as in the 10-20 EEG. In this paper, we applied Graph Spectral Analysis (GSA) techniques to EEG signals from patients with Alzheimer's disease, patients with frontotemporal dementia, and control subjects. A collection of global GSA metrics was computed, accounting for general properties of the adjacency or Laplacian matrices. Regional GSA metrics were also calculated, disentangling centrality measures in five cortical regions (frontal, central, parietal, temporal and occipital). These two sorts of measures were then used in a binary AD/controls classification problem to test their utility in AD diagnosis and to identify the most valuable parameters. The Theta band appeared as the most connected and synchronizable rhythm for all three groups. It was also the rhythm with the most preserved connections among temporal electrodes, exhibiting the shortest average distances among [Formula: see text], [Formula: see text], [Formula: see text] and [Formula: see text]. In addition, Theta emerged as the rhythm with the highest classification performance based on regional parameters according to a [Formula: see text] cross-validation scheme (mean [Formula: see text], mean [Formula: see text] and mean F1-[Formula: see text]). In general, regional parameters produced better classification performances for most of the rhythms, encouraging further investigation into GSA parameters with refined spatial and functional specificity.
"Graph Spectral Analysis Using Electroencephalography in Alzheimer Disease and Frontotemporal Dementia Patients." International Journal of Neural Systems, 35(9), article 2550048.
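The connectivity and synchronizability notions used in GSA derive from the spectrum of the graph Laplacian; a minimal sketch on toy graphs (illustrative only, not the paper's EEG-derived connectivity networks):

```python
import numpy as np

def laplacian_spectrum(adj):
    """Sorted eigenvalues of the graph Laplacian L = D - A. The
    second-smallest eigenvalue (algebraic connectivity) is larger for
    graphs that are better connected and easier to synchronize."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

# path graph 1-2-3 vs. a triangle: the triangle is better connected
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
tri  = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
fiedler_path = laplacian_spectrum(path)[1]   # algebraic connectivity = 1
fiedler_tri  = laplacian_spectrum(tri)[1]    # algebraic connectivity = 3
```

A rhythm described as "the most connected and synchronizable" corresponds to a band-specific network with a larger algebraic connectivity in this sense.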
Pub Date: 2025-09-01 | Epub Date: 2025-06-12 | DOI: 10.1142/S012906572550042X
Pablo Zubasti, Miguel A Patricio, Antonio Berlanga, Jose M Molina
The reduction of dimensionality in machine learning and artificial intelligence problems constitutes a pivotal element in the simplification of models, significantly enhancing both their performance and execution time. This process enables the generation of results more rapidly while also facilitating the scalability and optimization of systems that rely on such models. Two primary approaches are commonly employed to achieve dimensionality reduction: feature selection-based methods and those grounded in feature extraction. In this paper, we propose a distance-correlation feature space, upon which we define a dimensionality reduction algorithm based on space transformations and graph embeddings. This methodology is applied in the context of dementia diagnosis through learning models, with the overarching objective of optimizing the diagnostic process.
{"title":"Optimizing Dementia Diagnosis Through Distance-Correlation Feature Space and Dimensionality Reduction.","authors":"Pablo Zubasti, Miguel A Patricio, Antonio Berlanga, Jose M Molina","doi":"10.1142/S012906572550042X","DOIUrl":"10.1142/S012906572550042X","url":null,"abstract":"<p><p>The reduction of dimensionality in machine learning and artificial intelligence problems constitutes a pivotal element in the simplification of models, significantly enhancing both their performance and execution time. This process enables the generation of results more rapidly while also facilitating the scalability and optimization of systems that rely on such models. Two primary approaches are commonly employed to achieve dimensionality reduction: feature selection-based methods and those grounded in feature extraction. In this paper, we propose a distance-correlation feature space, upon which we define a dimensionality reduction algorithm based on space transformations and graph embeddings. This methodology is applied in the context of dementia diagnosis through learning models, with the overarching objective of optimizing the diagnostic process.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550042"},"PeriodicalIF":0.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144287655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-01 | Epub Date: 2025-06-01 | DOI: 10.1142/S0129065725500443
Juan A Barios, Yolanda Vales, Jose M Catalán, Andrea Blanco-Ivorra, David Martínez-Pascual, Nicolás García-Aracil
Task-oriented rehabilitation is essential for hand function recovery in stroke patients, and recent advancements in BCI-controlled exoskeletons and neural biomarkers - such as post-movement beta rebound (PMBR) - offer new pathways to optimize these therapies. Movement-related EEG signals from the sensorimotor cortex, particularly PMBR (post-movement) and event-related desynchronization (ERD, during movement), exhibit high task specificity and correlate with stroke severity. This study evaluated PMBR in 34 chronic stroke patients across two cohorts, along with a control group of 16 healthy participants, during voluntary and exoskeleton-assisted movement tasks. Longitudinal tracking in the second cohort enabled the analysis of PMBR changes, with EEG recordings acquired at three timepoints over a 30-session rehabilitation program. Findings revealed significant PMBR alterations in both passive and active movement tasks: patients with severe impairment lacked a PMBR dipole in the ipsilesional hemisphere, while moderately impaired patients showed a diminished response. The marked differences in PMBR patterns between stroke patients and controls highlight the extent of sensorimotor cortex disruption due to stroke. ERD showed minimal task-specific variation, underscoring PMBR as a more reliable biomarker of motor function impairment. These findings support the use of PMBR, particularly the PMBR/ERD ratio, as a biomarker for EEG-guided monitoring of motor recovery over time during exoskeleton-assisted rehabilitation.
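ERD and PMBR, and the PMBR/ERD ratio highlighted above, are conventionally quantified as relative beta-band power changes against a pre-movement baseline: a power drop during movement (ERD, negative) and a rebound above baseline afterwards (PMBR, positive). The sketch below assumes that convention on synthetic single-channel data; the function names and the 15-30 Hz beta band are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def bandpower(segment: np.ndarray, fs: float, band=(15.0, 30.0)) -> float:
    """Mean power of a 1-D EEG segment in a frequency band, via the periodogram."""
    freqs = np.fft.rfftfreq(segment.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(segment)) ** 2 / (fs * segment.size)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

def pmbr_erd_ratio(baseline, movement, post, fs=250.0):
    """ERD (% beta-power change during movement) and PMBR (% rebound after it),
    both relative to a pre-movement baseline, plus their ratio."""
    p_base = bandpower(baseline, fs)
    erd = (bandpower(movement, fs) - p_base) / p_base * 100.0   # negative = desync
    pmbr = (bandpower(post, fs) - p_base) / p_base * 100.0      # positive = rebound
    return erd, pmbr, (pmbr / erd if erd != 0 else np.nan)

# Synthetic 20 Hz beta oscillation: attenuated during movement (ERD),
# amplified after movement offset (PMBR).
fs = 250.0
t = np.arange(0, 2, 1 / fs)
beta = lambda amp: amp * np.sin(2 * np.pi * 20.0 * t)
erd, pmbr, ratio = pmbr_erd_ratio(beta(1.0), beta(0.5), beta(1.5), fs)
print(round(erd), round(pmbr))   # power scales with amplitude squared
```

Because power scales with the square of amplitude, halving the beta amplitude yields a 75% desynchronization and a 1.5x amplitude rebound yields +125%; tracking these values per session is the kind of longitudinal monitoring the study proposes.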
{"title":"Post-Movement Beta Rebound for Longitudinal Monitoring of Motor Rehabilitation in Stroke Patients Using an Exoskeleton-Assisted Paradigm.","authors":"Juan A Barios, Yolanda Vales, Jose M Catalán, Andrea Blanco-Ivorra, David Martínez-Pascual, Nicolás García-Aracil","doi":"10.1142/S0129065725500443","DOIUrl":"https://doi.org/10.1142/S0129065725500443","url":null,"abstract":"<p><p>Task-oriented rehabilitation is essential for hand function recovery in stroke patients, and recent advancements in BCI-controlled exoskeletons and neural biomarkers - such as post-movement beta rebound (PMBR) - offer new pathways to optimize these therapies. Movement-related EEG signals from the sensorimotor cortex, particularly PMBR (post-movement) and event-related desynchronization (ERD, during movement), exhibit high task specificity and correlate with stroke severity. This study evaluated PMBR in 34 chronic stroke patients across two cohorts, along with a control group of 16 healthy participants, during voluntary and exoskeleton-assisted movement tasks. Longitudinal tracking in the second cohort enabled the analysis of PMBR changes, with EEG recordings acquired at three timepoints over a 30-session rehabilitation program. Findings revealed significant PMBR alterations in both passive and active movement tasks: patients with severe impairment lacked a PMBR dipole in the ipsilesional hemisphere, while moderately impaired patients showed a diminished response. The marked differences in PMBR patterns between stroke patients and controls highlight the extent of sensorimotor cortex disruption due to stroke. ERD showed minimal task-specific variation, underscoring PMBR as a more reliable biomarker of motor function impairment. 
These findings support the use of PMBR, particularly the PMBR/ERD ratio, as a biomarker for EEG-guided monitoring of motor recovery over time during exoskeleton-assisted rehabilitation.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"35 9","pages":"2550044"},"PeriodicalIF":0.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144585975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}