Pub Date: 2024-12-01 | Epub Date: 2024-09-23 | DOI: 10.1142/S012906572482001X
Han Sun
{"title":"The 2024 Hojjat Adeli Award for Outstanding Contributions in Neural Systems.","authors":"Han Sun","doi":"10.1142/S012906572482001X","DOIUrl":"10.1142/S012906572482001X","url":null,"abstract":"","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2482001"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-09-30 | DOI: 10.1142/S0129065724500655
Haozhou Cui, Xiangwen Zhong, Haotian Li, Chuanyu Li, Xingchen Dong, Dezan Ji, Landi He, Weidong Zhou
A real-time and reliable automatic detection system for epileptic seizures holds significant value in assisting physicians with the rapid diagnosis and treatment of epilepsy. To address this issue, a novel lightweight model called the Convolutional Neural Network-Reformer (CNN-Reformer) is proposed for seizure detection on long-term EEG. The CNN-Reformer consists of two main parts: the Data Reshaping (DR) module and the Efficient Attention and Concentration (EAC) module. This framework reduces network parameters while retaining effective feature extraction from multi-channel EEGs, thereby improving computational efficiency and real-time performance. Initially, the raw EEG signals undergo a Discrete Wavelet Transform (DWT) for signal filtering and are then fed into the DR module for data compression and reshaping while preserving local features. Subsequently, these local features are sent to the EAC module to extract global features and perform classification. Post-processing involving sliding-window averaging, thresholding, and collar techniques is further deployed to reduce the false detection rate (FDR) and improve detection performance. On the CHB-MIT scalp EEG dataset, our method achieves an average sensitivity of 97.57%, accuracy of 98.09%, and specificity of 98.11% at the segment-based level, and a sensitivity of 96.81%, an FDR of 0.27/h, and a latency of 17.81 s at the event-based level. On the SH-SDU dataset we collected, our method yields a segment-based sensitivity of 94.51%, specificity of 92.83%, and accuracy of 92.81%, along with an event-based sensitivity of 94.11%. The average testing time for 1 h of multi-channel EEG signals is 1.92 s. These results and the fast computational speed of the CNN-Reformer demonstrate its potential for efficient seizure detection.
{"title":"A Lightweight Convolutional Neural Network-Reformer Model for Efficient Epileptic Seizure Detection.","authors":"Haozhou Cui, Xiangwen Zhong, Haotian Li, Chuanyu Li, Xingchen Dong, Dezan Ji, Landi He, Weidong Zhou","doi":"10.1142/S0129065724500655","DOIUrl":"10.1142/S0129065724500655","url":null,"abstract":"<p><p>A real-time and reliable automatic detection system for epileptic seizures holds significant value in assisting physicians with rapid diagnosis and treatment of epilepsy. Aiming to address this issue, a novel lightweight model called Convolutional Neural Network-Reformer (CNN-Reformer) is proposed for seizure detection on long-term EEG. The CNN-Reformer consists of two main parts: the Data Reshaping (DR) module and the Efficient Attention and Concentration (EAC) module. This framework reduces network parameters while retaining effective feature extraction of multi-channel EEGs, thereby improving model computational efficiency and real-time performance. Initially, the raw EEG signals undergo Discrete Wavelet Transform (DWT) for signal filtering, and then fed into the DR module for data compression and reshaping while preserving local features. Subsequently, these local features are sent to the EAC module to extract global features and perform categorization. Post-processing involving sliding window averaging, thresholding, and collar techniques is further deployed to reduce the false detection rate (FDR) and improve detection performance. On the CHB-MIT scalp EEG dataset, our method achieves an average sensitivity of 97.57%, accuracy of 98.09%, and specificity of 98.11% at segment-based level, and a sensitivity of 96.81%, along with FDR of 0.27/h, and latency of 17.81 s at the event-based level. On the SH-SDU dataset we collected, our method yielded segment-based sensitivity of 94.51%, specificity of 92.83%, and accuracy of 92.81%, along with event-based sensitivity of 94.11%. The average testing time for 1[Formula: see text]h of multi-channel EEG signals is 1.92[Formula: see text]s. The excellent results and fast computational speed of the CNN-Reformer model demonstrate its potential for efficient seizure detection.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450065"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-09-23 | DOI: 10.1142/S0129065724500643
Siyan Sun, Peng Wang, Hong Peng, Zhicai Liu
Referring image segmentation aims to accurately align image pixels and text features for object segmentation based on natural language descriptions. This paper proposes NSNPRIS (convolutional nonlinear spiking neural P systems for referring image segmentation), a novel model based on convolutional nonlinear spiking neural P systems. NSNPRIS features NSNPFusion and Language Gate modules that enhance feature interaction during encoding, along with an NSNPDecoder for feature alignment and decoding. Experimental results on the RefCOCO, RefCOCO+, and G-Ref datasets demonstrate that NSNPRIS outperforms mainstream methods. Our contributions include advances in the alignment of pixel and textual features and improved segmentation accuracy.
{"title":"Referring Image Segmentation with Multi-Modal Feature Interaction and Alignment Based on Convolutional Nonlinear Spiking Neural Membrane Systems.","authors":"Siyan Sun, Peng Wang, Hong Peng, Zhicai Liu","doi":"10.1142/S0129065724500643","DOIUrl":"10.1142/S0129065724500643","url":null,"abstract":"<p><p>Referring image segmentation aims to accurately align image pixels and text features for object segmentation based on natural language descriptions. This paper proposes NSNPRIS (convolutional nonlinear spiking neural P systems for referring image segmentation), a novel model based on convolutional nonlinear spiking neural P systems. NSNPRIS features NSNPFusion and Language Gate modules to enhance feature interaction during encoding, along with an NSNPDecoder for feature alignment and decoding. Experimental results on RefCOCO, RefCOCO[Formula: see text], and G-Ref datasets demonstrate that NSNPRIS performs better than mainstream methods. Our contributions include advances in the alignment of pixel and textual features and the improvement of segmentation accuracy.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450064"},"PeriodicalIF":0.0,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142304880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-30 | DOI: 10.1142/S0129065724500710
Chenchen Cheng, Yunbo Shi, Yan Liu, Bo You, Yuanfeng Zhou, Ardalan Aarabi, Yakang Dai
Interictal epileptiform spikes (spikes) and the epileptogenic focus are strongly correlated. However, a portion of spikes are insensitive to the epileptogenic focus, which restricts epilepsy neurosurgery. Therefore, identifying spike subtypes that are strongly associated with the epileptogenic focus (traceable spikes) could facilitate their use as reliable signal sources for accurately tracing the epileptogenic focus. However, the sparse firing phenomenon in the transmission of intracranial neuronal discharges leads to differences within spikes that cannot be observed visually, so neuro-electrophysiologists are unable to identify the traceable spikes that could accurately locate the epileptogenic focus. Herein, we propose a novel sparse spike feature learning method to recognize traceable spikes and extract discriminative information related to the epileptogenic focus. First, a multilevel eigensystem feature representation was derived by a multilevel feature representation module to express the intrinsic properties of a spike. Second, a sparse feature learning module expressed a multi-domain context representation of sparse spikes to extract sparse spike feature representations; within it, a sparse spike encoding strategy was implemented to effectively simulate the sparse firing phenomenon and accurately encode the activity of intracranial neural sources. The sensitivity of the proposed method was 97.1%, demonstrating its effectiveness and efficiency relative to other state-of-the-art methods.
{"title":"Sparse Spike Feature Learning to Recognize Traceable Interictal Epileptiform Spikes.","authors":"Chenchen Cheng, Yunbo Shi, Yan Liu, Bo You, Yuanfeng Zhou, Ardalan Aarabi, Yakang Dai","doi":"10.1142/S0129065724500710","DOIUrl":"https://doi.org/10.1142/S0129065724500710","url":null,"abstract":"<p><p>Interictal epileptiform spikes (spikes) and epileptogenic focus are strongly correlated. However, partial spikes are insensitive to epileptogenic focus, which restricts epilepsy neurosurgery. Therefore, identifying spike subtypes that are strongly associated with epileptogenic focus (traceable spikes) could facilitate their use as reliable signal sources for accurately tracing epileptogenic focus. However, the sparse firing phenomenon in the transmission of intracranial neuronal discharges leads to differences within spikes that cannot be observed visually. Therefore, neuro-electro-physiologists are unable to identify traceable spikes that could accurately locate epileptogenic focus. Herein, we propose a novel sparse spike feature learning method to recognize traceable spikes and extract discrimination information related to epileptogenic focus. First, a multilevel eigensystem feature representation was determined based on a multilevel feature representation module to express the intrinsic properties of a spike. Second, the sparse feature learning module expressed the sparse spike multi-domain context feature representation to extract sparse spike feature representations. Among them, a sparse spike encoding strategy was implemented to effectively simulate the sparse firing phenomenon for the accurate encoding of the activity of intracranial neurosources. The sensitivity of the proposed method was 97.1%, demonstrating its effectiveness and significant efficiency relative to other state-of-the-art methods.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450071"},"PeriodicalIF":0.0,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142756066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This research presents a robust adversarial method for anomaly detection in real-world scenarios, leveraging the power of generative adversarial networks (GANs) through cycle consistency in the reconstruction error. Traditional approaches often falter due to high variance in class-wise accuracy, rendering them ineffective across different anomaly types. Our proposed model addresses these challenges by introducing an innovative flow of information in the training procedure and integrating it as a new discriminator into the framework, thereby optimizing the training dynamics. Furthermore, it employs a supplementary distribution in the input space to steer reconstructions toward the normal data distribution. This adjustment distinctly isolates anomalous instances and enhances detection precision. In addition, two anomaly scoring mechanisms were developed to augment detection capabilities. Comprehensive evaluations on six varied datasets confirm that our model outperforms one-class anomaly detection benchmarks. The implementation is openly accessible to the academic community on GitHub.
{"title":"Anomaly Detection Using Complete Cycle Consistent Generative Adversarial Network.","authors":"Zahra Dehghanian, Saeed Saravani, Maryam Amirmazlaghani, Mohamad Rahmati","doi":"10.1142/S0129065725500042","DOIUrl":"https://doi.org/10.1142/S0129065725500042","url":null,"abstract":"<p><p>This research presents a robust adversarial method for anomaly detection in real-world scenarios, leveraging the power of generative adversarial neural networks (GANs) through cycle consistency in reconstruction error. Traditional approaches often falter due to high variance in class-wise accuracy, rendering them ineffective across different anomaly types. Our proposed model addresses these challenges by introducing an innovative flow of information in the training procedure and integrating it as a new discriminator into the framework, thereby optimizing the training dynamics. Furthermore, it employs a supplementary distribution in the input space to steer reconstructions toward the normal data distribution. This adjustment distinctly isolates anomalous instances and enhances detection precision. Also, two unique anomaly scoring mechanisms were developed to augment detection capabilities. Comprehensive evaluations on six varied datasets have confirmed that our model outperforms one-class anomaly detection benchmarks. The implementation is openly accessible to the academic community, available on Github.<sup>a</sup>.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550004"},"PeriodicalIF":0.0,"publicationDate":"2024-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142776000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-20 | DOI: 10.1142/S0129065725500029
Romeo Lanzino, Danilo Avola, Federico Fontana, Luigi Cinque, Francesco Scarcello, Gian Luca Foresti
This study presents a Subject-Aware Transformer-based neural network designed for the Electroencephalogram (EEG) Emotion Recognition task (SATEER), which entails the analysis of EEG signals to classify and interpret human emotional states. SATEER processes the EEG waveforms by transforming them into Mel spectrograms, which can be seen as images whose number of channels equals the number of electrodes used during recording; this type of data can thus be processed using a Computer Vision pipeline. Distinct from preceding approaches, the model addresses the variability in individual responses to identical stimuli by incorporating a User Embedder module. This module enables the association of individual profiles with their EEGs, thereby enhancing classification accuracy. The efficacy of the model was rigorously evaluated on four publicly available datasets, demonstrating superior performance over existing methods in all conducted benchmarks. For instance, on the AMIGOS dataset (A dataset for Multimodal research of affect, personality traits, and mood on Individuals and GrOupS), SATEER's accuracy exceeds 99.8% across all labels, an improvement of 0.47% over the state of the art. Furthermore, an exhaustive ablation study underscores the pivotal role of the User Embedder module and of every other component of the model in achieving these advancements.
{"title":"SATEER: Subject-Aware Transformer for EEG-Based Emotion Recognition.","authors":"Romeo Lanzino, Danilo Avola, Federico Fontana, Luigi Cinque, Francesco Scarcello, Gian Luca Foresti","doi":"10.1142/S0129065725500029","DOIUrl":"10.1142/S0129065725500029","url":null,"abstract":"<p><p>This study presents a Subject-Aware Transformer-based neural network designed for the Electroencephalogram (EEG) Emotion Recognition task (SATEER), which entails the analysis of EEG signals to classify and interpret human emotional states. SATEER processes the EEG waveforms by transforming them into Mel spectrograms, which can be seen as particular cases of images with the number of channels equal to the number of electrodes used during the recording process; this type of data can thus be processed using a Computer Vision pipeline. Distinct from preceding approaches, this model addresses the variability in individual responses to identical stimuli by incorporating a User Embedder module. This module enables the association of individual profiles with their EEGs, thereby enhancing classification accuracy. The efficacy of the model was rigorously evaluated using four publicly available datasets, demonstrating superior performance over existing methods in all conducted benchmarks. For instance, on the AMIGOS dataset (A dataset for Multimodal research of affect, personality traits, and mood on Individuals and GrOupS), SATEER's accuracy exceeds 99.8% accuracy across all labels and showcases an improvement of 0.47% over the state of the art. Furthermore, an exhaustive ablation study underscores the pivotal role of the User Embedder module and each other component of the presented model in achieving these advancements.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550002"},"PeriodicalIF":0.0,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seizures have a serious impact on the physical function and daily life of epileptic patients. The automated detection of seizures can assist clinicians in taking preventive measures for patients during the diagnosis process. Combining a deep learning (DL) model with a convolutional neural network (CNN) and a transformer network can effectively extract both local and global features, resulting in improved seizure detection performance. In this study, an enhanced transformer network named Inresformer is proposed for seizure detection; it is combined with Inception and Residual networks to extract features of electroencephalography (EEG) signals at different scales and enrich the feature representation. In addition, the improved transformer network replaces the existing feedforward layers with two half-step feedforward layers to enhance the nonlinear representation of the model. The proposed architecture uses the discrete wavelet transform (DWT) to decompose the original EEG signals, and three sub-bands are selected for signal reconstruction. Then, the Co-MixUp method is adopted to address data imbalance, and the processed signals are sent to the Inresformer network for seizure information capture and recognition. Finally, discriminant fusion is performed on the results of the three-scale EEG sub-signals to achieve the final seizure recognition. The proposed network achieves the best accuracy of 100% on the Bonn dataset and an average accuracy of 98.03%, sensitivity of 95.65%, and specificity of 98.57% on the long-term CHB-MIT dataset. Compared with existing DL networks, the proposed method offers competitive performance and significant potential for clinical research and diagnosis applications.
{"title":"A Modified Transformer Network for Seizure Detection Using EEG Signals.","authors":"Wenrong Hu, Juan Wang, Feng Li, Daohui Ge, Yuxia Wang, Qingwei Jia, Shasha Yuan","doi":"10.1142/S0129065725500030","DOIUrl":"10.1142/S0129065725500030","url":null,"abstract":"<p><p>Seizures have a serious impact on the physical function and daily life of epileptic patients. The automated detection of seizures can assist clinicians in taking preventive measures for patients during the diagnosis process. The combination of deep learning (DL) model with convolutional neural network (CNN) and transformer network can effectively extract both local and global features, resulting in improved seizure detection performance. In this study, an enhanced transformer network named Inresformer is proposed for seizure detection, which is combined with Inception and Residual network extracting different scale features of electroencephalography (EEG) signals to enrich the feature representation. In addition, the improved transformer network replaces the existing Feedforward layers with two half-step Feedforward layers to enhance the nonlinear representation of the model. The proposed architecture utilizes discrete wavelet transform (DWT) to decompose the original EEG signals, and the three sub-bands are selected for signal reconstruction. Then, the Co-MixUp method is adopted to solve the problem of data imbalance, and the processed signals are sent to the Inresformer network for seizure information capture and recognition. Finally, discriminant fusion is performed on the results of three-scale EEG sub-signals to achieve final seizure recognition. The proposed network achieves the best accuracy of 100% on Bonn dataset and the average accuracy of 98.03%, sensitivity of 95.65%, and specificity of 98.57% on the long-term CHB-MIT dataset. Compared to the existing DL networks, the proposed method holds significant potential for clinical research and diagnosis applications with competitive performance.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550003"},"PeriodicalIF":0.0,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although the density peak clustering (DPC) algorithm can effectively distribute samples and quickly identify noise points, it lacks adaptability and cannot account for the local data structure. In addition, clustering algorithms generally suffer from high time complexity. Prior research suggests that clustering algorithms grounded in P systems can mitigate time-complexity concerns. Within the realm of membrane systems (P systems), spiking neural P systems (SN P systems), inspired by biological nervous systems, are third-generation neural networks that possess intricate structures and offer substantial parallelism advantages. Thus, this study first improved DPC by introducing the maximum nearest-neighbor distance and K-nearest neighbors (KNN). Moreover, a method based on delayed spiking neural P systems (DSN P systems) was proposed to improve the performance of the algorithm, yielding the DSNP-ANDPC algorithm. The effectiveness of DSNP-ANDPC was evaluated through comprehensive experiments on four synthetic datasets and 10 real-world datasets. The proposed method outperformed the comparison methods in most cases.
{"title":"A Delayed Spiking Neural Membrane System for Adaptive Nearest Neighbor-Based Density Peak Clustering.","authors":"Qianqian Ren, Lianlian Zhang, Shaoyi Liu, Jin-Xing Liu, Junliang Shang, Xiyu Liu","doi":"10.1142/S0129065724500503","DOIUrl":"10.1142/S0129065724500503","url":null,"abstract":"<p><p>Although the density peak clustering (DPC) algorithm can effectively distribute samples and quickly identify noise points, it lacks adaptability and cannot consider the local data structure. In addition, clustering algorithms generally suffer from high time complexity. Prior research suggests that clustering algorithms grounded in P systems can mitigate time complexity concerns. Within the realm of membrane systems (P systems), spiking neural P systems (SN P systems), inspired by biological nervous systems, are third-generation neural networks that possess intricate structures and offer substantial parallelism advantages. Thus, this study first improved the DPC by introducing the maximum nearest neighbor distance and K-nearest neighbors (KNN). Moreover, a method based on delayed spiking neural P systems (DSN P systems) was proposed to improve the performance of the algorithm. Subsequently, the DSNP-ANDPC algorithm was proposed. The effectiveness of DSNP-ANDPC was evaluated through comprehensive evaluations across four synthetic datasets and 10 real-world datasets. The proposed method outperformed the other comparison methods in most cases.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450050"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141556258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-01 | Epub Date: 2024-07-17 | DOI: 10.1142/S0129065724500539
Changxu Dong, Dengdi Sun
Recently, Graph Neural Networks (GNNs) have gained widespread application in automatic brain network classification tasks, owing to their ability to directly capture crucial information in non-Euclidean structures. However, two primary challenges persist in this domain. First, within the realm of clinical neuro-medicine, signals from cerebral regions are inevitably contaminated with noise stemming from physiological or external factors. The construction of brain networks relies heavily on preset thresholds and feature information within brain regions, making it susceptible to incorporating such noise into the brain topology. Additionally, the static nature of the artificially constructed brain network's adjacency structure prevents it from reflecting real-time changes in brain topology. Second, mainstream GNN-based approaches tend to focus solely on capturing information interactions between nearest-neighbor nodes, overlooking high-order topological features. In response to these challenges, we propose an adaptive unsupervised Spatial-Temporal Dynamic Hypergraph Information Bottleneck (ST-DHIB) framework for dynamically optimizing brain networks. Specifically, adopting an information-theoretic perspective, the Graph Information Bottleneck (GIB) is employed to purify the graph structure and dynamically update the processed input brain signals. From a graph-theoretic standpoint, we utilize the designed Hypergraph Neural Network (HGNN) and a Bi-LSTM to capture higher-order spatial-temporal context associations among brain channels. Comprehensive patient-specific and cross-patient experiments were conducted on two available datasets. The results demonstrate the advancement and generalization of the proposed framework.
{"title":"Spatial-Temporal Dynamic Hypergraph Information Bottleneck for Brain Network Classification.","authors":"Changxu Dong, Dengdi Sun","doi":"10.1142/S0129065724500539","DOIUrl":"10.1142/S0129065724500539","url":null,"abstract":"<p><p>Recently, Graph Neural Networks (GNNs) have gained widespread application in automatic brain network classification tasks, owing to their ability to directly capture crucial information in non-Euclidean structures. However, two primary challenges persist in this domain. First, within the realm of clinical neuro-medicine, signals from cerebral regions are inevitably contaminated with noise stemming from physiological or external factors. The construction of brain networks heavily relies on set thresholds and feature information within brain regions, making it susceptible to the incorporation of such noises into the brain topology. Additionally, the static nature of the artificially constructed brain network's adjacent structure restricts real-time changes in brain topology. Second, mainstream GNN-based approaches tend to focus solely on capturing information interactions of nearest neighbor nodes, overlooking high-order topology features. In response to these challenges, we propose an adaptive unsupervised Spatial-Temporal Dynamic Hypergraph Information Bottleneck (ST-DHIB) framework for dynamically optimizing brain networks. Specifically, adopting an information theory perspective, Graph Information Bottleneck (GIB) is employed for purifying graph structure, and dynamically updating the processed input brain signals. From a graph theory standpoint, we utilize the designed Hypergraph Neural Network (HGNN) and Bi-LSTM to capture higher-order spatial-temporal context associations among brain channels. Comprehensive patient-specific and cross-patient experiments have been conducted on two available datasets. The results demonstrate the advancement and generalization of the proposed framework.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450053"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141629547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The quality of medical images is crucial for accurately diagnosing and treating various diseases. However, current automated methods for assessing image quality are based on neural networks that often focus solely on pixel distortion and overlook the significance of complex structures within the images. This study introduces a novel neural network model designed explicitly for automated image quality assessment that addresses both pixel and semantic distortion. The model introduces an adaptive ranking mechanism enhanced with contrast sensitivity weighting to refine the detection of minor variances between similar images for pixel distortion assessment. More significantly, the model integrates a structure-aware learning module employing graph neural networks. This module is adept at deciphering the intricate relationships between an image's semantic structure and its quality. When evaluated on two ultrasound imaging datasets, the proposed method outperforms existing leading models. Additionally, it integrates seamlessly into clinical workflows, enabling real-time image quality assessment, which is crucial for precise disease diagnosis and treatment.
{"title":"Automated Quality Assessment of Medical Images in Echocardiography Using Neural Networks with Adaptive Ranking and Structure-Aware Learning.","authors":"Gadeng Luosang, Zhihua Wang, Jian Liu, Fanxin Zeng, Zhang Yi, Jianyong Wang","doi":"10.1142/S0129065724500540","DOIUrl":"10.1142/S0129065724500540","url":null,"abstract":"<p><p>The quality of medical images is crucial for accurately diagnosing and treating various diseases. However, current automated methods for assessing image quality are based on neural networks, which often focus solely on pixel distortion and overlook the significance of complex structures within the images. This study introduces a novel neural network model designed explicitly for automated image quality assessment that addresses pixel and semantic distortion. The model introduces an adaptive ranking mechanism enhanced with contrast sensitivity weighting to refine the detection of minor variances in similar images for pixel distortion assessment. More significantly, the model integrates a structure-aware learning module employing graph neural networks. This module is adept at deciphering the intricate relationships between an image's semantic structure and quality. When evaluated on two ultrasound imaging datasets, the proposed method outshines existing leading models in performance. Additionally, it boasts seamless integration into clinical workflows, enabling real-time image quality assessment, crucial for precise disease diagnosis and treatment.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450054"},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141565421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}