
Latest publications: International journal of neural systems

A Novel State Space Model with Dynamic Graphic Neural Network for EEG Event Detection.
Pub Date: 2025-03-01 Epub Date: 2024-12-31 DOI: 10.1142/S012906572550008X
Xinying Li, Shengjie Yan, Yonglin Wu, Chenyun Dai, Yao Guo

Electroencephalography (EEG) is a widely used physiological signal for obtaining information about brain activity, and automatic detection of EEG events holds significant research importance: it saves clinicians' time and improves detection efficiency and accuracy. However, current automatic detection studies face several challenges: large EEG data volumes require substantial time and space for data reading and model training; EEG's long-term dependencies test the temporal feature extraction capabilities of models; and the dynamic changes in brain activity and the non-Euclidean spatial structure between electrodes complicate the acquisition of spatial information. The proposed method uses range-EEG (rEEG) to extract time-frequency features from EEG, reducing data volume and resource consumption. Additionally, the next-generation state-space model Mamba is used as a temporal feature extractor to effectively capture the temporal information in EEG data. To address the limitations of state space models (SSMs) in spatial feature extraction, Mamba is combined with Dynamic Graph Neural Networks, creating an efficient model called DG-Mamba for EEG event detection. Testing on seizure detection and sleep stage classification tasks showed that the proposed method improved training speed tenfold and reduced memory usage to less than one-seventh of that required by the original data while maintaining superior performance. On the TUSZ dataset, DG-Mamba achieved an AUROC of 0.931 for seizure detection, and in the sleep stage classification task the proposed model surpassed all baselines.
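The data-reduction step named in the abstract, range-EEG (rEEG), summarizes each window of the signal by its peak-to-peak amplitude. A minimal sketch of that idea follows; the window length, step size, and NumPy implementation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def range_eeg(signal: np.ndarray, fs: float, win_sec: float = 2.0, step_sec: float = 1.0) -> np.ndarray:
    """Range-EEG-style feature: peak-to-peak amplitude of each sliding window.

    signal: 1D array of EEG samples from a single channel.
    fs: sampling rate in Hz.
    Returns one value per window, drastically reducing data volume.
    """
    win = int(win_sec * fs)
    step = int(step_sec * fs)
    features = []
    for start in range(0, len(signal) - win + 1, step):
        segment = signal[start:start + win]
        features.append(segment.max() - segment.min())  # peak-to-peak range
    return np.asarray(features)

# Example: 60 s of synthetic 256 Hz EEG collapses to ~59 rEEG values.
rng = np.random.default_rng(0)
eeg = rng.normal(scale=20e-6, size=256 * 60)  # amplitudes in volts
print(range_eeg(eeg, fs=256).shape)
```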

{"title":"A Novel State Space Model with Dynamic Graphic Neural Network for EEG Event Detection.","authors":"Xinying Li, Shengjie Yan, Yonglin Wu, Chenyun Dai, Yao Guo","doi":"10.1142/S012906572550008X","DOIUrl":"https://doi.org/10.1142/S012906572550008X","url":null,"abstract":"<p><p>Electroencephalography (EEG) is a widely used physiological signal to obtain information of brain activity, and its automatic detection holds significant research importance, which saves doctors' time, improves detection efficiency and accuracy. However, current automatic detection studies face several challenges: large EEG data volumes require substantial time and space for data reading and model training; EEG's long-term dependencies test the temporal feature extraction capabilities of models; and the dynamic changes in brain activity and the non-Euclidean spatial structure between electrodes complicate the acquisition of spatial information. The proposed method uses range-EEG (rEEG) to extract time-frequency features from EEG to reduce data volume and resource consumption. Additionally, the next-generation state-space model Mamba is utilized as a temporal feature extractor to effectively capture the temporal information in EEG data. To address the limitations of state space models (SSMs) in spatial feature extraction, Mamba is combined with Dynamic Graph Neural Networks, creating an efficient model called DG-Mamba for EEG event detection. Testing on seizure detection and sleep stage classification tasks showed that the proposed method improved training speed by 10 times and reduced memory usage to less than one-seventh of the original data while maintaining superior performance. On the TUSZ dataset, DG-Mamba achieved an AUROC of 0.931 for seizure detection and in the sleep stage classification task, the proposed model surpassed all baselines.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":"35 3","pages":"2550008"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Label Zero-Shot Learning Via Contrastive Label-Based Attention.
Pub Date: 2025-03-01 Epub Date: 2025-01-23 DOI: 10.1142/S0129065725500108
Shixuan Meng, Rongxin Jiang, Xiang Tian, Fan Zhou, Yaowu Chen, Junjie Liu, Chen Shen

Multi-label zero-shot learning (ML-ZSL) strives to recognize all objects in an image, regardless of whether they are present in the training data. Recent methods incorporate an attention mechanism to locate labels in the image and generate class-specific semantic information. However, an attention mechanism built only on visual features treats label embeddings equally in the prediction score, leading to severe semantic ambiguity. This study focuses on efficiently utilizing semantic information in the attention mechanism. We propose a contrastive label-based attention (CLA) method to associate each label with the most relevant image regions. Specifically, our label-based attention, guided by the latent label embedding, captures discriminative image details. To distinguish region-wise correlations, we implement a region-level contrastive loss. In addition, we utilize a global feature alignment module to identify labels with general information. Extensive experiments on two benchmarks, NUS-WIDE and Open Images, demonstrate that our CLA outperforms state-of-the-art methods. Under the ZSL setting in particular, our method improves mean Average Precision (mAP) by 2.0% on NUS-WIDE and 4.0% on Open Images compared with recent methods.
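The core computation described here, attention in which label embeddings rather than purely visual features weight the image regions, can be sketched as follows. The tensor shapes, dot-product scoring, and NumPy form are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def label_based_attention(regions: np.ndarray, label_emb: np.ndarray) -> np.ndarray:
    """Attend over image regions separately for each label.

    regions:   (R, D) region features for one image.
    label_emb: (L, D) label embeddings projected to the same dimension.
    Returns    (L, D): one attended feature per label, later scored against that label.
    """
    scores = label_emb @ regions.T                   # (L, R) label-region affinities
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)          # softmax over regions, per label
    return attn @ regions                            # label-specific pooled features

regions = np.random.randn(49, 256)   # e.g. a 7x7 feature map, flattened
labels = np.random.randn(81, 256)    # e.g. 81 NUS-WIDE label embeddings
print(label_based_attention(regions, labels).shape)  # (81, 256)
```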

{"title":"Multi-Label Zero-Shot Learning Via Contrastive Label-Based Attention.","authors":"Shixuan Meng, Rongxin Jiang, Xiang Tian, Fan Zhou, Yaowu Chen, Junjie Liu, Chen Shen","doi":"10.1142/S0129065725500108","DOIUrl":"10.1142/S0129065725500108","url":null,"abstract":"<p><p>Multi-label zero-shot learning (ML-ZSL) strives to recognize all objects in an image, regardless of whether they are present in the training data. Recent methods incorporate an attention mechanism to locate labels in the image and generate class-specific semantic information. However, the attention mechanism built on visual features treats label embeddings equally in the prediction score, leading to severe semantic ambiguity. This study focuses on efficiently utilizing semantic information in the attention mechanism. We propose a contrastive label-based attention method (CLA) to associate each label with the most relevant image regions. Specifically, our label-based attention, guided by the latent label embedding, captures discriminative image details. To distinguish region-wise correlations, we implement a region-level contrastive loss. In addition, we utilize a global feature alignment module to identify labels with general information. Extensive experiments on two benchmarks, NUS-WIDE and Open Images, demonstrate that our CLA outperforms the state-of-the-art methods. Especially under the ZSL setting, our method achieves 2.0% improvements in mean Average Precision (mAP) for NUS-WIDE and 4.0% for Open Images compared with recent methods.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550010"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Self-Supervised Image Segmentation Using Meta-Learning and Multi-Backbone Feature Fusion.
Pub Date: 2025-02-03 DOI: 10.1142/S0129065725500121
Muhammad Shahroz Ajmal, Guohua Geng, Xiaofeng Wang, Mohsin Ashraf

Few-shot segmentation (FSS) aims to reduce the need for manual annotation, which is both expensive and time-consuming. While FSS enhances model generalization to new concepts with only limited test samples, it still relies on a substantial amount of labeled training data for the base classes. To address these issues, we propose a multi-backbone few-shot segmentation (MBFSS) method. This self-supervised FSS technique utilizes unsupervised saliency for pseudo-labeling, allowing the model to be trained on unlabeled data. In addition, it integrates features from multiple backbones (ResNet, ResNeXt, and PVT v2) to generate a richer feature representation than a single backbone. Through extensive experimentation on PASCAL-5i and COCO-20i, our method achieves 54.3% and 25.1% on one-shot segmentation, exceeding the baseline methods by 13.5% and 4%, respectively. These improvements significantly enhance the model's performance in real-world applications with negligible labeling effort.
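A rough sketch of the multi-backbone fusion idea — extracting feature maps from several encoders and concatenating them into a richer representation — is shown below. The two torchvision backbones, the frozen-feature assumption, and the simple channel concatenation are illustrative stand-ins; the paper combines ResNet, ResNeXt, and PVT v2 with its own fusion design.

```python
import torch
import torchvision.models as tvm

# Two readily available backbones stand in for the paper's ResNet/ResNeXt/PVT v2 trio.
backbones = [
    torch.nn.Sequential(*list(tvm.resnet50(weights=None).children())[:-2]),
    torch.nn.Sequential(*list(tvm.resnext50_32x4d(weights=None).children())[:-2]),
]

def fused_features(image: torch.Tensor) -> torch.Tensor:
    """Concatenate feature maps from several frozen backbones along the channel axis."""
    feats = []
    with torch.no_grad():
        for net in backbones:
            net.eval()
            feats.append(net(image))                 # each (B, C_i, H', W')
    size = feats[0].shape[-2:]                       # resize to a common spatial size
    feats = [torch.nn.functional.interpolate(f, size=size, mode="bilinear",
                                             align_corners=False) for f in feats]
    return torch.cat(feats, dim=1)                   # (B, sum C_i, H', W')

x = torch.randn(1, 3, 224, 224)
print(fused_features(x).shape)                       # torch.Size([1, 4096, 7, 7])
```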

{"title":"Self-Supervised Image Segmentation Using Meta-Learning and Multi-Backbone Feature Fusion.","authors":"Muhammad Shahroz Ajmal, Guohua Geng, Xiaofeng Wang, Mohsin Ashraf","doi":"10.1142/S0129065725500121","DOIUrl":"https://doi.org/10.1142/S0129065725500121","url":null,"abstract":"<p><p>Few-shot segmentation (FSS) aims to reduce the need for manual annotation, which is both expensive and time-consuming. While FSS enhances model generalization to new concepts with only limited test samples, it still relies on a substantial amount of labeled training data for base classes. To address these issues, we propose a multi-backbone few shot segmentation (MBFSS) method. This self-supervised FSS technique utilizes unsupervised saliency for pseudo-labeling, allowing the model to be trained on unlabeled data. In addition, it integrates features from multiple backbones (ResNet, ResNeXt, and PVT v2) to generate a richer feature representation than a single backbone. Through extensive experimentation on PASCAL-5i and COCO-20i, our method achieves 54.3% and 25.1% on one-shot segmentation, exceeding the baseline methods by 13.5% and 4%, respectively. These improvements significantly enhance the model's performance in real-world applications with negligible labeling effort.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550012"},"PeriodicalIF":0.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Editorial - A journal that promotes excellence through uncompromising review process: Reflection of freedom of speech and scientific publication.
Pub Date: 2025-02-03 DOI: 10.1142/S0129065725020010
Zvi Kam, Giovanna Nicora
{"title":"Editorial - A journal that promotes excellence through uncompromising review process: Reflection of freedom of speech and scientific publication.","authors":"Zvi Kam, Giovanna Nicora","doi":"10.1142/S0129065725020010","DOIUrl":"https://doi.org/10.1142/S0129065725020010","url":null,"abstract":"","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2502001"},"PeriodicalIF":0.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Motor Imagery Classification with Residual Graph Convolutional Networks and Multi-Feature Fusion.
Pub Date: 2025-01-01 Epub Date: 2024-11-19 DOI: 10.1142/S0129065724500692
Fangzhou Xu, Weiyou Shi, Chengyan Lv, Yuan Sun, Shuai Guo, Chao Feng, Yang Zhang, Tzyy-Ping Jung, Jiancai Leng

Stroke, an abrupt cerebrovascular ailment resulting in brain tissue damage, has prompted the adoption of motor imagery (MI)-based brain-computer interface (BCI) systems in stroke rehabilitation. However, analyzing electroencephalogram (EEG) signals from stroke patients poses challenges. To address the issues of low accuracy and efficiency in EEG classification, particularly involving MI, the study proposes a residual graph convolutional network (M-ResGCN) framework based on the modified S-transform (MST) and introduces a self-attention mechanism into the residual graph convolutional network (ResGCN). This study uses the MST to extract EEG time-frequency domain features, derives spatial EEG features by calculating the absolute Pearson correlation coefficient (aPcc) between channels, and devises a method to construct the adjacency matrix of the brain network using the aPcc to measure the strength of the connections between channels. Experimental results involving 16 stroke patients and 16 healthy subjects demonstrate significant improvements in classification quality and robustness across tests and subjects. The highest classification accuracy reached 94.91% with a Kappa coefficient of 0.8918. The average accuracy and F1 scores from ten repetitions of 10-fold cross-validation are 94.38% and 94.36%, respectively. By validating the feasibility and applicability of brain networks constructed using the aPcc in EEG signal analysis and feature encoding, it was established that the aPcc effectively reflects overall brain activity. The proposed method presents a novel approach to exploring channel relationships in MI-EEG and improving classification performance. It holds promise for real-time applications in MI-based BCI systems.
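The graph-construction step — using the absolute Pearson correlation coefficient (aPcc) between channels as edge weights of the brain network — can be sketched directly. The optional thresholding of weak edges is an illustrative assumption; the paper's exact adjacency construction may differ.

```python
import numpy as np

def apcc_adjacency(eeg: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Channel-by-channel adjacency matrix from absolute Pearson correlation.

    eeg: (channels, samples) array for one trial.
    Returns a symmetric (channels, channels) matrix with a zero diagonal.
    """
    apcc = np.abs(np.corrcoef(eeg))      # |Pearson r| between every pair of channels
    np.fill_diagonal(apcc, 0.0)          # no self-loops
    apcc[apcc < threshold] = 0.0         # optionally drop weak connections
    return apcc

trial = np.random.randn(32, 1000)        # 32 channels, 1000 samples
A = apcc_adjacency(trial, threshold=0.3)
print(A.shape, A.max())
```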

{"title":"Enhancing Motor Imagery Classification with Residual Graph Convolutional Networks and Multi-Feature Fusion.","authors":"Fangzhou Xu, Weiyou Shi, Chengyan Lv, Yuan Sun, Shuai Guo, Chao Feng, Yang Zhang, Tzyy-Ping Jung, Jiancai Leng","doi":"10.1142/S0129065724500692","DOIUrl":"10.1142/S0129065724500692","url":null,"abstract":"<p><p>Stroke, an abrupt cerebrovascular ailment resulting in brain tissue damage, has prompted the adoption of motor imagery (MI)-based brain-computer interface (BCI) systems in stroke rehabilitation. However, analyzing electroencephalogram (EEG) signals from stroke patients poses challenges. To address the issues of low accuracy and efficiency in EEG classification, particularly involving MI, the study proposes a residual graph convolutional network (M-ResGCN) framework based on the modified <i>S</i>-transform (MST), and introduces the self-attention mechanism into residual graph convolutional network (ResGCN). This study uses MST to extract EEG time-frequency domain features, derives spatial EEG features by calculating the absolute Pearson correlation coefficient (aPcc) between channels, and devises a method to construct the adjacency matrix of the brain network using aPcc to measure the strength of the connection between channels. Experimental results involving 16 stroke patients and 16 healthy subjects demonstrate significant improvements in classification quality and robustness across tests and subjects. The highest classification accuracy reached 94.91% and a Kappa coefficient of 0.8918. The average accuracy and <i>F</i>1 scores from 10 times 10-fold cross-validation are 94.38% and 94.36%, respectively. By validating the feasibility and applicability of brain networks constructed using the aPcc in EEG signal analysis and feature encoding, it was established that the aPcc effectively reflects overall brain activity. The proposed method presents a novel approach to exploring channel relationships in MI-EEG and improving classification performance. It holds promise for real-time applications in MI-based BCI systems.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450069"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatially Selective Retinal Ganglion Cell Activation Using Low Invasive Extraocular Temporal Interference Stimulation.
Pub Date: 2025-01-01 Epub Date: 2024-09-25 DOI: 10.1142/S0129065724500667
Xiaoyu Song, Tianruo Guo, Saidong Ma, Feng Zhou, Jiaxin Tian, Zhengyang Liu, Jiao Liu, Heng Li, Yao Chen, Xinyu Chai, Liming Li

Conventional retinal implants involve complex surgical procedures and require invasive implantation. Temporal Interference Stimulation (TIS) has achieved noninvasive and focused stimulation of deep brain regions by delivering high-frequency currents with small frequency differences on multiple electrodes. In this study, we conducted in silico investigations to evaluate extraocular TIS's potential as a novel visual restoration approach. Different from the previously published retinal TIS model, the new model of extraocular TIS incorporated a biophysically detailed retinal ganglion cell (RGC) population, enabling a more accurate simulation of retinal outputs under electrical stimulation. Using this improved model, we made the following major discoveries: (1) the maximum value of the TIS envelope electric potential ([Formula: see text]) showed a strong correlation with TIS-induced RGC activation; (2) the preferred stimulating/return electrode (SE/RE) locations to achieve focalized TIS were predicted; (3) the performance of extraocular TIS was better than same-frequency sinusoidal stimulation (SSS) in terms of a lower RGC threshold and more focused RGC activation; (4) the optimal stimulation parameters to achieve a lower threshold and focused activation were identified; and (5) the spatial selectivity of TIS could be improved by integrating a current steering strategy and reducing electrode size. This study provides insights into the feasibility and effectiveness of a low-invasive stimulation approach in enhancing vision restoration.
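Temporal interference rests on summing two high-frequency currents whose small frequency difference produces a low-frequency amplitude envelope. The sketch below computes that envelope numerically; the carrier frequencies, amplitudes, and Hilbert-transform extraction are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np
from scipy.signal import hilbert

fs = 100_000                        # sampling rate (Hz)
t = np.arange(0, 0.2, 1 / fs)       # 200 ms
f1, f2 = 2000.0, 2010.0             # two carriers differing by 10 Hz
i1 = np.sin(2 * np.pi * f1 * t)
i2 = np.sin(2 * np.pi * f2 * t)

summed = i1 + i2                    # superposition of the two applied currents
envelope = np.abs(hilbert(summed))  # amplitude envelope, beating at |f2 - f1| = 10 Hz

# Neurons that cannot follow kHz carriers can still be modulated by this slow envelope.
print(f"carrier ~{f1:.0f} Hz, envelope peak {envelope.max():.2f}, "
      f"envelope period ~{1.0 / abs(f2 - f1):.2f} s")
```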

{"title":"Spatially Selective Retinal Ganglion Cell Activation Using Low Invasive Extraocular Temporal Interference Stimulation.","authors":"Xiaoyu Song, Tianruo Guo, Saidong Ma, Feng Zhou, Jiaxin Tian, Zhengyang Liu, Jiao Liu, Heng Li, Yao Chen, Xinyu Chai, Liming Li","doi":"10.1142/S0129065724500667","DOIUrl":"10.1142/S0129065724500667","url":null,"abstract":"<p><p>Conventional retinal implants involve complex surgical procedures and require invasive implantation. Temporal Interference Stimulation (TIS) has achieved noninvasive and focused stimulation of deep brain regions by delivering high-frequency currents with small frequency differences on multiple electrodes. In this study, we conducted <i>in silico</i> investigations to evaluate extraocular TIS's potential as a novel visual restoration approach. Different from the previously published retinal TIS model, the new model of extraocular TIS incorporated a biophysically detailed retinal ganglion cell (RGC) population, enabling a more accurate simulation of retinal outputs under electrical stimulation. Using this improved model, we made the following major discoveries: (1) the maximum value of TIS envelope electric potential ([Formula: see text] showed a strong correlation with TIS-induced RGC activation; (2) the preferred stimulating/return electrode (SE/RE) locations to achieve focalized TIS were predicted; (3) the performance of extraocular TIS was better than same-frequency sinusoidal stimulation (SSS) in terms of lower RGC threshold and more focused RGC activation; (4) the optimal stimulation parameters to achieve lower threshold and focused activation were identified; and (5) spatial selectivity of TIS could be improved by integrating current steering strategy and reducing electrode size. This study provides insights into the feasibility and effectiveness of a low-invasive stimulation approach in enhancing vision restoration.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450066"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Neural Memory State Space Models for Medical Image Segmentation.
Pub Date: 2025-01-01 Epub Date: 2024-09-30 DOI: 10.1142/S0129065724500680
Zhihua Wang, Jingjun Gu, Wang Zhou, Quansong He, Tianli Zhao, Jialong Guo, Li Lu, Tao He, Jiajun Bu

With the rapid advancement of deep learning, computer-aided diagnosis and treatment have become crucial in medicine. UNet is a widely used architecture for medical image segmentation, and various methods for improving UNet have been extensively explored. One popular approach is incorporating transformers, though their quadratic computational complexity poses challenges. Recently, State-Space Models (SSMs), exemplified by Mamba, have gained significant attention as a promising alternative due to their linear computational complexity. Another approach, neural memory Ordinary Differential Equations (nmODEs), exhibits similar principles and achieves good results. In this paper, we explore the respective strengths and weaknesses of nmODEs and SSMs and propose a novel architecture, the nmSSM decoder, which combines the advantages of both approaches. This architecture possesses powerful nonlinear representation capabilities while retaining the ability to preserve input and process global information. We construct nmSSM-UNet using the nmSSM decoder and conduct comprehensive experiments on the PH2, ISIC2018, and BU-COCO datasets to validate its effectiveness in medical image segmentation. The results demonstrate the promising application value of nmSSM-UNet. Additionally, we conducted ablation experiments to verify the effectiveness of our proposed improvements on SSMs and nmODEs.
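The linear computational complexity attributed to SSMs comes from a simple discrete state recurrence, which the minimal sketch below illustrates. The matrices and scan loop are generic textbook SSM pieces, not the nmSSM decoder itself.

```python
import numpy as np

def ssm_scan(u: np.ndarray, A: np.ndarray, B: np.ndarray, C: np.ndarray) -> np.ndarray:
    """Discrete linear state-space model: x_{k+1} = A x_k + B u_k, y_k = C x_k.

    u: (T, d_in) input sequence. Cost grows linearly with sequence length T,
    unlike the quadratic attention matrix of a transformer.
    """
    d_state = A.shape[0]
    x = np.zeros(d_state)
    ys = []
    for u_k in u:
        x = A @ x + B @ u_k
        ys.append(C @ x)
    return np.stack(ys)

T, d_in, d_state, d_out = 1024, 8, 16, 8
A = np.eye(d_state) * 0.9                     # stable state transition
B = np.random.randn(d_state, d_in) * 0.1
C = np.random.randn(d_out, d_state) * 0.1
print(ssm_scan(np.random.randn(T, d_in), A, B, C).shape)   # (1024, 8)
```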

{"title":"Neural Memory State Space Models for Medical Image Segmentation.","authors":"Zhihua Wang, Jingjun Gu, Wang Zhou, Quansong He, Tianli Zhao, Jialong Guo, Li Lu, Tao He, Jiajun Bu","doi":"10.1142/S0129065724500680","DOIUrl":"10.1142/S0129065724500680","url":null,"abstract":"<p><p>With the rapid advancement of deep learning, computer-aided diagnosis and treatment have become crucial in medicine. UNet is a widely used architecture for medical image segmentation, and various methods for improving UNet have been extensively explored. One popular approach is incorporating transformers, though their quadratic computational complexity poses challenges. Recently, State-Space Models (SSMs), exemplified by Mamba, have gained significant attention as a promising alternative due to their linear computational complexity. Another approach, neural memory Ordinary Differential Equations (nmODEs), exhibits similar principles and achieves good results. In this paper, we explore the respective strengths and weaknesses of nmODEs and SSMs and propose a novel architecture, the nmSSM decoder, which combines the advantages of both approaches. This architecture possesses powerful nonlinear representation capabilities while retaining the ability to preserve input and process global information. We construct nmSSM-UNet using the nmSSM decoder and conduct comprehensive experiments on the PH2, ISIC2018, and BU-COCO datasets to validate its effectiveness in medical image segmentation. The results demonstrate the promising application value of nmSSM-UNet. Additionally, we conducted ablation experiments to verify the effectiveness of our proposed improvements on SSMs and nmODEs.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450068"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142335214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decoding Continuous Tracking Eye Movements from Cortical Spiking Activity.
Pub Date: 2025-01-01 Epub Date: 2024-11-15 DOI: 10.1142/S0129065724500709
Kendra K Noneman, J Patrick Mayo

Eye movements are the primary way primates interact with the world. Understanding how the brain controls the eyes is therefore crucial for improving human health and designing visual rehabilitation devices. However, brain activity is challenging to decipher. Here, we leveraged machine learning algorithms to reconstruct tracking eye movements from high-resolution neuronal recordings. We found that continuous eye position could be decoded with high accuracy using spiking data from only a few dozen cortical neurons. We tested eight decoders and found that neural network models yielded the highest decoding accuracy. Simpler models performed well above chance with a substantial reduction in training time. We measured the impact of data quantity (e.g. number of neurons) and data format (e.g. bin width) on training time, inference time, and generalizability. Training models with more input data improved performance, as expected, but the format of the behavioral output was critical for emphasizing or omitting specific oculomotor events. Our results provide the first demonstration, to our knowledge, of continuously decoded eye movements across a large field of view. Our comprehensive investigation of predictive power and computational efficiency for common decoder architectures provides a much-needed foundation for future work on real-time gaze-tracking devices.
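One of the "simpler models" the abstract alludes to can be sketched as a linear (ridge) decoder mapping binned spike counts to horizontal and vertical eye position. The synthetic data and the scikit-learn estimator below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in: 5000 time bins of spike counts from 40 neurons,
# linearly related (plus noise) to 2D eye position (x, y in degrees).
n_bins, n_neurons = 5000, 40
true_w = rng.normal(size=(n_neurons, 2))
spikes = rng.poisson(lam=2.0, size=(n_bins, n_neurons)).astype(float)
eye_pos = spikes @ true_w + rng.normal(scale=1.0, size=(n_bins, 2))

X_train, X_test, y_train, y_test = train_test_split(
    spikes, eye_pos, test_size=0.2, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_train, y_train)        # simple, fast-to-train baseline
print(f"held-out R^2: {decoder.score(X_test, y_test):.3f}")
```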

{"title":"Decoding Continuous Tracking Eye Movements from Cortical Spiking Activity.","authors":"Kendra K Noneman, J Patrick Mayo","doi":"10.1142/S0129065724500709","DOIUrl":"10.1142/S0129065724500709","url":null,"abstract":"<p><p>Eye movements are the primary way primates interact with the world. Understanding how the brain controls the eyes is therefore crucial for improving human health and designing visual rehabilitation devices. However, brain activity is challenging to decipher. Here, we leveraged machine learning algorithms to reconstruct tracking eye movements from high-resolution neuronal recordings. We found that continuous eye position could be decoded with high accuracy using spiking data from only a few dozen cortical neurons. We tested eight decoders and found that neural network models yielded the highest decoding accuracy. Simpler models performed well above chance with a substantial reduction in training time. We measured the impact of data quantity (e.g. number of neurons) and data format (e.g. bin width) on training time, inference time, and generalizability. Training models with more input data improved performance, as expected, but the format of the behavioral output was critical for emphasizing or omitting specific oculomotor events. Our results provide the first demonstration, to our knowledge, of continuously decoded eye movements across a large field of view. Our comprehensive investigation of predictive power and computational efficiency for common decoder architectures provides a much-needed foundation for future work on real-time gaze-tracking devices.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2450070"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142640247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning Recognition of Paroxysmal Kinesigenic Dyskinesia Based on EEG Functional Connectivity.
Pub Date: 2025-01-01 Epub Date: 2024-11-19 DOI: 10.1142/S0129065725500017
Liang Zhao, Renling Zou, Linpeng Jin

Paroxysmal kinesigenic dyskinesia (PKD) is a rare neurological disorder marked by transient involuntary movements triggered by sudden actions. Current diagnostic approaches, including genetic screening, face challenges in identifying secondary cases due to symptom overlap with other disorders. This study introduces a novel PKD recognition method utilizing a resting-state electroencephalogram (EEG) functional connectivity matrix and a deep learning architecture (AT-1CBL). Resting-state EEG data from 44 PKD patients and 44 healthy controls (HCs) were collected using a 128-channel EEG system. Functional connectivity matrices were computed and transformed into graph data to examine brain network property differences between PKD patients and controls through graph theory. Source localization was conducted to explore neural circuit differences in patients. The AT-1CBL model, integrating 1D-CNN and Bi-LSTM with attentional mechanisms, achieved a classification accuracy of 93.77% on phase lag index (PLI) features in the Theta band. Graph theoretic analysis revealed significant phase synchronization impairments in the Theta band of the functional brain network in PKD patients, particularly in the distribution of weak connections compared to HCs. Source localization analyses indicated greater differences in functional connectivity in sensorimotor regions and the frontal-limbic system in PKD patients, suggesting abnormalities in motor integration related to clinical symptoms. This study highlights the potential of deep learning models based on EEG functional connectivity for accurate and cost-effective PKD diagnosis, supporting the development of portable EEG devices for clinical monitoring and diagnosis. However, the limited dataset size may affect generalizability, and further exploration of multimodal data integration and advanced deep learning architectures is necessary to enhance the robustness of PKD diagnostic models.
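The connectivity feature named here, the phase lag index (PLI), has a standard definition: the absolute mean sign of the instantaneous phase difference between two signals. A minimal sketch follows; the Hilbert-based phase extraction and the synthetic Theta-band signals are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x: np.ndarray, y: np.ndarray) -> float:
    """PLI between two equal-length signals: |mean(sign(phase_x - phase_y))|.

    Values near 0 indicate no consistent phase lead/lag; values near 1 indicate
    that one signal consistently leads the other.
    """
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    # sign(sin(.)) handles phase wrapping while preserving the sign of the lag
    return float(np.abs(np.mean(np.sign(np.sin(phase_diff)))))

fs, secs = 250, 10
t = np.arange(0, secs, 1 / fs)
theta = 2 * np.pi * 6 * t                                   # 6 Hz, inside the Theta band
x = np.sin(theta) + 0.5 * np.random.randn(t.size)
y = np.sin(theta - 0.4) + 0.5 * np.random.randn(t.size)     # consistent phase lag
print(f"PLI = {phase_lag_index(x, y):.2f}")
```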

{"title":"Deep Learning Recognition of Paroxysmal Kinesigenic Dyskinesia Based on EEG Functional Connectivity.","authors":"Liang Zhao, Renling Zou, Linpeng Jin","doi":"10.1142/S0129065725500017","DOIUrl":"10.1142/S0129065725500017","url":null,"abstract":"<p><p>Paroxysmal kinesigenic dyskinesia (PKD) is a rare neurological disorder marked by transient involuntary movements triggered by sudden actions. Current diagnostic approaches, including genetic screening, face challenges in identifying secondary cases due to symptom overlap with other disorders. This study introduces a novel PKD recognition method utilizing a resting-state electroencephalogram (EEG) functional connectivity matrix and a deep learning architecture (AT-1CBL). Resting-state EEG data from 44 PKD patients and 44 healthy controls (HCs) were collected using a 128-channel EEG system. Functional connectivity matrices were computed and transformed into graph data to examine brain network property differences between PKD patients and controls through graph theory. Source localization was conducted to explore neural circuit differences in patients. The AT-1CBL model, integrating 1D-CNN and Bi-LSTM with attentional mechanisms, achieved a classification accuracy of 93.77% on phase lag index (PLI) features in the Theta band. Graph theoretic analysis revealed significant phase synchronization impairments in the Theta band of the functional brain network in PKD patients, particularly in the distribution of weak connections compared to HCs. Source localization analyses indicated greater differences in functional connectivity in sensorimotor regions and the frontal-limbic system in PKD patients, suggesting abnormalities in motor integration related to clinical symptoms. This study highlights the potential of deep learning models based on EEG functional connectivity for accurate and cost-effective PKD diagnosis, supporting the development of portable EEG devices for clinical monitoring and diagnosis. However, the limited dataset size may affect generalizability, and further exploration of multimodal data integration and advanced deep learning architectures is necessary to enhance the robustness of PKD diagnostic models.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550001"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142670040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Cloud Detection Network Based on Adaptive Laplacian Coordination Enhanced Cross-Feature U-Net.
Pub Date: 2024-12-13 DOI: 10.1142/S0129065725500054
Kaizheng Wang, Ruohan Zhou, Jian Wang, Ferrante Neri, Yitong Fu, Shunzhen Zhou

Cloud cover experiences rapid fluctuations, significantly impacting the irradiance reaching the ground and causing frequent variations in photovoltaic power output. Accurate detection of thin and fragmented clouds is crucial for reliable photovoltaic power generation forecasting. In this paper, we introduce a novel cloud detection method, termed Adaptive Laplacian Coordination Enhanced Cross-Feature U-Net (ALCU-Net). This method augments the traditional U-Net architecture with three innovative components: an Adaptive Feature Coordination (AFC) module, an Adaptive Laplacian Cross-Feature U-Net with a Multi-Grained Laplacian-Enhanced (MLE) feature module, and a Criss-Cross Feature Fused Detection (CCFE) module. The AFC module enhances spatial coherence and bridges semantic gaps across multi-channel images. The Adaptive Laplacian Cross-Feature U-Net integrates features from adjacent hierarchical levels, using the MLE module to refine cloud characteristics and edge details over time. The CCFE module, embedded in the U-Net decoder, leverages criss-cross features to improve detection accuracy. Experimental evaluations show that ALCU-Net consistently outperforms existing cloud detection methods, demonstrating superior accuracy in identifying both thick and thin clouds and in mapping fragmented cloud patches across various environments, including oceans, polar regions, and complex ocean-land mixtures.
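The "Multi-Grained Laplacian-Enhanced" idea — sharpening a feature map with Laplacian responses computed at several scales — can be sketched with basic image filtering. The Gaussian scales and the additive enhancement are assumptions for illustration, not the ALCU-Net module itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def multi_grained_laplacian_enhance(feature_map: np.ndarray,
                                    sigmas=(1.0, 2.0, 4.0),
                                    weight: float = 0.5) -> np.ndarray:
    """Add Laplacian (edge) responses at several Gaussian scales to a 2D map.

    Coarser sigmas emphasize large cloud structures, finer sigmas the edges of
    thin or fragmented cloud patches.
    """
    base = feature_map.astype(float)
    enhanced = base.copy()
    for sigma in sigmas:
        smoothed = gaussian_filter(base, sigma=sigma)
        enhanced += weight * laplace(smoothed)      # Laplacian-of-Gaussian response
    return enhanced

patch = np.random.rand(128, 128)                    # stand-in for one feature channel
print(multi_grained_laplacian_enhance(patch).shape)
```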

{"title":"A Cloud Detection Network Based on Adaptive Laplacian Coordination Enhanced Cross-Feature U-Net.","authors":"Kaizheng Wang, Ruohan Zhou, Jian Wang, Ferrante Neri, Yitong Fu, Shunzhen Zhou","doi":"10.1142/S0129065725500054","DOIUrl":"https://doi.org/10.1142/S0129065725500054","url":null,"abstract":"<p><p>Cloud cover experiences rapid fluctuations, significantly impacting the irradiance reaching the ground and causing frequent variations in photovoltaic power output. Accurate detection of thin and fragmented clouds is crucial for reliable photovoltaic power generation forecasting. In this paper, we introduce a novel cloud detection method, termed Adaptive Laplacian Coordination Enhanced Cross-Feature U-Net (ALCU-Net). This method augments the traditional U-Net architecture with three innovative components: an Adaptive Feature Coordination (AFC) module, an Adaptive Laplacian Cross-Feature U-Net with a Multi-Grained Laplacian-Enhanced (MLE) feature module, and a Criss-Cross Feature Fused Detection (CCFE) module. The AFC module enhances spatial coherence and bridges semantic gaps across multi-channel images. The Adaptive Laplacian Cross-Feature U-Net integrates features from adjacent hierarchical levels, using the MLE module to refine cloud characteristics and edge details over time. The CCFE module, embedded in the U-Net decoder, leverages criss-cross features to improve detection accuracy. Experimental evaluations show that ALCU-Net consistently outperforms existing cloud detection methods, demonstrating superior accuracy in identifying both thick and thin clouds and in mapping fragmented cloud patches across various environments, including oceans, polar regions, and complex ocean-land mixtures.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550005"},"PeriodicalIF":0.0,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142824883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0