A Multi-Class Intra-Trial Trajectory Analysis Technique to Visualize and Quantify Variability of Mental Imagery EEG Signals.
Pub Date: 2026-02-01. Epub Date: 2025-11-26. DOI: 10.1142/S0129065725500753
Nicolas Ivanov, Madeline Wong, Tom Chau
High inter- and intra-individual variation is a prominent characteristic of electroencephalography (EEG) signals and a significant inhibitor to the practical implementation of brain-computer interfaces (BCIs) outside of research laboratories. However, few methods exist to assess EEG signal variability. Here, a novel multi-class intra-trial trajectory (MITT) analysis for studying EEG variability in mental imagery BCIs is presented. The methods yield insight into four aspects of signal variation: (i) inter-individual, (ii) inter-task, (iii) inter-trial, and (iv) intra-trial. A novel representation of the time evolution of EEG signals was developed: task trials were segmented into short temporal windows and represented in a feature space derived from unsupervised clustering of trial covariance matrices. Using this representation, temporal trajectories through the feature space were constructed. Two metrics were defined to assess user performance based on these trajectories: (1) InterTaskDiff, based on time-varying distances between the mean trajectories of different tasks, and (2) InterTrialVar, which measured the inter-trial variation of the temporal trajectories along the feature dimensions. Analysis of three-class BCI data from 14 adolescents revealed that both metrics correlated significantly with classification results. Further analysis of intra-trial trajectories suggested the existence of characteristic task- and user-specific temporal dynamics. The participant-specific insights provided by MITT analysis could help overcome the EEG-variability challenges impeding practical implementation of BCIs by elucidating avenues to improve user training feedback or the selection of user-optimal classifiers and hyperparameters.
{"title":"A Multi-Class Intra-Trial Trajectory Analysis Technique to Visualize and Quantify Variability of Mental Imagery EEG Signals.","authors":"Nicolas Ivanov, Madeline Wong, Tom Chau","doi":"10.1142/S0129065725500753","DOIUrl":"10.1142/S0129065725500753","url":null,"abstract":"<p><p>High inter- and intra-individual variation is a prominent characteristic of electroencephalography (EEG) signals and a significant inhibitor to the practical implementation of brain-computer interfaces (BCIs) outside of research laboratories. However, a few methods exist to assess EEG signal variability. Here, a novel multi-class intra-trial trajectory (MITT) analysis to study EEG variability for mental imagery BCIs is presented. The methods yield insight into different aspects of signal variation, specifically (i) inter-individual, (ii) inter-task, (iii) inter-trial, and (iv) intra-trial. A novel representation of the time evolution of EEG signals was developed. Task trials were segmented into short temporal windows and represented in a feature space derived from unsupervised clustering of trial covariance matrices. Using this representation, temporal trajectories through the feature space were constructed. Two metrics were defined to assess user performance based on these trajectories: (1) <i>InterTaskDiff</i>, based on time-varying distances between the mean trajectories of different tasks, and (2) <i>InterTrialVar</i>, which measured the inter-trial variation of the temporal trajectories along the feature dimensions. Analysis of three-class BCI data from 14 adolescents revealed both metrics correlated significantly with classification results. Further analysis of intra-trial trajectories suggested the existence of characteristic task- and user-specific temporal dynamics. The participant-specific insights provided by MITT analysis could be used to overcome EEG-variability challenges impeding practical implementation of BCIs by elucidating avenues to improve user training feedback or selection of user-optimal classifiers and hyperparameters.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550075"},"PeriodicalIF":6.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145608099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multivariate Cloud Workload Prediction Method Integrating Convolutional Nonlinear Spiking Neural Model with Bidirectional Long Short-Term Memory.
Pub Date: 2026-02-01. Epub Date: 2025-09-27. DOI: 10.1142/S0129065725500716
Minglong He, Nan Zhou, Hong Peng, Zhicai Liu
Multivariate workload prediction in cloud computing environments is a critical research problem, and effectively capturing inter-variable correlations and temporal patterns in multivariate time series is key to addressing it. To this end, this paper proposes a convolutional model based on a Nonlinear Spiking Neural P System (ConvNSNP), which enhances the ability to process nonlinear data compared to conventional convolutional models. Building upon this, a hybrid forecasting model is developed by integrating ConvNSNP with a Bidirectional Long Short-Term Memory (BiLSTM) network: ConvNSNP is first employed to extract temporal and cross-variable dependencies from the multivariate time series, and BiLSTM then further strengthens long-term temporal modeling. Comprehensive experiments are conducted on three public cloud workload traces from Alibaba and Google, comparing the proposed model with a range of established deep learning approaches, including CNN, RNN, LSTM, TCN, and hybrid models such as LSTNet, CNN-GRU, and CNN-LSTM. The results demonstrate that the proposed model achieves up to a 9.9% improvement in RMSE and an 11.6% improvement in MAE over the most effective baselines. The model also performs favorably in terms of MAPE, further validating its effectiveness in multivariate workload prediction.
{"title":"A Multivariate Cloud Workload Prediction Method Integrating Convolutional Nonlinear Spiking Neural Model with Bidirectional Long Short-Term Memory.","authors":"Minglong He, Nan Zhou, Hong Peng, Zhicai Liu","doi":"10.1142/S0129065725500716","DOIUrl":"10.1142/S0129065725500716","url":null,"abstract":"<p><p>Multivariate workload prediction in cloud computing environments is a critical research problem. Effectively capturing inter-variable correlations and temporal patterns in multivariate time series is key to addressing this challenge. To address this issue, this paper proposes a convolutional model based on a Nonlinear Spiking Neural P System (ConvNSNP), which enhances the ability to process nonlinear data compared to conventional convolutional models. Building upon this, a hybrid forecasting model is developed by integrating ConvNSNP with a Bidirectional Long Short-Term Memory (BiLSTM) network. ConvNSNP is first employed to extract temporal and cross-variable dependencies from the multivariate time series, followed by BiLSTM to further strengthen long-term temporal modeling. Comprehensive experiments are conducted on three public cloud workload traces from Alibaba and Google. The proposed model is compared with a range of established deep learning approaches, including CNN, RNN, LSTM, TCN and hybrid models such as LSTNet, CNN-GRU and CNN-LSTM. Experimental results on three public datasets demonstrate that our proposed model achieves up to 9.9% improvement in RMSE and 11.6% improvement in MAE compared with the most effective baseline methods. The model also achieves favorable performance in terms of MAPE, further validating its effectiveness in multivariate workload prediction.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550071"},"PeriodicalIF":6.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145194256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visually-Inspired Multimodal Iterative Attentional Network for High-Precision EEG-Eye-Movement Emotion Recognition.
Pub Date: 2026-02-01. Epub Date: 2025-10-09. DOI: 10.1142/S0129065725500728
Wei Meng, Fazheng Hou, Kun Chen, Li Ma, Quan Liu
Advancements in artificial intelligence have propelled affective computing toward unprecedented accuracy and real-world impact. By leveraging the unique strengths of brain signals and ocular dynamics, we introduce a novel multimodal framework that synergistically integrates EEG and eye-movement (EM) features to achieve more reliable emotion recognition. First, our EEG Feature Encoder (EFE) uses a convolutional architecture inspired by the human visual cortex's eccentricity-receptive-field mapping, enabling the extraction of highly discriminative neural patterns. Second, our EM Feature Encoder (EMFE) employs a Kolmogorov-Arnold Network (KAN) to overcome the sparse sampling and dimensional mismatch inherent in EM data; through a tailored multilayer design and interpolation alignment, it generates rich, modality-compatible representations. Finally, the core Multimodal Iterative Attentional Feature Fusion (MIAFF) module unites these streams, alternating global and local attention via a Hierarchical Channel Attention Module (HCAM) to iteratively refine and integrate features. Comprehensive evaluations on the SEED (3-class) and SEED-IV (4-class) benchmarks show that our method reaches leading-edge accuracy. However, our experiments are limited by small homogeneous datasets, untested cross-cultural robustness, and potential degradation in noisy or edge-deployment settings. Nevertheless, this work not only underscores the power of biomimetic encoding and iterative attention but also paves the way for next-generation brain-computer interface applications in affective health, adaptive gaming, and beyond.
{"title":"Visually-Inspired Multimodal Iterative Attentional Network for High-Precision EEG-Eye-Movement Emotion Recognition.","authors":"Wei Meng, Fazheng Hou, Kun Chen, Li Ma, Quan Liu","doi":"10.1142/S0129065725500728","DOIUrl":"10.1142/S0129065725500728","url":null,"abstract":"<p><p>Advancements in artificial intelligence have propelled affective computing toward unprecedented accuracy and real-world impact. By leveraging the unique strengths of brain signals and ocular dynamics, we introduce a novel multimodal framework that integrates EEG and eye-movement (EM) features synergistically to achieve more reliable emotion recognition. First, our EEG Feature Encoder (EFE) uses a convolutional architecture inspired by the human visual cortex's eccentricity-receptive-field mapping, enabling the extraction of highly discriminative neural patterns. Second, our EM Feature Encoder (EMFE) employs a Kolmogorov-Arnold Network (KAN) to overcome the sparse sampling and dimensional mismatch inherent in EM data; through a tailored multilayer design and interpolation alignment, it generates rich, modality-compatible representations. Finally, the core Multimodal Iterative Attentional Feature Fusion (MIAFF) module unites these streams: alternating global and local attention via a Hierarchical Channel Attention Module (HCAM) to iteratively refine and integrate features. Comprehensive evaluations on SEED (3-class) and SEED-IV (4-class) benchmarks show that our method reaches leading-edge accuracy. However, our experiments are limited by small homogeneous datasets, untested cross-cultural robustness, and potential degradation in noisy or edge-deployment settings. Nevertheless, this work not only underscores the power of biomimetic encoding and iterative attention but also paves the way for next-generation brain-computer interface applications in affective health, adaptive gaming, and beyond.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550072"},"PeriodicalIF":6.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145254111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Prompt-Guided Generative Language Model for Unifying Visual Neural Decoding Across Multiple Subjects and Tasks.
Pub Date: 2026-02-01. DOI: 10.1142/S0129065725500686
Wei Huang, Hengjiang Li, Fan Qin, Diwei Wu, Kaiwen Cheng, Huafu Chen
Visual neural decoding not only aids in elucidating the neural mechanisms underlying the processing of visual information but also facilitates the advancement of brain-computer interface technologies. However, most current decoding studies focus on developing separate decoding models for individual subjects and specific tasks, an approach that escalates training costs and consumes a substantial amount of computational resources. This paper introduces a Prompt-Guided Generative Visual Language Decoding Model (PG-GVLDM), which uses prompt text that includes information about subjects and tasks to decode both primary categories and detailed textual descriptions from the visual response activities of multiple individuals. In addition to visual response activities, this study also incorporates a multi-head cross-attention module and feeds the model with whole-brain response activities to capture global semantic information in the brain. Experiments on the Natural Scenes Dataset (NSD) demonstrate that PG-GVLDM attains an average category decoding accuracy of 66.6% across four subjects, reflecting strong cross-subject generalization, and achieves text decoding scores of 0.342 (METEOR), 0.450 (Sentence-Transformer), 0.283 (ROUGE-1), and 0.262 (ROUGE-L), establishing state-of-the-art performance in text decoding. Furthermore, incorporating whole-brain response activities significantly enhances decoding performance by enabling the integration of distributed neural signals into coherent global semantic representations, underscoring its methodological importance for unified neural decoding. This research not only represents a breakthrough in visual neural decoding methodologies but also provides theoretical and technical support for the development of generalized brain-computer interfaces.
Closed-Loop Control of Epilepsy Based on Reinforcement Learning.
Pub Date: 2026-02-01. Epub Date: 2025-10-18. DOI: 10.1142/S0129065725500741
Ruimin Dan, Honghui Zhang, Jianchao Bai
This study proposes a novel adaptive DBS control strategy for epilepsy treatment based on deep reinforcement learning. By establishing a random-disturbance model of the cortico-thalamic loop, the neural modulation problem is transformed into a Markov decision process. The Deep Deterministic Policy Gradient (DDPG) algorithm is employed to achieve adaptive dynamic regulation of stimulation parameters, significantly reducing seizure frequency and duration in various epilepsy simulation scenarios. Experimental results demonstrate that the closed-loop control system can further reduce energy loss by [Formula: see text] ([Formula: see text]) compared to a conventional open-loop system, while increasing the proportion of non-epileptic states by [Formula: see text] ([Formula: see text]). Furthermore, we integrate Model-Agnostic Meta-Learning (MAML) with DDPG to develop a collaborative control strategy with transfer learning capabilities. This strategy demonstrates significant advantages across different epilepsy patient scenarios, offering crucial technical support for precise, adaptive epilepsy treatment.
{"title":"Closed-Loop Control of Epilepsy Based on Reinforcement Learning.","authors":"Ruimin Dan, Honghui Zhang, Jianchao Bai","doi":"10.1142/S0129065725500741","DOIUrl":"10.1142/S0129065725500741","url":null,"abstract":"<p><p>This study proposes a novel adaptive DBS control strategy for epilepsy treatment based on deep reinforcement learning. By establishing a random disturbance model of the cortical-thalamus loop, the neural modulation problem is successfully transformed into a Markov decision process. Deep Deterministic Policy Gradient (DDPG) algorithm is employed to achieve adaptive dynamic regulation of stimulation parameters, significantly reducing seizure frequency and duration in various epilepsy simulation scenarios. Experimental results demonstrate that the closed-loop control system can further reduce energy loss by [Formula: see text] ([Formula: see text]) compared to conventional open-loop system, while increase the proportion of non-epileptic states by [Formula: see text] ([Formula: see text]). Furthermore, we innovatively integrate Model-Agnostic Meta-Learning (MAML) with DDPG to develop a collaborative control strategy with transfer learning capabilities. This strategy demonstrates significant advantages across different epilepsy patient scenarios, which offers crucial technical support for the precise and adaptive development of epilepsy treatment.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2550074"},"PeriodicalIF":6.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145310351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolutionary Channel Pruning for Style-Based Generative Adversarial Networks.
Pub Date: 2026-02-01. DOI: 10.1142/S0129065725500704
Yixia Zhang, Ferrante Neri, Xilu Wang, Pengcheng Jiang, Yu Xue
Generative Adversarial Networks (GANs) have demonstrated remarkable success in high-quality image synthesis, with StyleGAN and its successor, StyleGAN2, achieving state-of-the-art performance in terms of realism and control over generated features. However, the large number of parameters and high floating-point operations per second (FLOPs) hinder real-time applications and scalability, posing challenges for deploying these models in resource-constrained environments such as edge devices and mobile platforms. To address this issue, we propose Evolutionary Channel Pruning for StyleGANs (ECP-StyleGANs), a novel algorithm that leverages evolutionary algorithms to compress StyleGAN and StyleGAN2 while maintaining competitive image quality. Our approach encodes pruning configurations as binary masks over the model's convolutional channels and iteratively refines them through selection, crossover, and mutation. By integrating carefully designed fitness functions that balance model complexity and generation quality, ECP-StyleGANs identifies optimally pruned architectures that reduce computational demands without compromising visual fidelity, achieving approximately a 4× reduction in FLOPs and parameters while incurring only a slight increase in FID (Fréchet Inception Distance) relative to the original unpruned model. This study should be interpreted as a preliminary step towards formulating the generative AI pruning problem as a multi-objective optimisation task aimed at enhancing the trade-off between model efficiency and image quality, thereby making large deep models more accessible for real-world deployment on edge devices and in other resource-constrained environments. Source code will be made available.
Data-related Ablation for Reinforcing Deep Learning in Explaining Complex Phenomena.
Pub Date: 2026-01-30. DOI: 10.1142/S0129065726500061
Romeo Lanzino, Luigi Cinque, Gian Luca Foresti, Giuseppe Placidi
Deep Learning (DL) models excel at automatically learning intricate patterns within complex data, but their black-box nature undermines human trust. To address this, current validation strategies typically focus on the model itself, modifying its architecture to assess the role and importance of its components. However, this model-centric view overlooks the critical learning substrate, namely the data, implicitly assuming that it accurately represents the target phenomenon. This implicit trust in the data means that evaluation may fail to detect whether high performance stems from exploiting biases or data quirks rather than from learning relevant patterns. We present a novel data-related ablation as a complement to traditional architectural ablation. Using this framework on Electroencephalography (EEG) signals from Emotion Recognition (ER) and Motor Execution (ME) tasks as a case study, we show that seemingly high-accuracy models often rely heavily on process-irrelevant features, maintaining performance even when key information is eliminated. This shows that a standard, data-independent evaluation can be misleading about whether a model has truly captured the intended process; the proposed approach helps distinguish robust learning from reliance on incidental characteristics. Incorporating data-related ablation is therefore essential for developing reliable and generalizable DL models in fields that rely on data derived from complex and often incompletely understood phenomena.
{"title":"Data-related Ablation for Reinforcing Deep Learning in Explaining Complex Phenomena.","authors":"Romeo Lanzino, Luigi Cinque, Gian Luca Foresti, Giuseppe Placidi","doi":"10.1142/S0129065726500061","DOIUrl":"https://doi.org/10.1142/S0129065726500061","url":null,"abstract":"<p><p>Deep Learning (DL) models excel at automatically learning intricate patterns within complex data, but their black box nature undermines human trust. To address this, current validation strategies typically focus on the model itself, modifying its architecture to assess the role and importance of the components. However, this model-centric view overlooks the critical learning substrate, which is represented by the data, implicitly assuming that it accurately represents the target phenomenon. This implicit trust in data means that evaluation may fail to detect whether high performance stems from exploiting biases or data quirks rather than learning relevant patterns. We present a novel <i>data-related ablation</i> as a complement to the traditional architectural ablation. Using this framework for Electroencephalography (EEG) signals of Emotional Recognition (ER) and Motor Execution (ME) as a case study, we show that seemingly high-accuracy models often rely heavily on process-irrelevant features, maintaining performance even when key information is eliminated. This shows that a standard, data-independent evaluation can be misleading about whether a model truly captured the intended process; the proposed approach helps distinguish robust learning from leaning on incidental characteristics. Therefore, incorporating data-related ablation is essential for developing reliable and generalizable DL models in fields that rely on data derived from complex and often not completely known phenomena.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2650006"},"PeriodicalIF":6.4,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146088546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring Cerebral and Cerebellar Blood Oxygenation-Level Dependent Activations During Visually Cued Alternating Hand and Foot Movements with 3T Multiband fMRI.
Pub Date: 2026-01-28. DOI: 10.1142/S0129065726500152
Jeng-Ren Duann, Yun-Chieh Wang, Siao-Jhen Wu, Chun-Ming Chen
In this study, we aimed to elicit cerebellar activity using a visually cued task involving alternating button presses and foot pedaling at varying speeds. Functional MRI data were acquired using a multiband sequence on a 3T scanner. Thirty-three healthy volunteers participated, and their blood oxygen-level dependent (BOLD) signals were recorded at a spatial resolution of [Formula: see text] [Formula: see text]2.5[Formula: see text]mm³. The fMRI data were analyzed using a general linear model (GLM) to delineate brain regions activated by the button press and foot pedaling conditions, respectively. The BOLD signal changes in each active region of interest (ROI) were then linearly regressed against the mean reaction times (RTs), with age as a covariate, for all participants. All ROIs exhibited a negative relationship with RTs, indicating that higher BOLD activations were associated with faster responses across all conditions. Interestingly, the button press task significantly activated the pyramis (inferior cerebellar vermis), whereas the foot pedaling task activated the superior cerebellar vermis. This finding reflects a functional segmentation along the superior-inferior axis of the cerebellar vermis, corresponding to a foot-hand distribution. Using multiband fMRI, we achieved the spatial resolution necessary to delineate this functional topography within the cerebellum.
Exploring the Effects of Emotional Sensory Stimuli on Modulating Driver Fatigue via EEG-based Spatial-Temporal Dynamic Analysis.
Pub Date: 2026-01-28. DOI: 10.1142/S0129065726500140
Fo Hu, Qinxu Zheng, Junlong Xiong, Hongsheng Chang, Zukang Qiao
Relieving driver fatigue is crucial for ensuring traffic safety, yet existing research has not explored the feasibility and effectiveness of implicit emotion-modulation methods for alleviating it. In this study, the effects of Emotional Sensory (olfactory or olfactory-auditory) Stimuli (ESS) on modulating driver fatigue are explored, and the underlying neural mechanisms are analyzed based on the spatio-temporal dynamic patterns of Electroencephalogram (EEG) signals. First, a real-world driver fatigue modulation experiment based on ESS was designed to record EEG signals. Second, brain activation patterns under various ESS were investigated by analyzing brain functional networks. Furthermore, dynamic changes in fatigue-related features were analyzed to examine the strength and persistence of driver fatigue modulation for each ESS. Finally, a fatigue similarity measure was adopted to quantify the level of fatigue recovery under ESS in a more intuitive manner. The results demonstrate that the mint odor with High-Arousal-Low-Valence (HALV) music stimulus exhibits the best driver fatigue modulation effects and is superior to singular olfactory stimuli. Furthermore, dynamic brain functional connectivity analysis reveals that effective driver fatigue modulation tends to be strongly synchronized in the frontal and parietal lobes. The optimal olfactory-auditory mixed stimulus restores driver fatigue to the level observed 58-60[Formula: see text]min earlier. Our findings shed light on the dynamic characterization of functional connectivity during driver fatigue modulation and demonstrate the potential of ESS as a reliable implicit tool for modulating driver fatigue.
Spiking Neural Membrane Systems with Multiplexed Neurons for Enhanced Parallel Computing.
Pub Date: 2026-01-26. DOI: 10.1142/S0129065726500139
Liping Wang, Xiyu Liu, Yuzhen Zhao
Spiking neural membrane systems (SNP systems) are distributed parallel computing models inspired by neuronal spike mechanisms. Traditional SNP systems execute rules serially within each neuron, limiting their efficiency. This paper introduces MNSNP systems, a novel variant in which neurons can distinguish spike sources and execute multiple rules in parallel within a single time step. MNSNP systems maintain global distributed parallelism while integrating local parallelism, significantly enhancing their information processing capabilities. Computational completeness is demonstrated, proving that MNSNP systems are Turing-universal devices for number generation, number acceptance, and function computation. Compared to existing models, MNSNP systems require fewer neurons (only 60 for universal computation), showcasing their resource efficiency. An application to smoke detection achieves an AUC of 0.9840, demonstrating practical utility. This work advances SNP systems by introducing multiplexing, paving the way for applications in robotics, feature recognition, and real-time processing.
{"title":"Spiking Neural Membrane Systems with Multiplexed Neurons for Enhanced Parallel Computing.","authors":"Liping Wang, Xiyu Liu, Yuzhen Zhao","doi":"10.1142/S0129065726500139","DOIUrl":"https://doi.org/10.1142/S0129065726500139","url":null,"abstract":"<p><p>Spiking neural membrane systems (SNP systems) are distributed parallel computing models inspired by neuronal spike mechanisms. Traditional SNP systems execute rules serially within each neuron, limiting their efficiency. This paper introduces MNSNP systems, a novel variant where neurons can distinguish spike sources and execute multiple rules in parallel at one time step. MNSNP systems maintain global distributed parallelism while integrating local parallelism, significantly enhancing information processing capabilities. Computational completeness is demonstrated, proving MNSNP systems as Turing universal devices for number generation, acceptance, and function computation. Compared to existing models, MNSNP systems require fewer neurons (only 60 for universal computation), showcasing resource efficiency. An application in smoke detection achieves an AUC value of 0.9840, demonstrating practical utility. This work advances SNP systems by introducing multiplexing, paving the way for applications in robotics, feature recognition, and real-time processing.</p>","PeriodicalId":94052,"journal":{"name":"International journal of neural systems","volume":" ","pages":"2650013"},"PeriodicalIF":6.4,"publicationDate":"2026-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146069343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}