Pub Date: 2025-06-18 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1561401
Sreejith Chandrasekharan, Jisu Elsa Jacob
Electroencephalogram (EEG) signal analysis is important for the diagnosis of various neurological conditions. Traditional deep neural networks, such as convolutional networks, sequence-to-sequence networks, and hybrids of these architectures, have proven effective for a wide range of neurological disease classification tasks. However, they are limited by the requirement for large datasets, extensive training, and hyperparameter tuning, all of which demand expert-level machine learning knowledge. This survey paper explores the ability of Large Language Models (LLMs) to transform existing systems for EEG-based disease diagnostics. LLMs carry vast background knowledge in neuroscience, disease diagnostics, and EEG signal processing techniques. Thus, these models can achieve expert-level performance with minimal training data, nominal fine-tuning, and lower computational overhead, shortening the time to find effective diagnostic solutions. Further, in comparison with traditional methods, the capability of LLMs to generate intermediate results and meaningful reasoning makes them more reliable and transparent. This paper delves into several use cases of LLMs in EEG signal analysis and attempts to provide a comprehensive understanding of techniques in the domain that can be applied to different disease diagnostics. The study also highlights challenges in the deployment of LLMs, ethical considerations, and bottlenecks in optimizing models that call for specialized methods such as Low-Rank Adaptation. In general, this survey aims to stimulate research in EEG-based disease diagnostics by effectively using LLMs and associated techniques in machine learning pipelines.
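The Low-Rank Adaptation mentioned above can be illustrated with a minimal sketch: instead of updating a full weight matrix W during fine-tuning, LoRA freezes W and trains only two small factors B and A whose product forms a low-rank update. All matrices and values below are invented for illustration; this is not the survey's code.

```python
# Minimal LoRA forward-pass sketch using plain Python lists as matrices.
# The frozen weight W is never modified; only A (r x d) and B (d x r) train.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_forward(W, A, B, x, alpha=1.0, r=1):
    """y = W x + (alpha / r) * B (A x)."""
    base = matmul(W, x)
    delta = matmul(B, matmul(A, x))
    scale = alpha / r
    return [[bi + scale * di for bi, di in zip(rb, rd)]
            for rb, rd in zip(base, delta)]

# Toy example: dimension d = 2, rank r = 1, x as a column vector.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (identity here)
A = [[1.0, 1.0]]               # 1 x 2 down-projection
B = [[0.5], [0.5]]             # 2 x 1 up-projection
x = [[2.0], [3.0]]
y = lora_forward(W, A, B, x)   # W x = [2, 3]; B(A x) = [2.5, 2.5]
```

The appeal for EEG diagnostics is that only the tiny A and B factors are trained per task, which is what makes fine-tuning a large model tractable on small clinical datasets.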
Title: Bridging neuroscience and AI: a survey on large language models for neurological signal interpretation | Frontiers in Neuroinformatics 19:1561401 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12213581/pdf/
Pub Date: 2025-06-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1568116
Shailesh Appukuttan, Aude-Marie Grapperon, Mounir Mohamed El Mendili, Hugo Dary, Maxime Guye, Annie Verschueren, Jean-Philippe Ranjeva, Shahram Attarian, Wafaa Zaaraoui, Matthieu Gilson
Advancements in machine learning hold great promise for the analysis of multimodal neuroimaging data. They can help identify biomarkers and improve diagnosis for various neurological disorders. However, applying such techniques to rare and heterogeneous diseases remains challenging because only small cohorts are available for data acquisition. Efforts are therefore commonly directed toward improving classification models to optimize outcomes given the limited data. In this study, we systematically evaluated the impact of various machine learning pipeline configurations, including scaling methods, feature selection, dimensionality reduction, and hyperparameter optimization. The efficacy of these pipeline components was evaluated through classification performance on multimodal MRI data from a cohort of 16 ALS patients and 14 healthy controls. Our findings reveal that, while certain pipeline components, such as subject-wise feature normalization, help improve classification outcomes, the overall influence of pipeline refinements on performance is modest. Feature selection and dimensionality reduction steps were found to have limited utility, and the choice of hyperparameter optimization strategy produced only marginal gains. Our results suggest that, for small-cohort studies, the emphasis should shift from extensive tuning of these pipelines to addressing data-related limitations, such as progressively expanding cohort size, integrating additional modalities, and maximizing the information extracted from existing datasets. This study provides a methodological framework to guide future research and emphasizes the need for dataset enrichment to improve clinical utility.
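The subject-wise feature normalization singled out above can be sketched as a per-subject z-score: each subject's feature vector is standardized independently, so between-subject offsets (scanner drift, anatomy) do not dominate the classifier. The function and data below are an illustrative sketch, not the authors' pipeline code.

```python
import math

def subject_wise_normalize(features_by_subject):
    """Z-score each subject's feature vector independently.

    features_by_subject maps a subject ID to a list of feature values;
    the returned dict has the same keys with standardized values.
    """
    normalized = {}
    for subject, feats in features_by_subject.items():
        mean = sum(feats) / len(feats)
        var = sum((f - mean) ** 2 for f in feats) / len(feats)
        std = math.sqrt(var) or 1.0   # guard against zero variance
        normalized[subject] = [(f - mean) / std for f in feats]
    return normalized

# Two hypothetical subjects on very different raw scales:
data = {"sub-01": [2.0, 4.0, 6.0], "sub-02": [100.0, 110.0, 120.0]}
norm = subject_wise_normalize(data)
```

After normalization both subjects occupy the same standardized scale, which is the property that helps a classifier trained on a 30-subject cohort.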
Title: Evaluating machine learning pipelines for multimodal neuroimaging in small cohorts: an ALS case study | Frontiers in Neuroinformatics 19:1568116 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12202540/pdf/
Pub Date: 2025-06-04 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1544143
Charl Linssen, Pooja N Babu, Jochen M Eppler, Luca Koll, Bernhard Rumpe, Abigail Morrison
With increasing model complexity, models are typically re-used and evolved rather than built from scratch. A growing challenge is ensuring that these models work seamlessly across various simulation backends and hardware platforms. This underscores the need for models to be easily findable, accessible, interoperable, and reusable, in keeping with the FAIR principles. NESTML addresses these requirements by providing a domain-specific language for describing neuron and synapse models that covers a wide range of neuroscientific use cases. The language is supported by a code generation toolchain that automatically generates low-level simulation code for a given target platform (for example, C++ code targeting NEST Simulator). Code generation allows an accessible, easy-to-use language syntax to be combined with good runtime simulation performance and scalability. By pairing an intuitive, highly generic language with the generation of efficient, optimized simulation code for large-scale simulations, NESTML opens up neuronal network model development and simulation as a research tool to a much wider community. While originally developed in the context of NEST Simulator, NESTML has been extended to target other simulation platforms, such as the SpiNNaker neuromorphic hardware platform. The processing toolchain is written in Python and is lightweight and easily customizable, making it easy to add support for new simulation platforms.
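The code-generation principle can be illustrated with a toy template step: a declarative model description is rendered into target-language source. This is emphatically not NESTML's actual toolchain (which builds a full AST and per-backend code generators); the template, function names, and model below are invented to show the idea only.

```python
from string import Template

# Toy DSL-to-C++ generation sketch in the spirit of a code-generation
# toolchain: a declarative model dict is rendered into C++ source.
CPP_TEMPLATE = Template("""\
// auto-generated neuron model: $name
double update_$name(double V, double I, double dt) {
    double dV = $dynamics;
    return V + dt * dV;   // forward-Euler update of the membrane potential
}
""")

def generate_cpp(model):
    """Render a C++ update function from a dict-based model description."""
    return CPP_TEMPLATE.substitute(name=model["name"],
                                   dynamics=model["dynamics"])

# Hypothetical leaky-integrator model: dV/dt = (-V + I) / tau, tau = 10 ms.
model = {"name": "iaf_toy", "dynamics": "(-V + I) / 10.0"}
code = generate_cpp(model)
```

The payoff of this separation is the one the abstract describes: the same declarative model can be re-rendered for a different backend by swapping the template, without touching the model description.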
Title: NESTML: a generic modeling language and code generation tool for the simulation of spiking neural networks with advanced plasticity rules | Frontiers in Neuroinformatics 19:1544143 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12174165/pdf/
Pub Date: 2025-05-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1513374
Sayan Ghosh, Dipayan Biswas, N R Rohan, Sujith Vijayan, V Srinivasa Chakravarthy
This study presents a general trainable network of Hopf oscillators to model high-dimensional electroencephalogram (EEG) signals across different sleep stages. The proposed architecture consists of two main components: a layer of interconnected oscillators and a complex-valued feed-forward network designed with and without a hidden layer. Incorporating a hidden layer in the feed-forward network leads to lower reconstruction errors than the simpler version without it. Our model reconstructs EEG signals across all five sleep stages and predicts the subsequent 5 s of EEG activity. The predicted data closely aligns with the empirical EEG regarding mean absolute error, power spectral similarity, and complexity measures. We propose three models, each representing a stage of increasing complexity from initial training to architectures with and without hidden layers. In these models, the oscillators initially lack spatial localization. However, we introduce spatial constraints in the final two models by superimposing spherical shells and rectangular geometries onto the oscillator network. Overall, the proposed model represents a step toward constructing a large-scale, biologically inspired model of brain dynamics.
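A single Hopf oscillator of the kind used as this network's building block follows the normal form dz/dt = (mu + i*omega)z - |z|^2 z, which settles onto a limit cycle of radius sqrt(mu). The sketch below (forward-Euler integration, illustrative parameters) shows one uncoupled unit; the paper's model couples many such oscillators and feeds them through a complex-valued network.

```python
import cmath
import math

def simulate_hopf(mu=1.0, omega=2 * math.pi, z0=0.1 + 0.0j,
                  dt=1e-4, steps=100_000):
    """Euler-integrate the normal-form Hopf oscillator
    dz/dt = (mu + i*omega) * z - |z|^2 * z.
    For mu > 0 the amplitude |z| converges to sqrt(mu)."""
    z = z0
    for _ in range(steps):
        dz = (mu + 1j * omega) * z - (abs(z) ** 2) * z
        z += dt * dz
    return z

z_final = simulate_hopf()
amplitude = abs(z_final)   # converges to sqrt(mu) = 1.0 for mu = 1
```

The limit-cycle behavior is what makes Hopf units natural generators for oscillatory EEG rhythms: each unit sustains a stable oscillation at its own frequency omega without external drive.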
Title: Modeling of whole brain sleep electroencephalogram using deep oscillatory neural network | Frontiers in Neuroinformatics 19:1513374 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12116487/pdf/
Pub Date: 2025-05-06 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1515873
Domenic Bersch, Martina G Vilas, Sari Saba-Sadiya, Timothy Schaumlöffel, Kshitij Dwivedi, Christina Sartzetaki, Radoslaw M Cichy, Gemma Roig
In cognitive neuroscience, the integration of deep neural networks (DNNs) with traditional neuroscientific analyses has significantly advanced our understanding of both biological neural processes and the functioning of DNNs. However, challenges remain in effectively comparing the representational spaces of artificial models and brain data, particularly due to the growing variety of models and the specific demands of neuroimaging research. To address these challenges, we present Net2Brain, a Python-based toolbox that provides an end-to-end pipeline for incorporating DNNs into neuroscience research, encompassing dataset download, a large selection of models, feature extraction, evaluation, and visualization. Net2Brain provides functionalities in four key areas. First, it offers access to over 600 DNNs trained on diverse tasks across multiple modalities, including vision, language, audio, and multimodal data, organized through a carefully structured taxonomy. Second, it provides a streamlined API for downloading and handling popular neuroscience datasets, such as the NSD and THINGS datasets, allowing researchers to easily access corresponding brain data. Third, Net2Brain facilitates a wide range of analysis options, including feature extraction, representational similarity analysis (RSA), and linear encoding, while also supporting advanced techniques like variance partitioning and searchlight analysis. Finally, the toolbox integrates seamlessly with other established open-source libraries, enhancing interoperability and promoting collaborative research. By simplifying model selection, data processing, and evaluation, Net2Brain empowers researchers to conduct more robust, flexible, and reproducible investigations of the relationships between artificial and biological neural representations.
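Representational similarity analysis, one of the methods named above, can be sketched without any framework: build a representational dissimilarity matrix (RDM) for each system by computing 1 minus the correlation between response patterns for every stimulus pair, then correlate the two RDMs' upper triangles. This is an illustrative pure-Python sketch, not Net2Brain's implementation (which also offers Spearman correlation and permutation statistics).

```python
def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def rdm(patterns):
    """RDM: 1 - Pearson correlation between every pair of stimulus patterns."""
    n = len(patterns)
    return [[1.0 - pearson(patterns[i], patterns[j]) for j in range(n)]
            for i in range(n)]

def rsa_score(patterns_a, patterns_b):
    """Correlate the upper triangles of the two systems' RDMs."""
    ra, rb = rdm(patterns_a), rdm(patterns_b)
    n = len(ra)
    tri_a = [ra[i][j] for i in range(n) for j in range(i + 1, n)]
    tri_b = [rb[i][j] for i in range(n) for j in range(i + 1, n)]
    return pearson(tri_a, tri_b)

# Toy data: 3 stimuli x 3 features; the "brain" is a scaled copy of the
# model, so their representational geometries agree perfectly.
model = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3], [0.0, 1.0, 0.8]]
brain = [[v * 2.0 for v in pat] for pat in model]
score = rsa_score(model, brain)
```

Because RDMs abstract away the raw feature spaces, this comparison works even when the model has thousands of units and the brain data has a few hundred voxels, which is exactly why RSA is the workhorse of model-brain comparison.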
Title: Net2Brain: a toolbox to compare artificial vision models with human brain responses | Frontiers in Neuroinformatics 19:1515873 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12089098/pdf/
Pub Date: 2025-05-02 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1557177
Muhammad Liaquat Raza, Syed Tawassul Hassan, Subia Jamil, Noorulain Hyder, Kinza Batool, Sajidah Walji, Muhammad Khizar Abbas
Introduction: Alzheimer's disease is a progressive neurodegenerative disorder that poses challenges for early diagnosis and treatment. Recent advancements in deep learning algorithms applied to multimodal brain imaging offer promising solutions for improving diagnostic accuracy and predicting disease progression.
Method: This narrative review synthesizes current literature on deep learning applications in Alzheimer's disease diagnosis using multimodal neuroimaging. The review process involved a comprehensive search of relevant databases (PubMed, Embase, Google Scholar and ClinicalTrials.gov), selection of pertinent studies, and critical analysis of findings. We employed a best-evidence approach, prioritizing high-quality studies and identifying consistent patterns across the literature.
Results: Deep learning architectures, including convolutional neural networks, recurrent neural networks, and transformer-based models, have shown remarkable potential in analyzing multimodal neuroimaging data. These models can effectively process structural and functional imaging modalities, extracting relevant features and patterns associated with Alzheimer's pathology. Integration of multiple imaging modalities has demonstrated improved diagnostic accuracy compared to single-modality approaches. Deep learning models have also shown promise in predictive modeling, identifying potential biomarkers and forecasting disease progression.
Discussion: While deep learning approaches show great potential, several challenges remain. Data heterogeneity, small sample sizes, and limited generalizability across diverse populations are significant hurdles. The clinical translation of these models requires careful consideration of interpretability, transparency, and ethical implications. The future of AI in neurodiagnostics for Alzheimer's disease looks promising, with potential applications in personalized treatment strategies.
Title: Advancements in deep learning for early diagnosis of Alzheimer's disease using multimodal neuroimaging: challenges and future directions | Frontiers in Neuroinformatics 19:1557177 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12081360/pdf/
Pub Date: 2025-04-17 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1550432
Leondry Mayeta-Revilla, Eduardo P Cavieres, Matías Salinas, Diego Mellado, Sebastian Ponce, Francisco Torres Moyano, Steren Chabert, Marvin Querales, Julio Sotelo, Rodrigo Salas
Introduction: Brain tumors are a leading cause of mortality worldwide, with early and accurate diagnosis being essential for effective treatment. Although Deep Learning (DL) models offer strong performance in tumor detection and segmentation using MRI, their black-box nature hinders clinical adoption due to a lack of interpretability.
Methods: We present a hybrid AI framework that integrates a 3D U-Net Convolutional Neural Network for MRI-based tumor segmentation with radiomic feature extraction. Dimensionality reduction is performed using machine learning, and an Adaptive Neuro-Fuzzy Inference System (ANFIS) is employed to produce interpretable decision rules. Each experiment is constrained to a small set of high-impact radiomic features to enhance clarity and reduce complexity.
Results: The framework was validated on the BraTS2020 dataset, achieving an average DICE Score of 82.94% for tumor core segmentation and 76.06% for edema segmentation. Classification tasks yielded accuracies of 95.43% for binary (healthy vs. tumor) and 92.14% for multi-class (healthy vs. tumor core vs. edema) problems. A concise set of 18 fuzzy rules was generated to provide clinically interpretable outputs.
Discussion: Our approach balances high diagnostic accuracy with enhanced interpretability, addressing a critical barrier to applying DL models in clinical settings. Integrating ANFIS and radiomics supports transparent decision-making, facilitating greater trust and applicability in real-world medical diagnostic assistance.
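The DICE score reported in the Results is the standard overlap metric for segmentation: twice the intersection of the predicted and ground-truth masks, divided by the sum of their sizes. A minimal sketch over sets of voxel indices (illustrative, not the authors' evaluation code):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient: 2*|A & B| / (|A| + |B|).

    Ranges from 0.0 (no overlap) to 1.0 (identical masks).
    """
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0   # convention: two empty masks match perfectly
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Toy 2D voxel masks for a predicted and a ground-truth tumor region.
predicted = {(0, 0), (0, 1), (1, 0), (1, 1)}
ground_truth = {(0, 1), (1, 0), (1, 1), (2, 1)}
score = dice_score(predicted, ground_truth)   # 2*3 / (4+4) = 0.75
```

A Dice of 82.94%, as reported for the tumor core, therefore means the predicted and expert masks share roughly 83% of their combined volume, weighted toward the overlap.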
Title: Radiomics-driven neuro-fuzzy framework for rule generation to enhance explainability in MRI-based brain tumor segmentation | Frontiers in Neuroinformatics 19:1550432 | PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12043696/pdf/
Pub Date: 2025-04-11 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1563799
Guoqiang Zhao, Ao Cheng, Jiahao Shi, Peiyao Shi, Jun Guo, Chunying Yin, Hafsh Khan, Jiachi Chen, Pengcheng Wang, Jiao Chen, Ruobing Zhang
Introduction: Autism spectrum disorder (ASD) encompasses a diverse range of neurodevelopmental disorders with complex etiologies, including genetic, environmental, and neuroanatomical factors. While the exact mechanisms underlying ASD remain unclear, structural abnormalities in the brain offer valuable insights into its pathophysiology. The corpus callosum, the largest white matter tract in the brain, plays a crucial role in interhemispheric communication, and its structural abnormalities may contribute to ASD-related phenotypes.
Methods: To investigate the ultrastructural alterations in the corpus callosum associated with ASD, we utilized serial scanning electron microscopy (sSEM) in mice. A dataset of the entire sagittal sections of the corpus callosum from wild-type and Shank3B mutant mice was acquired at 4 nm resolution, enabling precise comparisons of myelinated axon properties. Leveraging a fine-tuned EM-SAM model for automated segmentation, we quantitatively analyzed key metrics, including G-ratio, myelin thickness, and axonal density.
Results: In the corpus callosum of the Shank3B autism mouse model, we observed a significant increase in myelinated axon density, accompanied by thinner myelin sheaths compared with wild-type mice. Additionally, we identified abnormalities in the diameter distribution of myelinated axons and deviations in the G-ratio. Notably, these ultrastructural alterations were widespread across the corpus callosum, suggesting a global disruption of myelinated axon integrity.
Discussion: This study provides novel insights into the microstructural abnormalities of the corpus callosum in an ASD mouse model, supporting the hypothesis that myelination deficits contribute to ASD-related communication impairments between brain hemispheres. However, given the structural focus of this study, further research integrating functional assessments is necessary to establish a direct link between these morphological changes and ASD-related neural dysfunction.
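The G-ratio quantified in the Methods above is conventionally defined as the inner (axon) diameter divided by the outer (fiber) diameter, where the outer diameter adds the myelin sheath on both sides; thinner sheaths therefore push the ratio upward, consistent with the reported deviations. A minimal sketch under that standard definition (function name and example values are illustrative; the paper derives these measurements from EM-SAM segmentations):

```python
import numpy as np

def g_ratio(axon_diameter, myelin_thickness):
    """G-ratio: inner axon diameter over outer fiber diameter.
    Outer fiber diameter = axon diameter + myelin sheath on both sides."""
    axon_diameter = np.asarray(axon_diameter, dtype=float)
    fiber_diameter = axon_diameter + 2.0 * np.asarray(myelin_thickness, dtype=float)
    return axon_diameter / fiber_diameter

# Illustrative: a 0.6 µm axon with a 0.1 µm myelin sheath
print(round(float(g_ratio(0.6, 0.1)), 3))  # 0.6 / 0.8 = 0.75

# Thinner myelin on the same axon raises the ratio
print(round(float(g_ratio(0.6, 0.05)), 3))  # 0.6 / 0.7 ≈ 0.857
```

Healthy central-nervous-system axons typically sit in the 0.6-0.8 range, which is why a systematic upward shift is taken as evidence of hypomyelination.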
{"title":"Large-scale EM data reveals myelinated axonal changes and altered connectivity in the corpus callosum of an autism mouse model.","authors":"Guoqiang Zhao, Ao Cheng, Jiahao Shi, Peiyao Shi, Jun Guo, Chunying Yin, Hafsh Khan, Jiachi Chen, Pengcheng Wang, Jiao Chen, Ruobing Zhang","doi":"10.3389/fninf.2025.1563799","DOIUrl":"https://doi.org/10.3389/fninf.2025.1563799","url":null,"abstract":"<p><strong>Introduction: </strong>Autism spectrum disorder (ASD) encompasses a diverse range of neurodevelopmental disorders with complex etiologies, including genetic, environmental, and neuroanatomical factors. While the exact mechanisms underlying ASD remain unclear, structural abnormalities in the brain offer valuable insights into its pathophysiology. The corpus callosum, the largest white matter tract in the brain, plays a crucial role in interhemispheric communication, and its structural abnormalities may contribute to ASD-related phenotypes.</p><p><strong>Methods: </strong>To investigate the ultrastructural alterations in the corpus callosum associated with ASD, we utilized serial scanning electron microscopy (sSEM) in mice. A dataset of the entire sagittal sections of the corpus callosum from wild-type and Shank3B mutant mice was acquired at 4 nm resolution, enabling precise comparisons of myelinated axon properties. Leveraging a fine-tuned EM-SAM model for automated segmentation, we quantitatively analyzed key metrics, including G-ratio, myelin thickness, and axonal density.</p><p><strong>Results: </strong>In the corpus callosum of the Shank3B autism mouse model, we observed a significant increase in myelinated axon density, accompanied by thinner myelin sheaths compared with wild-type mice. Additionally, we identified abnormalities in the diameter distribution of myelinated axons and deviations in the G-ratio. 
Notably, these ultrastructural alterations were widespread across the corpus callosum, suggesting a global disruption of myelinated axon integrity.</p><p><strong>Discussion: </strong>This study provides novel insights into the microstructural abnormalities of the corpus callosum in an ASD mouse model, supporting the hypothesis that myelination deficits contribute to ASD-related communication impairments between brain hemispheres. However, given the structural focus of this study, further research integrating functional assessments is necessary to establish a direct link between these morphological changes and ASD-related neural dysfunction.</p>","PeriodicalId":12462,"journal":{"name":"Frontiers in Neuroinformatics","volume":"19 ","pages":"1563799"},"PeriodicalIF":2.5,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12021825/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143961848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-09eCollection Date: 2025-01-01DOI: 10.3389/fninf.2025.1559335
Zhibin Jiang, Keli Hu, Jia Qu, Zekang Bian, Donghua Yu, Jie Zhou
Introduction: Motor imagery electroencephalographic (MI-EEG) signal recognition is used in various brain-computer interface (BCI) systems. In most existing BCI systems, this identification relies on classification algorithms. However, generally, a large amount of subject-specific labeled training data is required to reliably calibrate the classification algorithm for each new subject. To address this challenge, an effective strategy is to integrate transfer learning into the construction of intelligent models, allowing knowledge to be transferred from the source domain to enhance the performance of models trained in the target domain. Although transfer learning has been implemented in EEG signal recognition, many existing methods are designed specifically for certain intelligent models, limiting their application and generalization.
Methods: To broaden application and generalization, an extended-LSR-based inductive transfer learning method is proposed to facilitate transfer learning across various classical intelligent models, including neural networks, Takagi-Sugeno-Kang (TSK) fuzzy systems, and kernel methods.
Results and discussion: The proposed method not only promotes the transfer of valuable knowledge from the source domain to improve learning performance in the target domain when target domain training data are insufficient but also enhances application and generalization by incorporating multiple classic base models. The experimental results demonstrate the effectiveness of the proposed method in MI-EEG signal recognition.
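The paper's exact extended-LSR formulation is not reproduced in this abstract. A common way to realize inductive transfer for least squares regression, and a plausible reading of the idea described above, is biased regularization: a penalty pulls the target-domain weights toward a source-domain solution, so scarce target data is supplemented by source knowledge. The sketch below assumes that form; all names, data, and hyperparameters are illustrative:

```python
import numpy as np

def lsr_transfer_fit(X, y, w_src, lam=1.0, beta=1.0):
    """Biased-regularization least squares regression:
        min_w ||Xw - y||^2 + lam*||w||^2 + beta*||w - w_src||^2
    Closed form: w = (X^T X + (lam+beta) I)^{-1} (X^T y + beta*w_src)."""
    d = X.shape[1]
    A = X.T @ X + (lam + beta) * np.eye(d)
    b = X.T @ y + beta * w_src
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])

# Source domain: plenty of data, so the source model is accurate
X_src = rng.normal(size=(200, 2))
w_src = np.linalg.lstsq(X_src, X_src @ w_true, rcond=None)[0]

# Target domain: only 5 noisy labeled samples
X_tgt = rng.normal(size=(5, 2))
y_tgt = X_tgt @ w_true + 0.1 * rng.normal(size=5)

w = lsr_transfer_fit(X_tgt, y_tgt, w_src, lam=0.1, beta=10.0)
print(np.round(w, 2))  # pulled toward the accurate source solution, close to w_true
```

With a large `beta`, the estimate stays near the source solution even when target data alone would overfit; shrinking `beta` recovers an ordinary ridge fit on the target data, which matches the calibration-data trade-off the abstract describes.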
{"title":"Recognition of MI-EEG signals using extended-LSR-based inductive transfer learning.","authors":"Zhibin Jiang, Keli Hu, Jia Qu, Zekang Bian, Donghua Yu, Jie Zhou","doi":"10.3389/fninf.2025.1559335","DOIUrl":"https://doi.org/10.3389/fninf.2025.1559335","url":null,"abstract":"<p><strong>Introduction: </strong>Motor imagery electroencephalographic (MI-EEG) signal recognition is used in various brain-computer interface (BCI) systems. In most existing BCI systems, this identification relies on classification algorithms. However, generally, a large amount of subject-specific labeled training data is required to reliably calibrate the classification algorithm for each new subject. To address this challenge, an effective strategy is to integrate transfer learning into the construction of intelligent models, allowing knowledge to be transferred from the source domain to enhance the performance of models trained in the target domain. Although transfer learning has been implemented in EEG signal recognition, many existing methods are designed specifically for certain intelligent models, limiting their application and generalization.</p><p><strong>Methods: </strong>To broaden application and generalization, an extended-LSR-based inductive transfer learning method is proposed to facilitate transfer learning across various classical intelligent models, including neural networks, Takagi-Sugeno-Kang (TSK) fuzzy systems, and kernel methods.</p><p><strong>Results and discussion: </strong>The proposed method not only promotes the transfer of valuable knowledge from the source domain to improve learning performance in the target domain when target domain training data are insufficient but also enhances application and generalization by incorporating multiple classic base models. 
The experimental results demonstrate the effectiveness of the proposed method in MI-EEG signal recognition.</p>","PeriodicalId":12462,"journal":{"name":"Frontiers in Neuroinformatics","volume":"19 ","pages":"1559335"},"PeriodicalIF":2.5,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12014663/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143977845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}