Pub Date: 2025-12-17 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1649440
Marco Ganzetti, Paola Valsasina, Frederik Barkhof, Maria A Rocca, Massimo Filippi, Ferran Prados, Licinio Craveiro
Background: Spinal cord atrophy is a key biomarker for tracking disease progression in neurological disorders, including multiple sclerosis, amyotrophic lateral sclerosis, and spinal cord injury. Recent MRI advancements have improved atrophy detection, particularly in the cervical region, facilitating longitudinal studies. However, validating atrophy quantification algorithms remains challenging due to limited ground truth data.
Objective: This study introduces SynSpine, a workflow for generating synthetic spinal cord MRI data (i.e., digital phantoms) with controlled levels of artificial atrophy. These phantoms support the development and preliminary validation of spinal cord imaging pipelines designed to measure degeneration over time.
Methods: The workflow consists of two phases: (1) generating synthetic MR images by isolating, extracting, and scaling the spinal cord to simulate atrophy on the PAM50 template; (2) performing non-rigid registration to align the synthetic images with the subject's native space, ensuring accurate anatomical correspondence. A proof-of-concept application using the Active Surface and Reg methods implemented in Jim demonstrated the workflow's effectiveness in detecting various levels of simulated atrophy under different noise conditions.
Results: SynSpine successfully generates synthetic spinal cord images with varying atrophy levels. Non-rigid registration did not significantly affect atrophy measurements. Atrophy estimation errors, estimated using Active Surface and Reg methods, varied with both simulated atrophy magnitude and noise level, exhibiting region-dependent differences. Increased noise led to higher measurement errors.
Conclusion: This work presents a novel and modular framework for simulating spinal cord atrophy data using digital phantoms, offering a controlled setting for testing spinal cord analysis pipelines. As the simulated atrophy may over-simplify in vivo conditions, future research will focus on enhancing the realism of the synthetic dataset by simulating additional pathologies, thus improving its application for evaluating spinal cord atrophy in clinical and research contexts.
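The scaling idea in phase (1) can be illustrated with a toy example. The sketch below is not the authors' PAM50-based pipeline; it is a minimal NumPy/SciPy illustration, under stated assumptions, that shrinks a synthetic disc mask (standing in for a cord cross-section) by a target area percentage using `scipy.ndimage.zoom`, with the function name and grid size being hypothetical choices:

```python
import numpy as np
from scipy.ndimage import zoom

def simulate_atrophy(mask_slice, atrophy_pct):
    """Shrink a binary cross-section by ~atrophy_pct percent of its area,
    re-embedding the scaled mask at the centre of the original grid.
    (Illustrative only; not the SynSpine implementation.)"""
    # Area scales with the square of the linear scaling factor.
    linear = np.sqrt(1.0 - atrophy_pct / 100.0)
    shrunk = zoom(mask_slice.astype(float), linear, order=1) > 0.5
    out = np.zeros_like(mask_slice)
    r0 = (out.shape[0] - shrunk.shape[0]) // 2
    c0 = (out.shape[1] - shrunk.shape[1]) // 2
    out[r0:r0 + shrunk.shape[0], c0:c0 + shrunk.shape[1]] = shrunk
    return out

# Toy stand-in for a cord cross-section: a filled disc on a 64x64 grid.
yy, xx = np.mgrid[:64, :64]
cord = ((yy - 32) ** 2 + (xx - 32) ** 2) <= 15 ** 2
atrophied = simulate_atrophy(cord, atrophy_pct=10.0)
area_loss = 100.0 * (1.0 - atrophied.sum() / cord.sum())
```

On this toy disc, the measured pixel-area loss lands close to the requested 10%, up to interpolation and discretization effects at the mask boundary.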
Title: SynSpine: an automated workflow for the generation of longitudinal spinal cord synthetic MRI data. Frontiers in Neuroinformatics, 19:1649440.
Pub Date: 2025-12-11 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1668395
Mateusz Dorochowicz, Arkadiusz Kacała, Michał Puła, Adrian Korbecki, Aleksandra Kosikowska, Aleksandra Tołkacz, Anna Zimny, Maciej Guziński
Background: Timely and accurate assessment of acute ischemic stroke is crucial for determining eligibility for mechanical thrombectomy. The Alberta Stroke Program Early CT Score (ASPECTS) is a widely used tool for evaluating early ischemic changes on non-contrast CT (NCCT), but its interpretation is subject to interobserver variability. Brainomix e-ASPECTS is an automated software designed to standardize and expedite this assessment. We aimed to evaluate the clinical utility and diagnostic performance of the Brainomix e-ASPECTS software in an unselected, real-world cohort of patients undergoing NCCT for suspected acute ischemic stroke.
Methods: We retrospectively analyzed 1,029 NCCT studies from 954 patients between March 2020 and December 2024. e-ASPECTS scores were compared to radiologist-assigned ASPECTS, which served as the reference standard. Diagnostic accuracy, sensitivity, specificity, and correlation between scoring methods were assessed.
Results: There was a strong correlation between e-ASPECTS and radiologist ASPECTS (ρ = 0.953, p < 0.001). For detecting acute ischemia, sensitivity was 95.8% (95% CI, 93.6-97.3%), specificity 96.9% (95% CI, 94.7-98.2%), and overall accuracy 96.3% (95% CI, 95.1-97.5%). The positive predictive value was 97.2% (95% CI, 95.3-98.4%), and the negative predictive value was 95.3% (95% CI, 92.8-96.9%). Score concordance was high, with exact matches in 92.3% of cases and a ≤ 1-point difference in 97.7%. Misclassification for thrombectomy eligibility (ASPECTS < 6) occurred in four cases (0.4%). The software achieved a processing success rate of 91.9%.
Conclusion: The e-ASPECTS software demonstrates high diagnostic accuracy and strong agreement with expert radiological assessment, supporting its role as a valuable decision-support tool in acute stroke imaging. However, it should complement, not replace, expert interpretation, particularly in patients with low ASPECTS scores, where treatment decisions are most sensitive.
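The diagnostic metrics reported above all derive from a 2x2 confusion matrix. As a reminder of the formulas, here is a minimal sketch; the counts are hypothetical and chosen only to illustrate the arithmetic, not taken from the study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 confusion-matrix metrics, returned as fractions in [0, 1]."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for illustration only.
m = diagnostic_metrics(tp=460, fp=13, tn=410, fn=20)
```

With these made-up counts the sensitivity is 460/480 ≈ 0.958 and the specificity 410/423 ≈ 0.969, showing how such percentages arise from raw true/false positive and negative counts.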
Title: Assessing the eligibility of Brainomix e-ASPECTS for acute stroke imaging. Frontiers in Neuroinformatics, 19:1668395.
Pub Date: 2025-12-05 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1679664
Karol Chlasta, Piotr Struzik, Grzegorz M Wójcik
Dementia poses a major challenge to individuals and public health systems. Detecting cognitive decline through spontaneous speech offers a promising, non-invasive avenue for diagnosing mild cognitive impairment (MCI) and dementia, enabling timely intervention and improved outcomes. This study describes our submission to the PROCESS Signal Processing Grand Challenge (ICASSP 2025), which tasked participants with predicting cognitive decline from speech samples. Our method combines eGeMAPS features from openSMILE, HuBERT (a self-supervised speech representation model), and GPT-4o, OpenAI's state-of-the-art large language model. These are integrated with custom LSTM and ResMLP neural networks and supported by Scikit-learn regressors/classifiers for both cognitive score regression and dementia classification. Our LightGBM-based regression model achieved an RMSE of 2.7775, placing us 10th out of 80 teams globally and surpassing the RoBERTa baseline by 7.5%. For the three-class classification task (Dementia/MCI/Control), our LSTM model obtained an F1-score of 0.5521, ranking 20th of 106 and marginally outperforming the best baseline. We trained models on speech data from 157 study participants, with independent evaluation performed on a separate test set of 40 individuals. We found that integrating large language models with self-supervised speech representations enhances the detection of cognitive decline. The proposed approach offers a scalable, data-driven method for early cognitive screening and may support emerging applications in neuropsychological informatics.
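The regression metric used in the challenge, RMSE, is straightforward to compute. The toy scores below are invented purely to show the formula; the study's actual evaluation used a held-out set of 40 individuals:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between predicted and true cognitive scores."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Toy cognitive scores, for illustration only.
truth = [24.0, 27.0, 30.0, 18.0]
pred = [25.0, 26.0, 28.0, 21.0]
err = rmse(truth, pred)
```

Lower values indicate predictions closer to the reference scores; the paper's 2.7775 is on this same scale as the target cognitive scores.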
Title: Enhancing dementia and cognitive decline detection with large language models and speech representation learning. Frontiers in Neuroinformatics, 19:1679664.
Pub Date: 2025-11-20 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1706099
Jacob Kang, Hunseok Kang, Jong-Hyeon Seo
Alzheimer's disease (AD) and frontotemporal dementia (FTD) are major neurodegenerative disorders with characteristic EEG alterations. While most prior studies have focused on eyes-closed (EC) EEG, where stable alpha rhythms support relatively high classification performance, eyes-open (EO) EEG has proven particularly challenging for AD, as low-frequency instability obscures the typical spectral alterations. In contrast, FTD often remains more discriminable under EO conditions, reflecting distinct neurophysiological dynamics between the two disorders. To address this challenge, we propose a CNN-based framework that applies Dynamic Mode Decomposition (DMD) to segment EO EEG into shorter temporal windows and employs a 3D CNN to capture spatio-temporal-spectral representations. This approach outperformed not only the conventional short-epoch spectral ML pipeline but also the same CNN architecture trained on FFT-based features, with particularly pronounced improvements observed in AD classification. Excluding delta yielded small gains in AD-involving contrasts, whereas FTD/CN was unchanged or slightly better with delta retained, suggesting that delta is more perturbative in AD under EO conditions.
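For readers unfamiliar with DMD, the standard "exact DMD" algorithm fits a linear map between successive snapshots of a multichannel signal and eigendecomposes it. The sketch below is the textbook SVD-based procedure on toy oscillatory data, not the paper's specific EEG windowing scheme:

```python
import numpy as np

def dmd_modes(X, r):
    """Exact DMD: fit a linear map X2 ~= A @ X1 between successive
    snapshot matrices and return the r leading eigenvalues and modes of A."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Project A onto the leading r-dimensional subspace.
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W
    return eigvals, modes

# Toy data: 8 "channels" of mixed sinusoids over 100 samples.
t = np.linspace(0, 2 * np.pi, 100)
X = np.vstack([np.sin(k * t) + 0.5 * np.cos(2 * k * t) for k in range(1, 9)])
eigvals, modes = dmd_modes(X, r=4)
```

Each eigenvalue encodes the frequency and growth/decay rate of one mode; segmenting a recording and examining how these modes evolve is one way to obtain the short temporal windows described above.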
Title: CNN-based framework for Alzheimer's disease detection from EEG via dynamic mode decomposition. Frontiers in Neuroinformatics, 19:1706099.
Pub Date: 2025-11-19 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1679196
J Revathy, Karthiga M
Introduction: Autism Spectrum Disorder (ASD) diagnosis remains complex due to limited access to large-scale multimodal datasets and privacy concerns surrounding clinical data. Traditional methods rely heavily on resource-intensive clinical assessments and are constrained by unimodal or non-adaptive learning models. To address these limitations, this study introduces AutismSynthGen, a privacy-preserving framework for synthesizing multimodal ASD data and enhancing prediction accuracy.
Materials and methods: The proposed system integrates a Multimodal Autism Data Synthesis Network (MADSN), which employs transformer-based encoders and cross-modal attention within a conditional GAN to generate synthetic data across structural MRI, EEG, behavioral vectors, and severity scores. Differential privacy is enforced via DP-SGD (ε ≤ 1.0). A complementary Adaptive Multimodal Ensemble Learning (AMEL) module, consisting of five heterogeneous experts and a gating network, is trained on both real and synthetic data. Evaluation is conducted on the ABIDE, NDAR, and SSC datasets using metrics such as AUC, F1 score, MMD, KS statistic, and BLEU.
Results: Synthetic augmentation improved model performance, yielding validation AUC gains of ≥ 0.04. AMEL achieved an AUC of 0.98 and an F1 score of 0.99 on real data and approached near-perfect internal performance (AUC ≈ 1.00, F1 ≈ 1.00) when synthetic data were included. Distributional metrics (MMD = 0.04; KS = 0.03) and text similarity (BLEU = 0.70) demonstrated high fidelity between the real and synthetic samples. Ablation studies confirmed the importance of cross-modal attention and entropy-regularized expert gating.
Discussion: AutismSynthGen offers a scalable, privacy-compliant solution for augmenting limited multimodal datasets and enhancing ASD prediction. Future directions include semi-supervised learning, explainable AI for clinical trust, and deployment in federated environments to broaden accessibility while maintaining privacy.
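One of the fidelity metrics reported above, maximum mean discrepancy (MMD), compares the distributions of real and synthetic feature vectors. The sketch below computes the standard biased RBF-kernel MMD^2 on synthetic Gaussian data; the kernel bandwidth and sample sizes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased squared maximum mean discrepancy with an RBF kernel.
    Near zero when X and Y are drawn from the same distribution."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 5))
synth = rng.normal(0.0, 1.0, size=(200, 5))    # same distribution
shifted = rng.normal(2.0, 1.0, size=(200, 5))  # different distribution

close = mmd_rbf(real, synth)
far = mmd_rbf(real, shifted)
```

A small MMD (such as the 0.04 reported above) indicates the synthetic samples are distributionally close to the real ones, while a shifted distribution produces a clearly larger value.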
Title: Cross-modal privacy-preserving synthesis and mixture-of-experts ensemble for robust ASD prediction. Frontiers in Neuroinformatics, 19:1679196.
Pub Date: 2025-11-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1724386
Rositsa Paunova, Alice Geminiani
Title: Editorial: Women pioneering neuroinformatics and neuroscience-related machine learning, 2024. Frontiers in Neuroinformatics, 19:1724386.
Pub Date: 2025-10-30 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1700481
Erik D Fagerholm, Hirokazu Tanaka, Milan Brázdil
Introduction: Neural activity can be described in terms of probability distributions that are continuously evolving in time. Characterizing how these distributions are reshaped as they pass between cortical regions is key to understanding how information is organized in the brain.
Methods: We developed a mathematical framework that represents these transformations as information-theoretic gradient flows - dynamical trajectories that follow the steepest ascent of entropy and expectation. The relative strengths of these two functionals provide interpretable measures of how neural probability distributions change as they propagate within neural systems. Following construct validation in silico, we applied the framework to publicly available continuous ΔF/F two-photon calcium recordings from the mouse visual cortex.
Results: The analysis revealed consistent bi-directional transformations between the rostrolateral area and the primary visual cortex across all five mice. These findings demonstrate that the relative contributions of entropy and expectation can be disambiguated and used to describe information flow within cortical networks.
Discussion: We introduce a framework for decomposing neural signal transformations into interpretable information-theoretic components. Beyond the mouse visual cortex, the method can be applied to diverse neuroimaging modalities and scales, thereby providing a generalizable approach for quantifying how information geometry shapes cortical communication.
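A simple discrete analogue conveys the flavor of the framework. The sketch below is not the authors' formulation; it Euler-integrates steepest ascent of a weighted sum of entropy and an expectation for a probability vector on the simplex, with the functional weights, observable, and step size all being illustrative assumptions:

```python
import numpy as np

def gradient_flow_step(p, f, alpha, beta, dt=0.01):
    """One Euler step of steepest ascent of alpha*H(p) + beta*E_p[f],
    with the gradient projected so that sum(p) stays fixed at 1."""
    g = alpha * (-(np.log(p) + 1.0)) + beta * f   # grad H is -(log p + 1); grad E is f
    g -= g.mean()                                  # tangent to the simplex
    p = np.clip(p + dt * g, 1e-12, None)
    return p / p.sum()

f = np.array([0.0, 1.0, 2.0, 3.0])   # observable whose expectation is pushed up
p = np.array([0.4, 0.3, 0.2, 0.1])

for _ in range(200):
    p = gradient_flow_step(p, f, alpha=1.0, beta=0.5)
```

Pure entropy ascent (beta = 0) would flatten p toward uniform; the expectation term skews it toward high-f states. Comparing the relative strength of the two terms needed to reproduce an observed transformation is, loosely, the kind of decomposition described above.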
Title: Information-theoretic gradient flows in mouse visual cortex. Frontiers in Neuroinformatics, 19:1700481.
Pub Date: 2025-10-24 | eCollection Date: 2025-01-01 | DOI: 10.3389/fninf.2025.1647194
Chuanbo Hu, Jacob Thrasher, Wenqi Li, Mindi Ruan, Xiangxu Yu, Lynn K Paul, Shuo Wang, Xin Li
Introduction: Diagnosing Autism Spectrum Disorder (ASD) in verbally fluent individuals based on speech patterns in examiner-patient dialogues is challenging because speech-related symptoms are often subtle and heterogeneous. This study aimed to identify distinctive speech characteristics associated with ASD by analyzing recorded dialogues from the Autism Diagnostic Observation Schedule (ADOS-2).
Methods: We analyzed examiner-participant dialogues from ADOS-2 Module 4 and extracted 40 speech-related features categorized into intonation, volume, rate, pauses, spectral characteristics, chroma, and duration. These acoustic and prosodic features were processed using advanced speech analysis tools and used to train machine learning models to classify ASD participants into two subgroups: those with and without A2-defined speech pattern abnormalities. Model performance was evaluated using cross-validation and standard classification metrics.
Results: Using all 40 features, the support vector machine (SVM) achieved an F1-score of 84.49%. After removing Mel-Frequency Cepstral Coefficients (MFCC) and Chroma features to focus on prosodic, rhythmic, energy, and selected spectral features aligned with ADOS-2 A2 scores, performance improved, achieving 85.77% accuracy and an F1-score of 86.27%. Spectral spread and spectral centroid emerged as key features in the reduced set, while MFCC 6 and Chroma 4 also contributed significantly in the full feature set.
Discussion: These findings demonstrate that a compact, diverse set of non-MFCC and selected spectral features effectively characterizes speech abnormalities in verbally fluent individuals with ASD. The approach highlights the potential of context-aware, data-driven models to complement clinical assessments and enhance understanding of speech-related manifestations in ASD.
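Two of the key features above, spectral centroid and spectral spread, have standard definitions: the magnitude-weighted mean frequency of a frame and the weighted standard deviation around it. The NumPy sketch below computes them for a pure tone (the sampling rate and frame length are illustrative; the tone frequency is chosen to fall on an exact FFT bin to avoid leakage):

```python
import numpy as np

def spectral_centroid_spread(frame, sr):
    """Spectral centroid (magnitude-weighted mean frequency, Hz) and
    spread (weighted standard deviation around the centroid, Hz)."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = (freqs * mag).sum() / mag.sum()
    spread = np.sqrt((((freqs - centroid) ** 2) * mag).sum() / mag.sum())
    return centroid, spread

sr = 16000
t = np.arange(2048) / sr
tone = np.sin(2 * np.pi * 500.0 * t)  # 500 Hz = exactly bin 64 of a 2048-point FFT
centroid, spread = spectral_centroid_spread(tone, sr)
```

For the pure tone the centroid sits at 500 Hz with near-zero spread; speech frames concentrate energy differently across the spectrum, which is what makes these features discriminative.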
Title: Speech pattern disorders in verbally fluent individuals with autism spectrum disorder: a machine learning analysis. Frontiers in Neuroinformatics, 19:1647194.
Pub Date : 2025-10-01eCollection Date: 2025-01-01DOI: 10.3389/fninf.2025.1655003
D Prabha Devi, C Palanisamy
Introduction: Heart disease is one of the leading causes of mortality worldwide, and early detection is crucial for effective treatment. Phonocardiogram (PCG) signals have shown potential in diagnosing cardiovascular conditions. However, accurate classification of PCG signals remains challenging due to high-dimensional features, leading to misclassification and reduced performance in conventional systems.
Methods: To address these challenges, we propose a Linear Vectored Particle Swarm Optimization (LV-PSO) integrated with a Fuzzy Inference Xception Convolutional Neural Network (XCNN) for early heart risk prediction. PCG signals are analyzed to extract variations such as delta, theta, diastolic, and systolic differences. A Support Scalar Cardiac Impact Rate (S2CIR) is employed to capture disease-specific scalar variations and behavioral impacts. LV-PSO is used to reduce feature dimensionality, and the optimized features are subsequently trained using the Fuzzy Inference XCNN model to classify disease types.
Results: Experimental evaluation demonstrates that the proposed system achieves superior predictive performance compared to existing models. The method attained a precision of 95.6%, recall of 93.1%, and an overall prediction accuracy of 95.8% across multiple disease categories.
Discussion: The integration of LV-PSO with Fuzzy Inference XCNN enhances feature selection and classification accuracy, significantly improving the diagnostic capabilities of PCG-based systems. These results highlight the potential of the proposed framework as a reliable tool for early heart disease prediction and clinical decision support.
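The abstract describes LV-PSO as the dimensionality-reduction step but does not give its update rules. As a point of reference, feature selection with a generic binary PSO can be sketched as follows; this is not the authors' LV-PSO variant, and `toy_score`, the swarm size, and the inertia/acceleration coefficients are all illustrative assumptions:

```python
import math
import random

def pso_feature_select(score_fn, n_features, n_particles=12, n_iters=40, seed=0):
    """Minimal binary PSO: each particle is a 0/1 mask over the features;
    velocities are squashed through a sigmoid to give bit probabilities."""
    rng = random.Random(seed)
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    positions = [[rng.randint(0, 1) for _ in range(n_features)]
                 for _ in range(n_particles)]
    velocities = [[0.0] * n_features for _ in range(n_particles)]
    pbest = [p[:] for p in positions]
    pbest_score = [score_fn(p) for p in positions]
    g = max(range(n_particles), key=lambda i: pbest_score[i])
    gbest, gbest_score = pbest[g][:], pbest_score[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n_features):
                r1, r2 = rng.random(), rng.random()
                velocities[i][d] = (w * velocities[i][d]
                                    + c1 * r1 * (pbest[i][d] - positions[i][d])
                                    + c2 * r2 * (gbest[d] - positions[i][d]))
                positions[i][d] = 1 if rng.random() < sigmoid(velocities[i][d]) else 0
            s = score_fn(positions[i])
            if s > pbest_score[i]:
                pbest[i], pbest_score[i] = positions[i][:], s
                if s > gbest_score:
                    gbest, gbest_score = positions[i][:], s
    return gbest, gbest_score

# Toy objective: features 0-4 are "informative" (+1 each), the rest add
# cost (-1 each), so the optimum keeps exactly the first five features.
# In a real pipeline score_fn would be a cross-validated classifier score.
def toy_score(mask):
    return sum(mask[:5]) - sum(mask[5:])

best_mask, best_score = pso_feature_select(toy_score, n_features=12)
```

In the paper's setting the classifier being scored would be the Fuzzy Inference XCNN; here a closed-form toy objective stands in so the sketch runs standalone.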
{"title":"Early heart disease prediction using LV-PSO and Fuzzy Inference Xception Convolution Neural Network on phonocardiogram signals.","authors":"D Prabha Devi, C Palanisamy","doi":"10.3389/fninf.2025.1655003","DOIUrl":"10.3389/fninf.2025.1655003","url":null,"abstract":"<p><strong>Introduction: </strong>Heart disease is one of the leading causes of mortality worldwide, and early detection is crucial for effective treatment. Phonocardiogram (PCG) signals have shown potential in diagnosing cardiovascular conditions. However, accurate classification of PCG signals remains challenging due to high dimensional features, leading to misclassification and reduced performance in conventional systems.</p><p><strong>Methods: </strong>To address these challenges, we propose a Linear Vectored Particle Swarm Optimization (LV-PSO) integrated with a Fuzzy Inference Xception Convolutional Neural Network (XCNN) for early heart risk prediction. PC G signals are analyzed to extract variations such as delta, theta, diastolic, and systolic differences. A Support Scalar Cardiac Impact Rate (S2CIR) is employed to capture disease specific scalar variations and behavioral impacts. LV-PSO is used to reduce feature dimensionality, and the optimized features are subsequently trained using the Fuzzy Inference XCNN model to classify disease types.</p><p><strong>Results: </strong>Experimental evaluation demonstrates that the proposed system achieves superior predictive performance compared to existing models. 
The method attained a precision of 95.6%, recall of 93.1%, and an overall prediction accuracy of 95.8% across multiple disease categories.</p><p><strong>Discussion: </strong>The integration of LV-PSO with Fuzzy Inference XCNN enhances feature selection and classification accuracy, significantly improving the diagnostic capabilities of PCG-based systems. These results highlight the potential of the proposed framework as a reliable tool for early heart disease prediction and clinical decision support.</p>","PeriodicalId":12462,"journal":{"name":"Frontiers in Neuroinformatics","volume":"19 ","pages":"1655003"},"PeriodicalIF":2.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12521842/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145307449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-09-29eCollection Date: 2025-01-01DOI: 10.3389/fninf.2025.1630133
Paul Nazac, Shengyan Xu, Victor Breton, David Boulet, Lydia Danglot
In recent years, advances in microscopy and the development of novel fluorescent probes have significantly improved neuronal imaging. Many neuropsychiatric disorders are characterized by alterations in neuronal arborization, neuronal loss, as seen in Parkinson's disease, or synaptic loss, as in Alzheimer's disease. Neurodevelopmental disorders can also impact dendritic spine morphogenesis, as observed in autism spectrum disorders and schizophrenia. In this review, we provide an overview of the various labeling and microscopy techniques available to visualize neuronal structure, including dendritic spines and synapses. Particular attention is given to available fluorescent probes, recent technological advances in super-resolution microscopy (SIM, STED, STORM, MINFLUX), and segmentation methods. Aimed at biologists, this review presents both classical segmentation approaches and recent tools based on deep learning methods, with the goal of remaining accessible to readers without programming expertise.
{"title":"Super-resolution microscopy and deep learning methods: what can they bring to neuroscience: from neuron to 3D spine segmentation.","authors":"Paul Nazac, Shengyan Xu, Victor Breton, David Boulet, Lydia Danglot","doi":"10.3389/fninf.2025.1630133","DOIUrl":"10.3389/fninf.2025.1630133","url":null,"abstract":"<p><p>In recent years, advances in microscopy and the development of novel fluorescent probes have significantly improved neuronal imaging. Many neuropsychiatric disorders are characterized by alterations in neuronal arborization, neuronal loss-as seen in Parkinson's disease-or synaptic loss, as in Alzheimer's disease. Neurodevelopmental disorders can also impact dendritic spine morphogenesis, as observed in autism spectrum disorders and schizophrenia. In this review, we provide an overview of the various labeling and microscopy techniques available to visualize neuronal structure, including dendritic spines and synapses. Particular attention is given to available fluorescent probes, recent technological advances in super-resolution microscopy (SIM, STED, STORM, MINFLUX), and segmentation methods. Aimed at biologists, this review presents both classical segmentation approaches and recent tools based on deep learning methods, with the goal of remaining accessible to readers without programming expertise.</p>","PeriodicalId":12462,"journal":{"name":"Frontiers in Neuroinformatics","volume":"19 ","pages":"1630133"},"PeriodicalIF":2.5,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12515862/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145291632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}