Pub Date: 2023-04-26 | eCollection Date: 2023-01-01 | DOI: 10.3389/fninf.2023.1173623
Scott C Neu, Karen L Crawford, Arthur W Toga
The Image and Data Archive (IDA) is a secure online resource for archiving, exploring, and sharing neuroscience data, run by the Laboratory of Neuro Imaging (LONI). The laboratory first began managing neuroimaging data for multi-center research studies in the late 1990s and has since become a nexus for many multi-site collaborations. The IDA provides management and informatics tools and resources for de-identifying, integrating, searching, visualizing, and sharing a diverse range of neuroscience data; study investigators maintain complete control over data stored in the IDA while benefiting from a robust, reliable infrastructure that protects and preserves research data to maximize the investment in data collection.
"The image and data archive at the laboratory of neuro imaging." Frontiers in Neuroinformatics 17:1173623. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10169596/pdf/
Pub Date: 2023-04-25 | eCollection Date: 2023-01-01 | DOI: 10.3389/fninf.2023.1082111
Zac Bowen, Gudjon Magnusson, Madeline Diep, Ujjwal Ayyangar, Aleksandr Smirnov, Patrick O Kanold, Wolfgang Losert
Multiphoton calcium imaging is one of the most powerful tools in modern neuroscience. However, multiphoton data require significant pre-processing of images and post-processing of extracted signals. As a result, many algorithms and pipelines have been developed for the analysis of multiphoton data, particularly two-photon imaging data. Most current studies use one of several published, publicly available algorithms and pipelines, and add customized upstream and downstream analysis elements to fit the needs of individual researchers. The vast differences in algorithm choices, parameter settings, pipeline composition, and data sources combine to make collaboration difficult and raise questions about the reproducibility and robustness of experimental results. We present our solution, NeuroWRAP (www.neurowrap.org), a tool that wraps multiple published algorithms together and enables integration of custom algorithms. It enables the development of collaborative, shareable custom workflows and reproducible data analysis for multiphoton calcium imaging data, making collaboration between researchers easy. NeuroWRAP implements an approach to evaluate the sensitivity and robustness of the configured pipelines. When this sensitivity analysis is applied to a crucial step of image analysis, cell segmentation, we find a substantial difference between two popular workflows, CaImAn and Suite2p. NeuroWRAP harnesses this difference by introducing consensus analysis, utilizing the two workflows in conjunction to significantly increase the trustworthiness and robustness of cell segmentation results.
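The consensus idea described above can be sketched in a few lines. This is a minimal illustration, not NeuroWRAP's actual implementation: it assumes each workflow's segmentation result is a list of boolean ROI masks, and the intersection-over-union matching threshold is an arbitrary choice for the example.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean ROI masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def consensus_rois(rois_a, rois_b, iou_threshold=0.5):
    """Keep ROIs from workflow A that match some ROI from workflow B."""
    return [a for a in rois_a
            if any(iou(a, b) >= iou_threshold for b in rois_b)]

# Toy 8x8 field of view with two candidate cells from workflow A.
fov = np.zeros((8, 8), dtype=bool)
cell1 = fov.copy(); cell1[1:4, 1:4] = True            # found by both workflows
cell2 = fov.copy(); cell2[5:7, 5:7] = True            # found only by A
cell1_b = fov.copy(); cell1_b[1:4, 2:5] = True        # B's slightly shifted cell1

agreed = consensus_rois([cell1, cell2], [cell1_b])
print(len(agreed))  # 1 -> only the cell both workflows segmented survives
```

Only cells that both workflows detect survive, which is the sense in which consensus increases trustworthiness at the cost of recall.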
"NeuroWRAP: integrating, validating, and sharing neurodata analysis workflows." Frontiers in Neuroinformatics 17:1082111. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10166805/pdf/
Pub Date: 2023-04-05 | eCollection Date: 2023-01-01 | DOI: 10.3389/fninf.2023.1104508
Jie Lisa Ji, Jure Demšar, Clara Fonteneau, Zailyn Tamayo, Lining Pan, Aleksij Kraljič, Andraž Matkovič, Nina Purg, Markus Helmer, Shaun Warrington, Anderson Winkler, Valerio Zerbi, Timothy S Coalson, Matthew F Glasser, Michael P Harms, Stamatios N Sotiropoulos, John D Murray, Alan Anticevic, Grega Repovš
Introduction: Neuroimaging technology has experienced explosive growth and transformed the study of neural mechanisms across health and disease. However, given the diversity of sophisticated tools for handling neuroimaging data, the field faces challenges in method integration, particularly across multiple modalities and species. Specifically, researchers often have to rely on siloed approaches that limit reproducibility, with idiosyncratic data organization and limited software interoperability.
Methods: To address these challenges, we have developed Quantitative Neuroimaging Environment & Toolbox (QuNex), a platform for consistent end-to-end processing and analytics. QuNex provides several novel functionalities for neuroimaging analyses, including a "turnkey" command for the reproducible deployment of custom workflows, from onboarding raw data to generating analytic features.
Results: The platform enables interoperable integration of multi-modal, community-developed neuroimaging software through an extension framework with a software development kit (SDK) for seamless integration of community tools. Critically, it supports high-throughput, parallel processing in high-performance compute environments, either locally or in the cloud. Notably, QuNex has successfully processed over 10,000 scans across neuroimaging consortia, including multiple clinical datasets. Moreover, QuNex enables integration of human and non-human workflows via a cohesive translational platform.
Discussion: Collectively, this effort stands to significantly impact neuroimaging method integration across acquisition approaches, pipelines, datasets, computational environments, and species. Building on this platform will enable more rapid, scalable, and reproducible impact of neuroimaging technology across health and disease.
"QuNex-An integrative platform for reproducible neuroimaging analytics." Frontiers in Neuroinformatics 17:1104508. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10113546/pdf/
Pub Date: 2023-03-24 | eCollection Date: 2023-01-01 | DOI: 10.3389/fninf.2023.1081160
Sahaj Anilbhai Patel, Abidin Yildirim
This paper presents a time-efficient preprocessing framework that converts any given 1D physiological signal recording into a 2D image representation for training image-based deep learning models. The non-stationary signal is rasterized into the 2D image using Bresenham's line algorithm with time complexity O(n). The robustness of the proposed approach is evaluated on two publicly available datasets. This study classified three different neural spikes (multi-class) and EEG epileptic seizure vs. non-seizure (binary class) based on shape, using a modified 2D Convolutional Neural Network (2D CNN). The multi-class dataset consists of artificially simulated neural recordings with different signal-to-noise ratios (SNR). The 2D CNN architecture performed strongly at every SNR level (SNR/accuracy): 0.5/99.69, 0.75/99.69, 1.0/99.49, 1.25/98.85, 1.5/97.43, 1.75/95.20, and 2.0/91.98. On the binary-class dataset it achieved 97.52% accuracy, outperforming several other proposed algorithms. This approach could likewise be employed on other biomedical signals such as electrocardiography (EKG) and electromyography (EMG).
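The signal-to-image conversion can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes amplitudes are mapped linearly onto pixel rows of a fixed-height image, and consecutive samples are connected with the standard integer Bresenham line algorithm.

```python
import numpy as np

def bresenham_line(img, x0, y0, x1, y1):
    """Light the pixels along the segment (x0,y0)-(x1,y1), all quadrants."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        img[y0, x0] = 1
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def signal_to_image(signal, height=64):
    """Rasterize a 1D signal into a (height x n_samples) binary image."""
    sig = np.asarray(signal, dtype=float)
    rng = (sig.max() - sig.min()) or 1.0       # avoid /0 for flat signals
    norm = (sig - sig.min()) / rng
    rows = ((1.0 - norm) * (height - 1)).round().astype(int)  # row 0 = top
    img = np.zeros((height, len(sig)), dtype=np.uint8)
    for x in range(len(sig) - 1):
        bresenham_line(img, x, rows[x], x + 1, rows[x + 1])
    return img

img = signal_to_image(np.sin(np.linspace(0, 4 * np.pi, 128)))
print(img.shape)  # (64, 128)
```

Each sample pair costs work proportional to the pixel distance covered, so the whole conversion stays linear in the rasterized path length, consistent with the O(n) claim.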
"Non-stationary neural signal to image conversion framework for image-based deep learning algorithms." Frontiers in Neuroinformatics 17:1081160. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10079945/pdf/
Pub Date: 2023-03-24 | eCollection Date: 2023-01-01 | DOI: 10.3389/fninf.2023.1150157
Anders J Asp, Yaswanth Chintaluru, Sydney Hillan, J Luis Lujan
"Targeted neuroplasticity in spatiotemporally patterned invasive neuromodulation therapies for improving clinical outcomes." Frontiers in Neuroinformatics 17:1150157. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10080034/pdf/
Pub Date: 2023-03-23 | eCollection Date: 2023-01-01 | DOI: 10.3389/fninf.2023.1060511
Antonio Ricciardi, Francesco Grussu, Baris Kanber, Ferran Prados, Marios C Yiannakas, Bhavana S Solanky, Frank Riemer, Xavier Golay, Wallace Brownlee, Olga Ciccarelli, Daniel C Alexander, Claudia A M Gandini Wheeler-Kingshott
Introduction: Conventional MRI is routinely used for the characterization of pathological changes in multiple sclerosis (MS) but, owing to its lack of specificity, is unable to provide accurate prognoses, explain disease heterogeneity, or reconcile the gap between observed clinical symptoms and radiological evidence. Quantitative MRI provides measures of physiological abnormalities, otherwise invisible to conventional MRI, that correlate with MS severity. Analyzing quantitative MRI measures with machine learning techniques has been shown to improve the understanding of the underlying disease by better delineating its alteration patterns.
Methods: In this retrospective study, a cohort of healthy controls (HC) and MS patients with different subtypes, followed up 15 years from clinically isolated syndrome (CIS), was analyzed to produce a multi-modal set of quantitative MRI features encompassing relaxometry, microstructure, sodium ion concentration, and tissue volumetry. Random forest classifiers were used to train a model able to discriminate between HC, CIS, relapsing remitting (RR) and secondary progressive (SP) MS patients based on these features and, for each classification task, to identify the relative contribution of each MRI-derived tissue property to the classification task itself.
Results and discussion: Average classification accuracy scores of 99 and 95% were obtained when discriminating HC and CIS vs. SP, respectively; 82 and 83% for HC and CIS vs. RR; 76% for RR vs. SP; and 79% for HC vs. CIS. Different patterns of alterations were observed for each classification task, offering key insights into the pathophysiology of MS phenotypes: atrophy and relaxometry emerged particularly in the classification of HC and CIS vs. MS, relaxometry within lesions in RR vs. SP, and sodium ion concentration in HC vs. CIS, while microstructural alterations were involved across all tasks.
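The classification setup described here can be sketched with scikit-learn. The data below are synthetic stand-ins for two groups (the study's actual features are quantitative MRI measures), but the pattern is the same: a random forest discriminates the groups, and its per-feature importances report the relative contribution of each tissue property to the task.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
features = ["relaxometry", "microstructure", "sodium", "volumetry"]

# Synthetic stand-in: 40 "HC" and 40 "SP" subjects, 4 MRI-derived features;
# the "SP" group is shifted in relaxometry and volumetry only.
hc = rng.normal(0.0, 1.0, size=(40, 4))
sp = rng.normal(0.0, 1.0, size=(40, 4))
sp[:, [0, 3]] += 1.5
X = np.vstack([hc, sp])
y = np.array([0] * 40 + [1] * 40)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
print("cross-validated accuracy:", round(acc, 2))

# Relative contribution of each tissue property to this classification task.
clf.fit(X, y)
for name, imp in zip(features, clf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

With this construction the two shifted features dominate the importance ranking, mirroring how the study reads off which MRI-derived properties drive each HC/CIS/RR/SP discrimination.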
"Patterns of inflammation, microstructural alterations, and sodium accumulation define multiple sclerosis subtypes after 15 years from onset." Frontiers in Neuroinformatics 17:1060511. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10076673/pdf/
Pub Date: 2023-03-15 | eCollection Date: 2023-01-01 | DOI: 10.3389/fninf.2023.1123376
Charles A Ellis, Mohammad S E Sendi, Rongen Zhang, Darwin A Carbajal, May D Wang, Robyn L Miller, Vince D Calhoun
Introduction: Multimodal classification is increasingly common in electrophysiology studies. Many studies use deep learning classifiers with raw time-series data, which makes explainability difficult, and has resulted in relatively few studies applying explainability methods. This is concerning because explainability is vital to the development and implementation of clinical classifiers. As such, new multimodal explainability methods are needed.
Methods: In this study, we train a convolutional neural network for automated sleep stage classification with electroencephalogram (EEG), electrooculogram, and electromyogram data. We then present a global explainability approach that is uniquely adapted for electrophysiology analysis and compare it to an existing approach. We also present two local multimodal explainability approaches, the first of their kind. We look for subject-level differences in the local explanations that are obscured by global methods and, in a novel analysis, look for relationships between the explanations and clinical and demographic variables.
Results: We find a high level of agreement between methods. We find that EEG is globally the most important modality for most sleep stages and that subject-level differences in importance arise in local explanations that are not captured in global explanations. We further show that sex, followed by medication and age, had significant effects upon the patterns learned by the classifier.
Discussion: Our novel methods enhance explainability for the growing field of multimodal electrophysiology classification, provide avenues for the advancement of personalized medicine, yield unique insights into the effects of demographic and clinical variables upon classifiers, and help pave the way for the implementation of multimodal electrophysiology clinical classifiers.
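One simple way to obtain a global modality-importance estimate of the general flavor discussed here is by permutation: shuffle all of one modality's features across samples and measure the accuracy drop. This is an illustrative sketch with synthetic data and a plain logistic regression, not the authors' method or their CNN.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic multimodal features: columns 0-2 "EEG", 3-4 "EOG", 5-6 "EMG".
modalities = {"EEG": [0, 1, 2], "EOG": [3, 4], "EMG": [5, 6]}
X = rng.normal(size=(300, 7))
# The label depends strongly on the EEG columns and weakly on one EOG column.
y = (X[:, 0] + X[:, 1] + 0.3 * X[:, 3] + 0.1 * rng.normal(size=300) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
baseline = clf.score(X, y)

# Global modality importance: accuracy drop when one modality is shuffled
# jointly (same row permutation for all of its channels).
drops = {}
for name, cols in modalities.items():
    Xp = X.copy()
    perm = rng.permutation(X.shape[0])
    Xp[:, cols] = X[perm][:, cols]
    drops[name] = baseline - clf.score(Xp, y)
print(drops)
```

Shuffling the modality the model relies on produces the largest drop, which is exactly the kind of global ranking that local, per-subject explanations can then refine.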
"Novel methods for elucidating modality importance in multimodal electrophysiology classifiers." Frontiers in Neuroinformatics 17:1123376. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10050434/pdf/
Pub Date: 2023-03-09 | eCollection Date: 2023-01-01 | DOI: 10.3389/fninf.2023.934472
Franklin Alvarez, Daniel Kipping, Waldo Nogueira
Speech understanding in cochlear implant (CI) users presents large intersubject variability that may be related to different aspects of the peripheral auditory system, such as the electrode-nerve interface and neural health conditions. This variability makes it challenging to prove differences in performance between CI sound coding strategies in regular clinical studies; computational models, however, can help assess the speech performance of CI users in an environment where all these physiological aspects can be controlled. In this study, differences in performance between three variants of the HiRes Fidelity 120 (F120) sound coding strategy are studied with a computational model. The computational model consists of (i) a processing stage with the sound coding strategy, (ii) a three-dimensional electrode-nerve interface that accounts for auditory nerve fiber (ANF) degeneration, (iii) a population of phenomenological ANF models, and (iv) a feature extractor algorithm to obtain the internal representation (IR) of the neural activity. As the back-end, the simulation framework for auditory discrimination experiments (FADE) was chosen. Two experiments relevant to speech understanding were performed: one related to the spectral modulation threshold (SMT) and the other to the speech reception threshold (SRT). These experiments included three different neural health conditions (healthy ANFs, and moderate and severe ANF degeneration). The F120 was configured to use sequential stimulation (F120-S) and simultaneous stimulation with two (F120-P) or three (F120-T) simultaneously active channels. Simultaneous stimulation causes electric interactions that smear the spectrotemporal information transmitted to the ANFs, and it has been hypothesized to lead to even worse information transmission under poor neural health conditions.
In general, worse neural health conditions led to worse predicted performance; nevertheless, the detriment was small compared to clinical data. Results of the SRT experiments indicated that performance with simultaneous stimulation, especially F120-T, was more affected by neural degeneration than performance with sequential stimulation. Results of the SMT experiments showed no significant difference in performance. Although the proposed model in its current state is able to perform SMT and SRT experiments, it does not yet reliably predict real CI users' performance. Nevertheless, improvements related to the ANF model, feature extraction, and predictor algorithm are discussed.
"A computational model to simulate spectral modulation and speech perception experiments of cochlear implant users." Frontiers in Neuroinformatics 17:934472. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10061543/pdf/
Pub Date: 2023-03-09; eCollection Date: 2023-01-01; DOI: 10.3389/fninf.2023.1154080
Heidi Kleven, Ingrid Reiten, Camilla H Blixhavn, Ulrike Schlegel, Martin Øvsthus, Eszter A Papp, Maja A Puchades, Jan G Bjaalie, Trygve B Leergaard, Ingvild E Bjerke
Brain atlases are widely used in neuroscience as resources for conducting experimental studies and for integrating, analyzing, and reporting data from animal models. A variety of atlases are available, and it can be challenging to find the optimal atlas for a given purpose and to perform efficient atlas-based data analyses. Comparing findings reported using different atlases is also not trivial and represents a barrier to reproducible science. In this perspective article, we provide a guide to how mouse and rat brain atlases can be used for analyzing and reporting data in accordance with the FAIR principles, which advocate for data to be findable, accessible, interoperable, and reusable. We first introduce how atlases can be interpreted and used for navigating to brain locations, before discussing how they can be used for different analytic purposes, including spatial registration and data visualization. We provide guidance on how neuroscientists can compare data mapped to different atlases and ensure transparent reporting of findings. Finally, we summarize key considerations when choosing an atlas and give an outlook on the relevance of increased uptake of atlas-based tools and workflows for FAIR data sharing.
{"title":"A neuroscientist's guide to using murine brain atlases for efficient analysis and transparent reporting.","authors":"Heidi Kleven, Ingrid Reiten, Camilla H Blixhavn, Ulrike Schlegel, Martin Øvsthus, Eszter A Papp, Maja A Puchades, Jan G Bjaalie, Trygve B Leergaard, Ingvild E Bjerke","doi":"10.3389/fninf.2023.1154080","DOIUrl":"10.3389/fninf.2023.1154080","url":null,"abstract":"<p><p>Brain atlases are widely used in neuroscience as resources for conducting experimental studies, and for integrating, analyzing, and reporting data from animal models. A variety of atlases are available, and it may be challenging to find the optimal atlas for a given purpose and to perform efficient atlas-based data analyses. Comparing findings reported using different atlases is also not trivial, and represents a barrier to reproducible science. With this perspective article, we provide a guide to how mouse and rat brain atlases can be used for analyzing and reporting data in accordance with the FAIR principles that advocate for data to be findable, accessible, interoperable, and re-usable. We first introduce how atlases can be interpreted and used for navigating to brain locations, before discussing how they can be used for different analytic purposes, including spatial registration and data visualization. We provide guidance on how neuroscientists can compare data mapped to different atlases and ensure transparent reporting of findings. 
Finally, we summarize key considerations when choosing an atlas and give an outlook on the relevance of increased uptake of atlas-based tools and workflows for FAIR data sharing.</p>","PeriodicalId":12462,"journal":{"name":"Frontiers in Neuroinformatics","volume":"17 ","pages":"1154080"},"PeriodicalIF":2.5,"publicationDate":"2023-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10033636/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9546649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}