Pub Date: 2024-06-18 | eCollection Date: 2024-01-01 | DOI: 10.3389/fninf.2024.1384720
S M Shayez Karim, Md Shah Fahad, R S Rathore
Alzheimer's disease (AD) is a challenging neurodegenerative condition that necessitates early diagnosis and intervention. This research leverages machine learning (ML) and graph theory metrics derived from resting-state functional magnetic resonance imaging (rs-fMRI) data to predict AD. Using the Southwest University Adult Lifespan Dataset (SALD, age 21-76 years) and the Open Access Series of Imaging Studies (OASIS, age 64-95 years) dataset, together comprising 112 participants, various ML models were developed for AD prediction. The study identifies key features for a comprehensive understanding of brain network topology and functional connectivity in AD. Under 5-fold cross-validation, all models demonstrate substantial predictive capability (accuracy in the 82-92% range), with the support vector machine model performing best at 92% accuracy. The present study suggests that the top 13 regions, identified from the most discriminating features, have lost significant connections with the thalamus. Functional connection strengths consistently declined for the substantia nigra pars reticulata, substantia nigra pars compacta, and nucleus accumbens in AD subjects compared with healthy adults and aging individuals. These findings corroborate earlier studies employing various neuroimaging techniques. This research demonstrates the translational potential of a comprehensive approach integrating ML, graph theory, and rs-fMRI analysis for AD prediction, offering potential biomarkers for more accurate diagnostics and early prediction of AD.
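The evaluation style described above — graph-theory features from rs-fMRI connectivity fed to an SVM under 5-fold cross-validation — can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the authors' pipeline: the datasets, atlas, and exact graph metrics are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def graph_features(corr, threshold=0.3):
    """Simple graph-theory features from a functional connectivity matrix:
    per-region binary degree and weighted strength after thresholding."""
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    degree = adj.sum(axis=1)                      # binary degree per region
    strength = (np.abs(corr) * adj).sum(axis=1)   # weighted degree (strength)
    return np.concatenate([degree, strength])

# Synthetic stand-in for rs-fMRI connectivity: 112 subjects x 20 regions.
n_subjects, n_regions = 112, 20
X, y = [], []
for i in range(n_subjects):
    ts = rng.normal(size=(100, n_regions))
    label = i % 2                        # pretend half are AD, half controls
    if label:                            # inject extra noise in the "AD" group
        ts[:, :5] += rng.normal(scale=2.0, size=(100, 5))
    corr = np.corrcoef(ts.T)
    X.append(graph_features(corr))
    y.append(label)
X, y = np.array(X), np.array(y)

# 5-fold cross-validated SVM, mirroring the evaluation style in the abstract.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```

With real data, feature importances from such a model are what would single out the most discriminating regions.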
Title: Identifying discriminative features of brain network for prediction of Alzheimer's disease using graph theory and machine learning.
Background: The Rotation Invariant Vision Transformer (RViT) is a novel deep learning model tailored for brain tumor classification using MRI scans.
Methods: RViT incorporates rotated patch embeddings to enhance the accuracy of brain tumor identification.
Results: Evaluation on the Brain Tumor MRI Dataset from Kaggle demonstrates RViT's superior performance, with sensitivity of 1.0, specificity of 0.975, F1-score of 0.984, Matthews correlation coefficient (MCC) of 0.972, and an overall accuracy of 0.986.
Conclusion: RViT outperforms the standard Vision Transformer model and several existing techniques, highlighting its efficacy in medical imaging. The study confirms that integrating rotational patch embeddings improves the model's capability to handle diverse orientations, a common challenge in tumor imaging. The specialized architecture and rotational invariance approach of RViT have the potential to enhance current methodologies for brain tumor detection and extend to other complex imaging tasks.
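The core intuition behind rotated patch embeddings — making patch features stable under rotation — can be illustrated with a fixed pooling over 90-degree rotations. This is only a conceptual sketch: RViT's embeddings are a learned Transformer component, and this toy pooling is an assumption, not the paper's method.

```python
import numpy as np

def rotation_pooled_patches(img, patch=4):
    """Split an image into patches and average each patch over its four
    90-degree rotations, yielding features invariant to such rotations.
    (Illustrative only; RViT uses learned rotated patch embeddings.)"""
    h, w = img.shape
    feats = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            p = img[i:i + patch, j:j + patch]
            pooled = np.mean([np.rot90(p, k) for k in range(4)], axis=0)
            feats.append(pooled.ravel())
    return np.array(feats)

rng = np.random.default_rng(1)
img = rng.random((8, 8))
f1 = rotation_pooled_patches(img)
f2 = rotation_pooled_patches(np.rot90(img))  # rotate the whole image
# The *set* of patch features is unchanged under a global 90-degree rotation.
s1 = np.sort(f1.sum(axis=1))
s2 = np.sort(f2.sum(axis=1))
print(np.allclose(s1, s2))  # → True
```

A learned version of this idea lets the model treat differently oriented tumors consistently, which is the challenge the conclusion highlights.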
Title: Enhancing brain tumor detection in MRI with a rotation invariant Vision Transformer.
Authors: Palani Thanaraj Krishnan, Pradeep Krishnadoss, Mukund Khandelwal, Devansh Gupta, Anupoju Nihaal, T Sunil Kumar
Pub Date: 2024-06-18 | DOI: 10.3389/fninf.2024.1414925
Pub Date: 2024-06-12 | DOI: 10.3389/fninf.2024.1415085
Marta Gaviraghi, Antonio Ricciardi, Fulvia Palesi, Wallace Brownlee, Paolo Vitali, Ferran Prados, B. Kanber, C. G. Gandini Wheeler-Kingshott
Quantitative maps obtained with diffusion-weighted (DW) imaging, such as fractional anisotropy (FA), calculated by fitting the diffusion tensor (DT) model to the data, are very useful for studying neurological diseases. To fit this map accurately, acquisition times on the order of several minutes are needed, because many noncollinear DW volumes must be acquired to reduce directional biases. Deep learning (DL) can reduce acquisition times by reducing the number of DW volumes. We previously developed a DL network named "one-minute FA," which uses 10 DW volumes to obtain FA maps while maintaining the same characteristics and clinical sensitivity as FA maps calculated with the standard method using more volumes. Recent publications have indicated that it is possible to train DL networks and obtain FA maps even with 4 DW input volumes, far fewer than the minimum number of directions required for the mathematical estimation of the DT. Here we investigated the impact of reducing the number of DW input volumes to 4 or 7, and evaluated the performance and clinical sensitivity of the corresponding DL networks trained to calculate FA, also comparing results with those of our one-minute FA. Each network was trained on the Human Connectome Project (HCP) open-access dataset, which has high resolution and many DW volumes used to fit a ground-truth FA. To evaluate generalizability, the networks were tested on two external clinical datasets, not seen during training and acquired on different scanners with different protocols, as previously done. Using 4 or 7 DW volumes, it was possible to train DL networks to obtain FA maps with the same range of values as the ground-truth map only when using HCP test data; pathological sensitivity was lost when testing on the external clinical datasets: in both cases, no consistent differences were found between patient groups. In contrast, our "one-minute FA" did not suffer from the same problem. When developing DL networks for reduced acquisition times, the ability to generalize and to generate quantitative biomarkers that provide clinical sensitivity must be addressed.
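The ground-truth FA that these networks learn to reproduce comes from the standard formula applied to the three eigenvalues of the fitted diffusion tensor. A minimal sketch of that formula (the tensor fit itself is assumed to have been done already):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues (standard formula):
    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||."""
    evals = np.asarray(evals, dtype=float)
    md = evals.mean()                           # mean diffusivity
    num = np.sqrt(((evals - md) ** 2).sum())
    den = np.sqrt((evals ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

print(fractional_anisotropy([1.0, 1.0, 1.0]))                  # isotropic -> 0.0
print(fractional_anisotropy([1.7e-3, 0.2e-3, 0.2e-3]))         # white-matter-like, ~0.87
```

Because at least 6 noncollinear DW volumes (plus a b=0 image) are needed to estimate the tensor mathematically, 4-volume inputs can only work through learned priors — which is exactly where generalization failed in this study.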
Title: Finding the limits of deep learning clinical sensitivity with fractional anisotropy (FA) microstructure maps
Pub Date: 2024-06-11 | DOI: 10.3389/fninf.2024.1376022
Horea-Ioan Ioanas, Austin Macdonald, Yaroslav O. Halchenko
The value of research articles is increasingly contingent on complex data analysis results which substantiate their claims. Compared to data production, data analysis more readily lends itself to a higher standard of transparency and repeated operator-independent execution. This higher standard can be approached via fully reexecutable research outputs, which contain the entire instruction set for automatic end-to-end generation of an entire article from the earliest feasible provenance point. In this study, we make use of a peer-reviewed neuroimaging article which provides complete but fragile reexecution instructions, as a starting point to draft a new reexecution system which is both robust and portable. We render this system modular as a core design aspect, so that reexecutable article code, data, and environment specifications could potentially be substituted or adapted. In conjunction with this system, which forms the demonstrative product of this study, we detail the core challenges with full article reexecution and specify a number of best practices which permitted us to mitigate them. We further show how the capabilities of our system can subsequently be used to provide reproducibility assessments, both via simple statistical metrics and by visually highlighting divergent elements for human inspection. We argue that fully reexecutable articles are thus a feasible best practice, which can greatly enhance the understanding of data analysis variability and the trust in results. Lastly, we comment at length on the outlook for reexecutable research outputs and encourage re-use and derivation of the system produced herein.
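The reproducibility assessments mentioned above — simple statistical metrics plus visual highlighting of divergent elements — might look like the following sketch. The function, tolerance, and example values are illustrative assumptions, not the authors' actual system.

```python
import numpy as np

def reproduction_report(original, reexecuted, tol=0.05):
    """Compare originally published values with reexecuted ones:
    a correlation summary plus a mask of elements diverging beyond `tol`
    relative difference (candidates for highlighting in human inspection)."""
    original = np.asarray(original, dtype=float)
    reexecuted = np.asarray(reexecuted, dtype=float)
    corr = np.corrcoef(original, reexecuted)[0, 1]
    rel = np.abs(reexecuted - original) / np.maximum(np.abs(original), 1e-12)
    return corr, rel > tol

orig = np.array([0.81, 1.52, 2.10, 0.33])     # hypothetical published results
rerun = np.array([0.80, 1.55, 2.09, 0.48])    # hypothetical reexecution
corr, divergent = reproduction_report(orig, rerun)
print(divergent)   # only the last element is flagged as divergent
```

Keeping such checks automatic is what lets a fully reexecutable article double as its own reproducibility audit.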
Title: Neuroimaging article reexecution and reproduction assessment system
Pub Date: 2024-05-23 | DOI: 10.3389/fninf.2024.1292667
Scott Makeig, Kay Robbins
The brain is a complex dynamic system whose current state is inextricably coupled to awareness of past, current, and anticipated future threats and opportunities that continually affect awareness and behavioral goals and decisions. Brain activity is driven on multiple time scales by an ever-evolving flow of sensory, proprioceptive, and idiothetic experience. Neuroimaging experiments seek to isolate and focus on some aspect of these complex dynamics to better understand how human experience, cognition, behavior, and health are supported by brain activity. Here we consider an event-related data modeling approach that seeks to parse experience and behavior into a set of time-delimited events. We distinguish between event processes themselves, that unfold through time, and event markers that record the experiment timeline latencies of event onset, offset, and any other event phase transitions. Precise descriptions of experiment events (sensory, motor, or other) allow participant experience and behavior to be interpreted in the context either of the event itself or of all or any experiment events. We discuss how events in neuroimaging experiments have been, are currently, and should best be identified and represented with emphasis on the importance of modeling both events and event context for meaningful interpretation of relationships between brain dynamics, experience, and behavior. We show how text annotation of time series neuroimaging data using the system of Hierarchical Event Descriptors (HED; https://www.hedtags.org) can more adequately model the roles of both events and their ever-evolving context than current data annotation practice and can thereby facilitate data analysis, meta-analysis, and mega-analysis. Finally, we discuss ways in which the HED system must continue to expand to serve the evolving needs of neuroimaging research.
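In practice, HED annotation attaches structured tags to each event marker in a tabular events file (as in BIDS). A minimal sketch of such a table follows; the specific tags are simplified illustrations and have not been validated against a released HED schema.

```python
import csv
import io

# A BIDS-style events table where each event marker carries a HED string
# describing both the event process and its context.
events = [
    {"onset": 1.20, "duration": 0.5,
     "HED": "Sensory-event, Visual-presentation, (Face, Image)"},
    {"onset": 1.85, "duration": 0.2,
     "HED": "Agent-action, Participant-response, (Press, Keyboard-key)"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["onset", "duration", "HED"],
                        delimiter="\t")
writer.writeheader()
writer.writerows(events)
print(buf.getvalue())
```

Because the tags are drawn from a shared hierarchical vocabulary rather than free text, tools can search, pool, and mega-analyze events across studies.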
Title: Events in context—The HED framework for the study of brain, experience and behavior
Pub Date: 2024-05-17 | DOI: 10.3389/fninf.2024.1385526
Patrick G. McPhee, Anthony L. Vaccarino, Sibel Naska, Kirk Nylen, Jose Arturo Santisteban, Rachel Chepesiuk, Andrea Andrade, Stelios Georgiades, Brendan Behan, A. Iaboni, Flora Wan, Sabrina Aimola, Heena Cheema, Jan Willem Gorter
There is an increasing desire to study neurodevelopmental disorders (NDDs) together, in order to understand their commonalities, develop generic health promotion strategies, and improve clinical treatment. Common data elements (CDEs) collected across studies involving children with NDDs afford an opportunity to answer clinically meaningful questions. We undertook a retrospective, secondary analysis of data pertaining to sleep in children with different NDDs collected through various research studies. The objective of this paper is to share lessons learned for data management, collation, and harmonization from a sleep study in children within and across NDDs from large, collaborative research networks in the Ontario Brain Institute (OBI). Three collaborative research networks contributed demographic data and data pertaining to sleep, internalizing symptoms, health-related quality of life, and severity of disorder for children with six different NDDs: autism spectrum disorder; attention deficit/hyperactivity disorder; obsessive compulsive disorder; intellectual disability; cerebral palsy; and epilepsy. Procedures for data harmonization, derivation, and merging were shared, and examples pertaining to severity of disorder and sleep disturbances were described in detail. Important lessons emerged from the data harmonization procedures: prioritizing the collection of CDEs to ensure data completeness; ensuring unprocessed data are uploaded for harmonization in order to facilitate timely analytic procedures; the value of maintaining variable naming that is consistent with data dictionaries at the time of project validation; and the value of regular meetings with the research networks to discuss and overcome challenges with data harmonization. Buy-in from all research networks at study inception and oversight from a centralized infrastructure (OBI) underscored the importance of collaboration to collect CDEs and facilitate data harmonization to improve outcomes for children with NDDs.
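The harmonization-and-merge step described above amounts to mapping each network's variable names onto a shared data dictionary before pooling. A toy sketch with pandas; all column names here are hypothetical, not the OBI data dictionary itself.

```python
import pandas as pd

# Two networks report the same sleep CDE under different variable names;
# harmonize both to a shared dictionary before pooling.
net_a = pd.DataFrame({"subj": ["a1", "a2"], "sleep_dist_total": [52, 61]})
net_b = pd.DataFrame({"participant": ["b1"], "SDSC_total_score": [48]})

def harmonize(df, mapping, network):
    """Rename network-specific columns to CDE names and tag provenance."""
    out = df.rename(columns=mapping)
    out["network"] = network          # provenance column for later QC
    return out[["subject_id", "sleep_disturbance_total", "network"]]

pooled = pd.concat([
    harmonize(net_a, {"subj": "subject_id",
                      "sleep_dist_total": "sleep_disturbance_total"}, "A"),
    harmonize(net_b, {"participant": "subject_id",
                      "SDSC_total_score": "sleep_disturbance_total"}, "B"),
], ignore_index=True)
print(pooled)
```

Keeping the rename maps in version-controlled data dictionaries is what makes the merge auditable, which is one of the lessons the paper draws out.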
Title: Harmonizing data on correlates of sleep in children within and across neurodevelopmental disorders: lessons learned from an Ontario Brain Institute cross-program collaboration
Pub Date: 2024-05-16 | DOI: 10.3389/fninf.2024.1395916
Sahaj A. Patel, Rachel June Smith, Abidin Yildirim
Recently, graph theory has become a promising tool for biomedical signal analysis, wherein signals are transformed into a graph network and represented as either adjacency or Laplacian matrices. However, as the size of the time series increases, the dimensions of the transformed matrices also expand, leading to a significant rise in the computational demand of analysis. There is therefore a critical need for efficient feature extraction methods with low computational cost. This paper introduces a new feature extraction technique for biomedical signals based on the Gershgorin circle theorem, termed Gershgorin Circle Feature Extraction (GCFE). The study makes use of two publicly available datasets: one of synthetic neural recordings and one of EEG seizure data. The efficacy of GCFE is compared using two distinct visibility graphs and tested against seven other feature extraction methods. In the GCFE method, features are extracted from a special modified weighted Laplacian matrix derived from the visibility graphs. The method was applied to classify three different types of neural spikes in one dataset, and to distinguish between seizure and non-seizure events in the other. GCFE achieved superior performance compared with seven other algorithms, with a positive average accuracy difference of 2.67% across all experimental datasets, indicating that it consistently outperformed the other methods in terms of accuracy. Furthermore, GCFE was more computationally efficient than the other feature extraction techniques. The GCFE method can also be employed in real-time biomedical signal classification tasks that use visibility graphs, such as EKG signal classification.
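The pipeline above — time series to visibility graph to Laplacian to Gershgorin-disc features — can be sketched generically as follows. The paper's "special modified" weighted Laplacian is not specified in the abstract, so this sketch uses a plain weighted Laplacian as an assumption.

```python
import numpy as np

def visibility_adjacency(ts):
    """Natural visibility graph: samples i and j see each other if no
    intermediate sample rises above the straight line connecting them.
    Edges are weighted here by amplitude difference (an assumption)."""
    n = len(ts)
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                ts[k] < ts[j] + (ts[i] - ts[j]) * (j - k) / (j - i)
                for k in range(i + 1, j))
            if visible:
                adj[i, j] = adj[j, i] = abs(ts[i] - ts[j])
    return adj

def gershgorin_features(ts):
    """Gershgorin disc centers and radii of the weighted graph Laplacian.
    Every eigenvalue lies in some disc |x - center_i| <= radius_i, so the
    discs summarize the spectrum without an eigendecomposition — the key
    to the method's low computational cost."""
    adj = visibility_adjacency(ts)
    lap = np.diag(adj.sum(axis=1)) - adj
    centers = np.diag(lap)
    radii = np.abs(lap - np.diag(centers)).sum(axis=1)
    return np.concatenate([centers, radii])

feats = gershgorin_features(np.array([1.0, 0.5, 2.0, 0.8, 1.5]))
print(feats)
```

Computing disc centers and radii is O(n^2) on the adjacency matrix, versus the considerably costlier eigendecomposition that spectral graph features normally require.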
Title: Gershgorin circle theorem-based feature extraction for biomedical signal analysis
At the intersection of neural monitoring and decoding, event-related potential (ERP) based on electroencephalography (EEG) has opened a window into intrinsic brain function. The stability of ERP makes it frequently employed in the field of neuroscience. However, project-specific custom code, tracking of user-defined parameters, and the large diversity of commercial tools have limited clinical application. We introduce an open-source, user-friendly, and reproducible MATLAB toolbox named EPAT that includes a variety of algorithms for EEG data preprocessing. It provides EEGLAB-based template pipelines for advanced multi-processing of EEG, magnetoencephalography, and polysomnogram data. Participants evaluated EEGLAB and EPAT across 14 indicators, with satisfaction ratings analyzed using the Wilcoxon signed-rank test or paired t-test based on distribution normality. EPAT eases EEG signal browsing and preprocessing, EEG power spectrum analysis, independent component analysis, time-frequency analysis, ERP waveform drawing, and topological analysis of scalp voltage. A user-friendly graphical user interface allows clinicians and researchers with no programming background to use EPAT. This article describes the architecture, functionalities, and workflow of the toolbox. The release of EPAT will help advance EEG methodology and its application to clinical translational studies.
{"title":"EPAT: a user-friendly MATLAB toolbox for EEG/ERP data processing and analysis","authors":"Jianwei Shi, Xun Gong, Ziang Song, Wenkai Xie, Yanfeng Yang, Xiangjie Sun, Penghu Wei, Changming Wang, Guoguang Zhao","doi":"10.3389/fninf.2024.1384250","DOIUrl":"https://doi.org/10.3389/fninf.2024.1384250","url":null,"abstract":"At the intersection of neural monitoring and decoding, event-related potential (ERP) based on electroencephalography (EEG) has opened a window into intrinsic brain function. The stability of ERP makes it frequently employed in the field of neuroscience. However, project-specific custom code, tracking of user-defined parameters, and the large diversity of commercial tools have limited clinical application.We introduce an open-source, user-friendly, and reproducible MATLAB toolbox named EPAT that includes a variety of algorithms for EEG data preprocessing. It provides EEGLAB-based template pipelines for advanced multi-processing of EEG, magnetoencephalography, and polysomnogram data. Participants evaluated EEGLAB and EPAT across 14 indicators, with satisfaction ratings analyzed using the Wilcoxon signed-rank test or paired t-test based on distribution normality.EPAT eases EEG signal browsing and preprocessing, EEG power spectrum analysis, independent component analysis, time-frequency analysis, ERP waveform drawing, and topological analysis of scalp voltage. A user-friendly graphical user interface allows clinicians and researchers with no programming background to use EPAT.This article describes the architecture, functionalities, and workflow of the toolbox. 
The release of EPAT will help advance EEG methodology and its application to clinical translational studies.","PeriodicalId":12462,"journal":{"name":"Frontiers in Neuroinformatics","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140973401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-05-13 DOI: 10.3389/fninf.2024.1379932
Benedikt Holm, Gabriel Jouan, Emil Hardarson, Sigríður Sigurðardottir, Kenan Hoelke, Conor Murphy, Erna Sif Arnardóttir, María Óskarsdóttir, Anna Sigríður Islind
Introduction: Polysomnographic recordings are essential for diagnosing many sleep disorders, yet their detailed analysis presents considerable challenges. With the rise of machine learning methodologies, researchers have created various algorithms to automatically score and extract clinically relevant features from polysomnography, but less research has been devoted to how exactly the algorithms should be incorporated into the workflow of sleep technologists. This paper presents a sophisticated data collection platform developed under the Sleep Revolution project, to harness polysomnographic data from multiple European centers. Methods: A tripartite platform is presented: a user-friendly web platform for uploading three-night polysomnographic recordings, a dedicated splitter that segments these into individual one-night recordings, and an advanced processor that enhances the one-night polysomnography with contemporary automatic scoring algorithms. The platform is evaluated using real-life data and human scorers, whereby scoring time, accuracy, and trust are quantified. Additionally, the scorers were interviewed about their trust in the platform, along with the impact of its integration into their workflow. Results: We found that incorporating AI into the workflow of sleep technologists both decreased the time to score by up to 65 min and increased the agreement between technologists by as much as 0.17 κ. Discussion: We conclude that while the inclusion of AI into the workflow of sleep technologists can have a positive impact in terms of speed and agreement, there is a need for trust in the algorithms.
{"title":"An optimized framework for processing multicentric polysomnographic data incorporating expert human oversight","authors":"Benedikt Holm, Gabriel Jouan, Emil Hardarson, Sigríður Sigurðardottir, Kenan Hoelke, Conor Murphy, Erna Sif Arnardóttir, María Óskarsdóttir, Anna Sigríður Islind","doi":"10.3389/fninf.2024.1379932","DOIUrl":"https://doi.org/10.3389/fninf.2024.1379932","url":null,"abstract":"IntroductionPolysomnographic recordings are essential for diagnosing many sleep disorders, yet their detailed analysis presents considerable challenges. With the rise of machine learning methodologies, researchers have created various algorithms to automatically score and extract clinically relevant features from polysomnography, but less research has been devoted to how exactly the algorithms should be incorporated into the workflow of sleep technologists. This paper presents a sophisticated data collection platform developed under the Sleep Revolution project, to harness polysomnographic data from multiple European centers.MethodsA tripartite platform is presented: a user-friendly web platform for uploading three-night polysomnographic recordings, a dedicated splitter that segments these into individual one-night recordings, and an advanced processor that enhances the one-night polysomnography with contemporary automatic scoring algorithms. The platform is evaluated using real-life data and human scorers, whereby scoring time, accuracy, and trust are quantified. 
Additionally, the scorers were interviewed about their trust in the platform, along with the impact of its integration into their workflow.ResultsWe found that incorporating AI into the workflow of sleep technologists both decreased the time to score by up to 65 min and increased the agreement between technologists by as much as 0.17 <jats:italic>κ</jats:italic>.DiscussionWe conclude that while the inclusion of AI into the workflow of sleep technologists can have a positive impact in terms of speed and agreement, there is a need for trust in the algorithms.","PeriodicalId":12462,"journal":{"name":"Frontiers in Neuroinformatics","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140937034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
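The splitter in the tripartite platform above — turning one continuous three-night recording into individual one-night recordings — can be sketched with a simple gap heuristic. This is a hypothetical illustration, not the Sleep Revolution implementation: the `split_nights` name, the `(timestamp, payload)` epoch representation, and the six-hour gap threshold are all assumptions for the sketch.

```python
from datetime import datetime, timedelta

def split_nights(epochs, max_gap_hours=6):
    # Hypothetical splitter: a multi-night recording arrives as a flat,
    # time-ordered list of (start_time, payload) epochs.  Any silence longer
    # than max_gap_hours (the daytime between recordings) marks the boundary
    # between two nights.
    nights, current = [], []
    gap = timedelta(hours=max_gap_hours)
    for epoch in epochs:
        if current and epoch[0] - current[-1][0] > gap:
            nights.append(current)
            current = []
        current.append(epoch)
    if current:
        nights.append(current)
    return nights
```

Thresholding on the inter-epoch gap keeps the splitter independent of epoch length or sampling rate; only the epoch start times are inspected.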
Pub Date : 2024-04-19 DOI: 10.3389/fninf.2024.1323203
Marvin Kaster, Fabian Czappa, Markus Butz-Ostendorf, Felix Wolf
Memory formation is usually associated with Hebbian learning and synaptic plasticity, which changes the synaptic strengths but omits structural changes. A recent study suggests that structural plasticity can also lead to silent memory engrams, reproducing a conditioned learning paradigm with neuron ensembles. However, this study is limited by its way of synapse formation, enabling the formation of only one memory engram. Overcoming this, our model allows the formation of many engrams simultaneously while retaining high neurophysiological accuracy, e.g., as found in cortical columns. We achieve this by substituting the random synapse formation with the Model of Structural Plasticity. As a homeostatic model, neurons regulate their activity by growing and pruning synaptic elements based on their current activity. Utilizing synapse formation based on the Euclidean distance between the neurons with a scalable algorithm allows us to easily simulate 4 million neurons with 343 memory engrams. These engrams do not interfere with one another by default, yet we can change the simulation parameters to form long-reaching associations. Our model's analysis shows that homeostatic engram formation requires a certain spatiotemporal order of events. It predicts that synaptic pruning precedes and enables synaptic engram formation and that it does not occur as a mere compensatory response to enduring synapse potentiation as in Hebbian plasticity with synaptic scaling. Our model paves the way for simulations addressing further inquiries, ranging from memory chains and hierarchies to complex memory systems comprising areas with different learning mechanisms.
{"title":"Building a realistic, scalable memory model with independent engrams using a homeostatic mechanism","authors":"Marvin Kaster, Fabian Czappa, Markus Butz-Ostendorf, Felix Wolf","doi":"10.3389/fninf.2024.1323203","DOIUrl":"https://doi.org/10.3389/fninf.2024.1323203","url":null,"abstract":"Memory formation is usually associated with Hebbian learning and synaptic plasticity, which changes the synaptic strengths but omits structural changes. A recent study suggests that structural plasticity can also lead to silent memory engrams, reproducing a conditioned learning paradigm with neuron ensembles. However, this study is limited by its way of synapse formation, enabling the formation of only one memory engram. Overcoming this, our model allows the formation of many engrams simultaneously while retaining high neurophysiological accuracy, e.g., as found in cortical columns. We achieve this by substituting the random synapse formation with the Model of Structural Plasticity. As a homeostatic model, neurons regulate their activity by growing and pruning synaptic elements based on their current activity. Utilizing synapse formation based on the Euclidean distance between the neurons with a scalable algorithm allows us to easily simulate 4 million neurons with 343 memory engrams. These engrams do not interfere with one another by default, yet we can change the simulation parameters to form long-reaching associations. Our model's analysis shows that homeostatic engram formation requires a certain spatiotemporal order of events. It predicts that synaptic pruning precedes and enables synaptic engram formation and that it does not occur as a mere compensatory response to enduring synapse potentiation as in Hebbian plasticity with synaptic scaling. 
Our model paves the way for simulations addressing further inquiries, ranging from memory chains and hierarchies to complex memory systems comprising areas with different learning mechanisms.","PeriodicalId":12462,"journal":{"name":"Frontiers in Neuroinformatics","volume":null,"pages":null},"PeriodicalIF":3.5,"publicationDate":"2024-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140630650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}