Pub Date: 2023-07-01; DOI: 10.1007/s12021-023-09636-4
Nayereh Ghazi, Mohammad Hadi Aarabi, Hamid Soltanian-Zadeh
Quantitative analysis of white matter fiber tracts from diffusion Magnetic Resonance Imaging (dMRI) data is of great significance in health and disease. For example, analysis of fiber tracts related to anatomically meaningful fiber bundles is in high demand for pre-surgical and treatment planning, and surgical outcome depends on accurate segmentation of the desired tracts. Currently, this process is mainly done through time-consuming manual identification performed by neuro-anatomical experts. However, there is broad interest in automating the pipeline so that it is fast, accurate, and easy to apply in clinical settings, and also eliminates intra-reader variability. Following the advancements in medical image analysis using deep learning techniques, there has been growing interest in using these techniques for the task of tract identification as well. Recent reports on this application show that deep learning-based tract identification approaches outperform existing state-of-the-art methods. This paper presents a review of current tract identification approaches based on deep neural networks. First, we review the recent deep learning methods for tract identification. Next, we compare them with respect to their performance, training process, and network properties. Finally, we close with a critical discussion of open challenges and possible directions for future work.
{"title":"Deep Learning Methods for Identification of White Matter Fiber Tracts: Review of State-of-the-Art and Future Prospective.","authors":"Nayereh Ghazi, Mohammad Hadi Aarabi, Hamid Soltanian-Zadeh","doi":"10.1007/s12021-023-09636-4","DOIUrl":"https://doi.org/10.1007/s12021-023-09636-4","url":null,"abstract":"<p><p>Quantitative analysis of white matter fiber tracts from diffusion Magnetic Resonance Imaging (dMRI) data is of great significance in health and disease. For example, analysis of fiber tracts related to anatomically meaningful fiber bundles is highly demanded in pre-surgical and treatment planning, and the surgery outcome depends on accurate segmentation of the desired tracts. Currently, this process is mainly done through time-consuming manual identification performed by neuro-anatomical experts. However, there is a broad interest in automating the pipeline such that it is fast, accurate, and easy to apply in clinical settings and also eliminates the intra-reader variabilities. Following the advancements in medical image analysis using deep learning techniques, there has been a growing interest in using these techniques for the task of tract identification as well. Recent reports on this application show that deep learning-based tract identification approaches outperform existing state-of-the-art methods. This paper presents a review of current tract identification approaches based on deep neural networks. First, we review the recent deep learning methods for tract identification. Next, we compare them with respect to their performance, training process, and network properties. Finally, we end with a critical discussion of open challenges and possible directions for future works.</p>","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 3","pages":"517-548"},"PeriodicalIF":3.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10018299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The sharing of open-access neuroimaging data has increased significantly during the last few years. Sharing neuroimaging data is crucial to accelerating scientific advancement, particularly in neuroscience. A number of large initiatives that will increase the amount of available neuroimaging data are currently in development. The Big Brain Data Initiative project was started by Universiti Sains Malaysia as the first neuroimaging data repository platform in Malaysia for the purpose of data sharing. To ensure that the neuroimaging data in this project are accessible, usable, and secure, and to offer users high-quality data that can be consistently accessed, we first established good data stewardship practices. We then developed MyneuroDB, an online repository database system for data sharing. Here, we describe the Big Brain Data Initiative and MyneuroDB, a data repository that enables open sharing of neuroimaging data, currently including magnetic resonance imaging (MRI), electroencephalography (EEG), and magnetoencephalography (MEG), following the FAIR principles for data sharing.
{"title":"Big Brain Data Initiatives in Universiti Sains Malaysia: Data Stewardship to Data Repository and Data Sharing.","authors":"Nurfaten Hamzah, Nurul Hashimah Ahamed Hassain Malim, Jafri Malin Abdullah, Putra Sumari, Ariffin Marzuki Mokhtar, Siti Nur Syamila Rosli, Sharifah Aida Shekh Ibrahim, Zamzuri Idris","doi":"10.1007/s12021-023-09637-3","DOIUrl":"https://doi.org/10.1007/s12021-023-09637-3","url":null,"abstract":"<p><p>The sharing of open-access neuroimaging data has increased significantly during the last few years. Sharing neuroimaging data is crucial to accelerating scientific advancement, particularly in the field of neuroscience. A number of big initiatives that will increase the amount of available neuroimaging data are currently in development. The Big Brain Data Initiative project was started by Universiti Sains Malaysia as the first neuroimaging data repository platform in Malaysia for the purpose of data sharing. In order to ensure that the neuroimaging data in this project is accessible, usable, and secure, as well as to offer users high-quality data that can be consistently accessed, we first came up with good data stewardship practices. Then, we developed MyneuroDB, an online repository database system for data sharing purposes. Here, we describe the Big Brain Data Initiative and MyneuroDB, a data repository that provides the ability to openly share neuroimaging data, currently including magnetic resonance imaging (MRI), electroencephalography (EEG), and magnetoencephalography (MEG), following the FAIR principles for data sharing.</p>","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 3","pages":"589-600"},"PeriodicalIF":3.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10371870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01; Epub Date: 2023-06-09; DOI: 10.1007/s12021-023-09633-7
Daniel P Chapman, Stefano Vicini, Mark P Burns, Rebekah Evans
Traumatic brain injury (TBI) and repetitive head impacts can result in a wide range of neurological symptoms. Despite TBI being the most common neurological disorder in the world, repetitive head impacts and TBI do not have any FDA-approved treatments. Single neuron modeling allows researchers to extrapolate cellular changes in individual neurons based on experimental data. We recently characterized a model of high frequency head impact (HFHI) with a phenotype of cognitive deficits associated with decreases in neuronal excitability of CA1 neurons and synaptic changes. While the synaptic changes have been interrogated in vivo, the cause and potential therapeutic targets of hypoexcitability following repetitive head impacts are unknown. Here, we generated in silico models of CA1 pyramidal neurons from current clamp data of control mice and mice that sustained HFHI. We used a directed evolution algorithm with a crowding penalty to generate a large and unbiased population of plausible models for each group that approximated the experimental features. The HFHI neuron model population showed decreased voltage-gated sodium conductance and a general increase in potassium channel conductance. We used partial least squares regression analysis to identify combinations of channels that may account for CA1 hypoexcitability after HFHI. The hypoexcitability phenotype in the models was linked to A- and M-type potassium channels in combination, but not to any single channel correlation. We provide an open-access set of CA1 pyramidal neuron models for both control and HFHI conditions that can be used to predict the effects of pharmacological interventions in TBI models.
{"title":"Single Neuron Modeling Identifies Potassium Channel Modulation as Potential Target for Repetitive Head Impacts.","authors":"Daniel P Chapman, Stefano Vicini, Mark P Burns, Rebekah Evans","doi":"10.1007/s12021-023-09633-7","DOIUrl":"10.1007/s12021-023-09633-7","url":null,"abstract":"<p><p>Traumatic brain injury (TBI) and repetitive head impacts can result in a wide range of neurological symptoms. Despite being the most common neurological disorder in the world, repeat head impacts and TBI do not have any FDA-approved treatments. Single neuron modeling allows researchers to extrapolate cellular changes in individual neurons based on experimental data. We recently characterized a model of high frequency head impact (HFHI) with a phenotype of cognitive deficits associated with decreases in neuronal excitability of CA1 neurons and synaptic changes. While the synaptic changes have been interrogated in vivo, the cause and potential therapeutic targets of hypoexcitability following repetitive head impacts are unknown. Here, we generated in silico models of CA1 pyramidal neurons from current clamp data of control mice and mice that sustained HFHI. We use a directed evolution algorithm with a crowding penalty to generate a large and unbiased population of plausible models for each group that approximated the experimental features. The HFHI neuron model population showed decreased voltage gated sodium conductance and a general increase in potassium channel conductance. We used partial least squares regression analysis to identify combinations of channels that may account for CA1 hypoexcitability after HFHI. The hypoexcitability phenotype in models was linked to A- and M-type potassium channels in combination, but not by any single channel correlations. We provide an open access set of CA1 pyramidal neuron models for both control and HFHI conditions that can be used to predict the effects of pharmacological interventions in TBI models.</p>","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 3","pages":"501-516"},"PeriodicalIF":2.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10833395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10281744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01; DOI: 10.1007/s12021-023-09629-3
Emily S Nichols, Susana Correa, Peter Van Dyken, Jason Kai, Tristan Kuehn, Sandrine de Ribaupierre, Emma G Duerden, Ali R Khan
Fetal functional magnetic resonance imaging (fMRI) offers critical insight into the developing brain and could aid in predicting developmental outcomes. As the fetal brain is surrounded by heterogeneous tissue, it is not possible to use adult- or child-based segmentation toolboxes. Manually segmented masks can be used to extract the fetal brain; however, this comes at a significant time cost. Here, we present a new BIDS App for masking fetal fMRI, funcmasker-flex, that overcomes these issues with a robust 3D convolutional neural network (U-net) architecture implemented in an extensible and transparent Snakemake workflow. Open-access fetal fMRI data with manual brain masks from 159 fetuses (1103 total volumes) were used for training and testing the U-net model. We also tested generalizability of the model using 82 locally acquired functional scans from 19 fetuses, which included over 2300 manually segmented volumes. Dice metrics were used to compare the performance of funcmasker-flex to the ground-truth manually segmented volumes, and segmentations were consistently robust (all Dice metrics ≥ 0.74). The tool is freely available and can be applied to any BIDS dataset containing fetal BOLD sequences. Funcmasker-flex reduces the need for manual segmentation, even when applied to novel fetal functional datasets, resulting in significant time savings for fetal fMRI analysis.
{"title":"Funcmasker-flex: An Automated BIDS-App for Brain Segmentation of Human Fetal Functional MRI data.","authors":"Emily S Nichols, Susana Correa, Peter Van Dyken, Jason Kai, Tristan Kuehn, Sandrine de Ribaupierre, Emma G Duerden, Ali R Khan","doi":"10.1007/s12021-023-09629-3","DOIUrl":"https://doi.org/10.1007/s12021-023-09629-3","url":null,"abstract":"<p><p>Fetal functional magnetic resonance imaging (fMRI) offers critical insight into the developing brain and could aid in predicting developmental outcomes. As the fetal brain is surrounded by heterogeneous tissue, it is not possible to use adult- or child-based segmentation toolboxes. Manually-segmented masks can be used to extract the fetal brain; however, this comes at significant time costs. Here, we present a new BIDS App for masking fetal fMRI, funcmasker-flex, that overcomes these issues with a robust 3D convolutional neural network (U-net) architecture implemented in an extensible and transparent Snakemake workflow. Open-access fetal fMRI data with manual brain masks from 159 fetuses (1103 total volumes) were used for training and testing the U-net model. We also tested generalizability of the model using 82 locally acquired functional scans from 19 fetuses, which included over 2300 manually segmented volumes. Dice metrics were used to compare performance of funcmasker-flex to the ground truth manually segmented volumes, and segmentations were consistently robust (all Dice metrics ≥ 0.74). The tool is freely available and can be applied to any BIDS dataset containing fetal bold sequences. Funcmasker-flex reduces the need for manual segmentation, even when applied to novel fetal functional datasets, resulting in significant time-cost savings for performing fetal fMRI analysis.</p>","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 3","pages":"565-573"},"PeriodicalIF":3.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10016997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Head CT, which includes the facial region, can visualize faces using 3D reconstruction, raising concern that individuals may be identified. We developed a new de-identification technique that distorts the faces in head CT images. Head CT images to be distorted were labeled as "original images" and the others as "reference images." Reconstructed face models of both were created, with 400 control points on the facial surfaces. All voxel positions in the original image were moved and deformed according to the deformation vectors required to move to the corresponding control points on the reference image. Three face detection and identification programs were used to determine face detection rates and match confidence scores. Intracranial volume equivalence tests were performed before and after deformation, and correlation coefficients between intracranial pixel value histograms were calculated. Output accuracy of the deep learning model for intracranial segmentation was determined using the Dice Similarity Coefficient before and after deformation. The face detection rate was 100%, and match confidence scores were < 90. Equivalence testing of the intracranial volume revealed statistical equivalence before and after deformation. The median correlation coefficient between intracranial pixel value histograms before and after deformation was 0.9965, indicating high similarity. Dice Similarity Coefficient values of original and deformed images were statistically equivalent. We developed a technique to de-identify head CT images while maintaining the accuracy of deep learning models. The technique involves deforming images to prevent face identification, with minimal changes to the original information.
{"title":"De-Identification Technique with Facial Deformation in Head CT Images.","authors":"Tatsuya Uchida, Taichi Kin, Toki Saito, Naoyuki Shono, Satoshi Kiyofuji, Tsukasa Koike, Katsuya Sato, Ryoko Niwa, Ikumi Takashima, Hiroshi Oyama, Nobuhito Saito","doi":"10.1007/s12021-023-09631-9","DOIUrl":"https://doi.org/10.1007/s12021-023-09631-9","url":null,"abstract":"<p><p>Head CT, which includes the facial region, can visualize faces using 3D reconstruction, raising concern that individuals may be identified. We developed a new de-identification technique that distorts the faces of head CT images. Head CT images that were distorted were labeled as \"original images\" and the others as \"reference images.\" Reconstructed face models of both were created, with 400 control points on the facial surfaces. All voxel positions in the original image were moved and deformed according to the deformation vectors required to move to corresponding control points on the reference image. Three face detection and identification programs were used to determine face detection rates and match confidence scores. Intracranial volume equivalence tests were performed before and after deformation, and correlation coefficients between intracranial pixel value histograms were calculated. Output accuracy of the deep learning model for intracranial segmentation was determined using Dice Similarity Coefficient before and after deformation. The face detection rate was 100%, and match confidence scores were < 90. Equivalence testing of the intracranial volume revealed statistical equivalence before and after deformation. The median correlation coefficient between intracranial pixel value histograms before and after deformation was 0.9965, indicating high similarity. Dice Similarity Coefficient values of original and deformed images were statistically equivalent. We developed a technique to de-identify head CT images while maintaining the accuracy of deep-learning models. The technique involves deforming images to prevent face identification, with minimal changes to the original information.</p>","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 3","pages":"575-587"},"PeriodicalIF":3.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10406725/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10015017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01; Epub Date: 2023-06-26; DOI: 10.1007/s12021-023-09632-8
Maryam Sadeghi, Arnau Ramos-Prats, Pedro Neto, Federico Castaldi, Devin Crowley, Pawel Matulewicz, Enrica Paradiso, Wolfgang Freysinger, Francesco Ferraguti, Georg Goebel
To accurately explore the anatomical organization of neural circuits in the brain, it is crucial to map experimental brain data onto a standardized coordinate system. Studying 2D histological mouse brain slices remains the standard procedure in many laboratories. Mapping these 2D brain slices is challenging due to deformations, artifacts, and tilted angles introduced during the standard preparation and slicing process. In addition, analysis of experimental mouse brain slices can be highly dependent on the level of expertise of the human operator. Here we propose a computational tool for Accurate Mouse Brain Image Analysis (AMBIA), to map 2D mouse brain slices onto the 3D brain model with minimal human intervention. AMBIA has a modular design comprising a localization module and a registration module. The localization module is a deep learning-based pipeline that localizes a single 2D slice in the 3D Allen Brain Atlas and generates a corresponding atlas plane. The registration module is built upon the Ardent python package, which performs deformable 2D registration between the brain slice and its corresponding atlas plane. By comparing AMBIA's performance in localization and registration to human ratings, we demonstrate that it performs at a human expert level. AMBIA provides an intuitive and highly efficient way to accurately register experimental 2D mouse brain images to a 3D digital mouse brain atlas. Our tool provides a graphical user interface and is designed to be used by researchers with minimal programming knowledge.
{"title":"Localization and Registration of 2D Histological Mouse Brain Images in 3D Atlas Space.","authors":"Maryam Sadeghi, Arnau Ramos-Prats, Pedro Neto, Federico Castaldi, Devin Crowley, Pawel Matulewicz, Enrica Paradiso, Wolfgang Freysinger, Francesco Ferraguti, Georg Goebel","doi":"10.1007/s12021-023-09632-8","DOIUrl":"10.1007/s12021-023-09632-8","url":null,"abstract":"<p><p>To accurately explore the anatomical organization of neural circuits in the brain, it is crucial to map the experimental brain data onto a standardized system of coordinates. Studying 2D histological mouse brain slices remains the standard procedure in many laboratories. Mapping these 2D brain slices is challenging; due to deformations, artifacts, and tilted angles introduced during the standard preparation and slicing process. In addition, analysis of experimental mouse brain slices can be highly dependent on the level of expertise of the human operator. Here we propose a computational tool for Accurate Mouse Brain Image Analysis (AMBIA), to map 2D mouse brain slices on the 3D brain model with minimal human intervention. AMBIA has a modular design that comprises a localization module and a registration module. The localization module is a deep learning-based pipeline that localizes a single 2D slice in the 3D Allen Brain Atlas and generates a corresponding atlas plane. The registration module is built upon the Ardent python package that performs deformable 2D registration between the brain slice to its corresponding atlas. By comparing AMBIA's performance in localization and registration to human ratings, we demonstrate that it performs at a human expert level. AMBIA provides an intuitive and highly efficient way for accurate registration of experimental 2D mouse brain images to 3D digital mouse brain atlas. Our tool provides a graphical user interface and it is designed to be used by researchers with minimal programming knowledge.</p>","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 3","pages":"615-630"},"PeriodicalIF":2.7,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10406728/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10020376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01; DOI: 10.1007/s12021-023-09627-5
Laura Keto, Tiina Manninen
Understanding the functions of astrocytes can be greatly enhanced by building and simulating computational models that capture their morphological details. Novel computational tools enable utilization of existing morphological data of astrocytes and building of models that have an appropriate level of detail for specific simulation purposes. In addition to analyzing existing computational tools for constructing, transforming, and assessing astrocyte morphologies, we present here the CellRemorph toolkit, implemented as an add-on for Blender, a 3D modeling platform increasingly recognized for its utility in manipulating 3D biological data. To our knowledge, CellRemorph is the first toolkit for transforming astrocyte morphologies from polygonal surface meshes into adjustable surface point clouds and vice versa, precisely selecting nanoprocesses, and slicing morphologies into segments with equal surface areas or volumes. CellRemorph is an open-source toolkit under the GNU General Public License and is easily accessible via an intuitive graphical user interface. CellRemorph will be a valuable addition to other Blender add-ons, providing novel functionality that facilitates the creation of realistic astrocyte morphologies for different types of morphologically detailed simulations elucidating the role of astrocytes in both health and disease.
{"title":"CellRemorph: A Toolkit for Transforming, Selecting, and Slicing 3D Cell Structures on the Road to Morphologically Detailed Astrocyte Simulations.","authors":"Laura Keto, Tiina Manninen","doi":"10.1007/s12021-023-09627-5","DOIUrl":"https://doi.org/10.1007/s12021-023-09627-5","url":null,"abstract":"<p><p>Understanding functions of astrocytes can be greatly enhanced by building and simulating computational models that capture their morphological details. Novel computational tools enable utilization of existing morphological data of astrocytes and building models that have appropriate level of details for specific simulation purposes. In addition to analyzing existing computational tools for constructing, transforming, and assessing astrocyte morphologies, we present here the CellRemorph toolkit implemented as an add-on for Blender, a 3D modeling platform increasingly recognized for its utility for manipulating 3D biological data. To our knowledge, CellRemorph is the first toolkit for transforming astrocyte morphologies from polygonal surface meshes into adjustable surface point clouds and vice versa, precisely selecting nanoprocesses, and slicing morphologies into segments with equal surface areas or volumes. CellRemorph is an open-source toolkit under the GNU General Public License and easily accessible via an intuitive graphical user interface. CellRemorph will be a valuable addition to other Blender add-ons, providing novel functionality that facilitates the creation of realistic astrocyte morphologies for different types of morphologically detailed simulations elucidating the role of astrocytes both in health and disease.</p>","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 3","pages":"483-500"},"PeriodicalIF":3.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10406679/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10392956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-01; DOI: 10.1007/s12021-022-09616-0
Jieqing Jiao, Fiona Heeman, Rachael Dixon, Catriona Wimberley, Isadora Lopes Alves, Juan Domingo Gispert, Adriaan A Lammertsma, Bart N M van Berckel, Casper da Costa-Luis, Pawel Markiewicz, David M Cash, M Jorge Cardoso, Sebastién Ourselin, Maqsood Yaqub, Frederik Barkhof
Current PET datasets are becoming larger, thereby increasing the demand for fast and reproducible processing pipelines. This paper presents a freely available, open source, Python-based software package called NiftyPAD, for versatile analyses of static, full or dual-time window dynamic brain PET data. The key novelties of NiftyPAD are the analyses of dual-time window scans with reference input processing, pharmacokinetic modelling with shortened PET acquisitions through the incorporation of arterial spin labelling (ASL)-derived relative perfusion measures, as well as optional PET data-based motion correction. Results obtained with NiftyPAD were compared with the well-established software packages PPET and QModeling for a range of kinetic models. Clinical data from eight subjects scanned with four different amyloid tracers were used to validate the computational performance. NiftyPAD achieved [Formula: see text] correlation with PPET, with absolute difference [Formula: see text] for linearised Logan and MRTM2 methods, and [Formula: see text] correlation with QModeling, with absolute difference [Formula: see text] for basis function based SRTM and SRTM2 models. For the recently published SRTM ASL method, which is unavailable in existing software packages, high correlations with negligible bias were observed with the full scan SRTM in terms of non-displaceable binding potential ([Formula: see text]), indicating reliable model implementation in NiftyPAD. Together, these findings illustrate that NiftyPAD is versatile, flexible, and produces comparable results with established software packages for quantification of dynamic PET data. It is freely available (https://github.com/AMYPAD/NiftyPAD), and allows for multi-platform usage. The modular setup makes adding new functionalities easy, and the package is lightweight with minimal dependencies, making it easy to use and integrate into existing processing pipelines.
{"title":"NiftyPAD - Novel Python Package for Quantitative Analysis of Dynamic PET Data.","authors":"Jieqing Jiao, Fiona Heeman, Rachael Dixon, Catriona Wimberley, Isadora Lopes Alves, Juan Domingo Gispert, Adriaan A Lammertsma, Bart N M van Berckel, Casper da Costa-Luis, Pawel Markiewicz, David M Cash, M Jorge Cardoso, Sebastién Ourselin, Maqsood Yaqub, Frederik Barkhof","doi":"10.1007/s12021-022-09616-0","DOIUrl":"https://doi.org/10.1007/s12021-022-09616-0","url":null,"abstract":"<p><p>Current PET datasets are becoming larger, thereby increasing the demand for fast and reproducible processing pipelines. This paper presents a freely available, open source, Python-based software package called NiftyPAD, for versatile analyses of static, full or dual-time window dynamic brain PET data. The key novelties of NiftyPAD are the analyses of dual-time window scans with reference input processing, pharmacokinetic modelling with shortened PET acquisitions through the incorporation of arterial spin labelling (ASL)-derived relative perfusion measures, as well as optional PET data-based motion correction. Results obtained with NiftyPAD were compared with the well-established software packages PPET and QModeling for a range of kinetic models. Clinical data from eight subjects scanned with four different amyloid tracers were used to validate the computational performance. NiftyPAD achieved [Formula: see text] correlation with PPET, with absolute difference [Formula: see text] for linearised Logan and MRTM2 methods, and [Formula: see text] correlation with QModeling, with absolute difference [Formula: see text] for basis function based SRTM and SRTM2 models. For the recently published SRTM ASL method, which is unavailable in existing software packages, high correlations with negligible bias were observed with the full scan SRTM in terms of non-displaceable binding potential ([Formula: see text]), indicating reliable model implementation in NiftyPAD. Together, these findings illustrate that NiftyPAD is versatile, flexible, and produces comparable results with established software packages for quantification of dynamic PET data. It is freely available ( https://github.com/AMYPAD/NiftyPAD ), and allows for multi-platform usage. The modular setup makes adding new functionalities easy, and the package is lightweight with minimal dependencies, making it easy to use and integrate into existing processing pipelines.</p>","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 2","pages":"457-468"},"PeriodicalIF":3.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10085912/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9332639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-01; DOI: 10.1007/s12021-022-09619-x
Aybüke Erol, Chagajeg Soloukey, Bastian Generowicz, Nikki van Dorp, Sebastiaan Koekkoek, Pieter Kruizinga, Borbála Hunyadi
{"title":"Correction to: Deconvolution of the Functional Ultrasound Response in the Mouse Visual Pathway Using Block-Term Decomposition.","authors":"Aybüke Erol, Chagajeg Soloukey, Bastian Generowicz, Nikki van Dorp, Sebastiaan Koekkoek, Pieter Kruizinga, Borbála Hunyadi","doi":"10.1007/s12021-022-09619-x","DOIUrl":"https://doi.org/10.1007/s12021-022-09619-x","url":null,"abstract":"","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 2","pages":"267"},"PeriodicalIF":3.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10085891/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9287540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-01; DOI: 10.1007/s12021-022-09614-2
Jodie P Gray, Jordi Manuello, Aaron F Alexander-Bloch, Cassandra Leonardo, Crystal Franklin, Ki Sueng Choi, Franco Cauda, Tommaso Costa, John Blangero, David C Glahn, Helen S Mayberg, Peter T Fox
Major depressive disorder (MDD) exhibits diverse symptomatology, and neuroimaging studies report widespread disruption of key brain areas. Numerous theories underpinning the network degeneration hypothesis (NDH) posit that neuropsychiatric diseases selectively target brain areas via meaningful network mechanisms rather than as indistinct disease effects. The present study tests the hypothesis that MDD is a network-based disorder, both structurally and functionally. Coordinate-based meta-analysis and Activation Likelihood Estimation (CBMA-ALE) were used to assess the convergence of findings from 92 previously published studies in depression. An extension of CBMA-ALE was then used to generate a node-and-edge network model representing the co-alteration of brain areas impacted by MDD. Standardized measures of graph-theoretical network architecture were assessed. Co-alteration patterns among the meta-analytic MDD nodes were then tested in independent, clinical T1-weighted structural magnetic resonance imaging (MRI) and resting-state functional (rs-fMRI) data. Differences in co-alteration profiles between MDD patients and healthy controls, as well as between controls and clinical subgroups of MDD patients, were assessed. A 65-node, 144-edge co-alteration network model was derived for MDD. Testing of co-alteration profiles in replication data using the MDD nodes distinguished MDD patients from healthy controls in structural data. However, co-alteration profiles did not distinguish patients from controls in rs-fMRI data. Improved distinction between patients and healthy controls was observed in clinically homogeneous MDD subgroups in T1 data. MDD abnormalities demonstrated both structural and functional network architecture, though only structural networks exhibited between-group differences. Our findings suggest improved utility of structural co-alteration networks for ongoing biomarker development.
{"title":"Co-alteration Network Architecture of Major Depressive Disorder: A Multi-modal Neuroimaging Assessment of Large-scale Disease Effects.","authors":"Jodie P Gray, Jordi Manuello, Aaron F Alexander-Bloch, Cassandra Leonardo, Crystal Franklin, Ki Sueng Choi, Franco Cauda, Tommaso Costa, John Blangero, David C Glahn, Helen S Mayberg, Peter T Fox","doi":"10.1007/s12021-022-09614-2","DOIUrl":"https://doi.org/10.1007/s12021-022-09614-2","url":null,"abstract":"<p><p>Major depressive disorder (MDD) exhibits diverse symptomology and neuroimaging studies report widespread disruption of key brain areas. Numerous theories underpinning the network degeneration hypothesis (NDH) posit that neuropsychiatric diseases selectively target brain areas via meaningful network mechanisms rather than as indistinct disease effects. The present study tests the hypothesis that MDD is a network-based disorder, both structurally and functionally. Coordinate-based meta-analysis and Activation Likelihood Estimation (CBMA-ALE) were used to assess the convergence of findings from 92 previously published studies in depression. An extension of CBMA-ALE was then used to generate a node-and-edge network model representing the co-alteration of brain areas impacted by MDD. Standardized measures of graph theoretical network architecture were assessed. Co-alteration patterns among the meta-analytic MDD nodes were then tested in independent, clinical T1-weighted structural magnetic resonance imaging (MRI) and resting-state functional (rs-fMRI) data. Differences in co-alteration profiles between MDD patients and healthy controls, as well as between controls and clinical subgroups of MDD patients, were assessed. A 65-node 144-edge co-alteration network model was derived for MDD. Testing of co-alteration profiles in replication data using the MDD nodes provided distinction between MDD and healthy controls in structural data. However, co-alteration profiles were not distinguished between patients and controls in rs-fMRI data. Improved distinction between patients and healthy controls was observed in clinically homogenous MDD subgroups in T1 data. MDD abnormalities demonstrated both structural and functional network architecture, though only structural networks exhibited between-groups differences. Our findings suggest improved utility of structural co-alteration networks for ongoing biomarker development.</p>","PeriodicalId":49761,"journal":{"name":"Neuroinformatics","volume":"21 2","pages":"443-455"},"PeriodicalIF":3.0,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9325812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}