Hazhar Sufi Karimi, Arghya Pal, Lipeng Ning, Y. Rathi
Abstract Diffusion magnetic resonance imaging (dMRI) makes it possible to estimate brain tissue microstructure as well as the connectivity of the white matter (known as tractography). Accurate estimation of the model parameters (by solving the inverse problem) is thus very important for inferring the underlying biophysical tissue properties and fiber orientations. Although there has been extensive research on this topic with a myriad of dMRI models, most models use standard nonlinear optimization techniques and only provide an estimate of the model parameters without any information (quantification) about the uncertainty in their estimation. Further, the effect of this uncertainty on the estimation of derived downstream dMRI microstructural measures (e.g., fractional anisotropy) is often unknown and is rarely estimated. To address this issue, we first design a new deep-learning algorithm to identify the number of crossing fibers in each voxel. Then, at each voxel, we propose a robust likelihood-free deep learning method to estimate not only the mean of the parameters of a multi-fiber dMRI model (e.g., the biexponential model), but also their full posterior distribution. The posterior distribution is then used to estimate the uncertainty in the model parameters as well as in the derived measures. We perform several synthetic and in-vivo quantitative experiments to demonstrate the robustness of our approach for different noise levels and out-of-distribution test samples. Moreover, our approach is computationally fast, requiring an order of magnitude less time than standard nonlinear fitting techniques. The proposed method demonstrates much lower error (compared to existing methods) in estimating several metrics, including the number of fibers in a voxel, fiber orientation, and tensor eigenvalues. The proposed methodology is quite general and can be used to estimate the parameters of any other dMRI model.
{"title":"Likelihood-free posterior estimation and uncertainty quantification for diffusion MRI models","authors":"Hazhar Sufi Karimi, Arghya Pal, Lipeng Ning, Y. Rathi","doi":"10.1162/imag_a_00088","DOIUrl":"https://doi.org/10.1162/imag_a_00088","url":null,"abstract":"Abstract Diffusion magnetic resonance imaging (dMRI) allows to estimate brain tissue microstructure as well as the connectivity of the white matter (known as tractography). Accurate estimation of the model parameters (by solving the inverse problem) is thus very important to infer the underlying biophysical tissue properties and fiber orientations. Although there has been extensive research on this topic with a myriad of dMRI models, most models use standard nonlinear optimization techniques and only provide an estimate of the model parameters without any information (quantification) about uncertainty in their estimation. Further, the effect of this uncertainty on the estimation of the derived dMRI microstructural measures downstream (e.g., fractional anisotropy) is often unknown and is rarely estimated. To address this issue, we first design a new deep-learning algorithm to identify the number of crossing fibers in each voxel. Then, at each voxel, we propose a robust likelihood-free deep learning method to estimate not only the mean estimate of the parameters of a multi-fiber dMRI model (e.g., the biexponential model), but also its full posterior distribution. The posterior distribution is then used to estimate the uncertainty in the model parameters as well as the derived measures. We perform several synthetic and in-vivo quantitative experiments to demonstrate the robustness of our approach for different noise levels and out-of-distribution test samples. Besides, our approach is computationally fast and requires an order of magnitude less time than standard nonlinear fitting techniques. The proposed method demonstrates much lower error (compared to existing methods) in estimating several metrics, including number of fibers in a voxel, fiber orientation, and tensor eigenvalues. The proposed methodology is quite general and can be used for the estimation of the parameters from any other dMRI model.","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"55 4","pages":"1-22"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139875202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zhengguo Tan, Patrick Alexander Liebig, R. Heidemann, F. B. Laun, Florian Knoll
Abstract The pursuit of high spatial-angular-temporal resolution for in vivo diffusion-weighted magnetic resonance imaging (DW-MRI) at ultra-high field strength (7 T and above) is important for understanding brain microstructure and function. Such pursuit, however, faces several technical challenges. First, increased off-resonance and shorter T2 relaxation require faster echo train readouts. Second, existing high-resolution DW-MRI techniques usually employ in-plane fully-sampled multi-shot EPI, which not only prolongs the scan time but also induces a high specific absorption rate (SAR) at 7 T. To address these challenges, we develop in this work navigator-based interleaved EPI (NAViEPI), which enforces the same effective echo spacing (ESP) between the imaging and the navigator echo. First, NAViEPI produces no distortion mismatch between the two echoes, and thus simplifies shot-to-shot phase variation correction. Second, NAViEPI allows for a large number of shots (e.g., >4) with undersampled iEPI acquisition, thereby enabling clinically-feasible high-resolution sub-millimeter protocols. To retain signal-to-noise ratio (SNR) and to reduce undersampling artifacts, we developed a ky-shift encoding among diffusion encodings to explore complementary k-q-space sampling. Moreover, we developed a novel joint reconstruction with overlapping locally low-rank regularization generalized to multi-band multi-shot acquisition at 7 T (dubbed JETS-NAViEPI). Our method is demonstrated with experimental results covering 1 mm isotropic resolution multi-b-value DWI and sub-millimeter in-plane resolution fast TRACE acquisition.
{"title":"Accelerated diffusion-weighted magnetic resonance imaging at 7 T: Joint reconstruction for shift-encoded navigator-based interleaved echo planar imaging (JETS-NAViEPI)","authors":"Zhengguo Tan, Patrick Alexander Liebig, R. Heidemann, F. B. Laun, Florian Knoll","doi":"10.1162/imag_a_00085","DOIUrl":"https://doi.org/10.1162/imag_a_00085","url":null,"abstract":"Abstract The pursuit of high spatial-angular-temporal resolution for in vivo diffusion-weighted magnetic resonance imaging (DW-MRI) at ultra-high field strength (7 T and above) is important in understanding brain microstructure and function. Such pursuit, however, faces several technical challenges. First, increased off-resonance and shorter T2 relaxation require faster echo train readouts. Second, existing high-resolution DW-MRI techniques usually employ in-plane fully-sampled multi-shot EPI, which not only prolongs the scan time but also induces a high specific absorption rate (SAR) at 7 T. To address these challenges, we develop in this work navigator-based interleaved EPI (NAViEPI) which enforces the same effective echo spacing (ESP) between the imaging and the navigator echo. First, NAViEPI renders no distortion mismatch between the two echoes, and thus simplifies shot-to-shot phase variation correction. Second, NAViEPI allows for a large number of shots (e.g., >4) with undersampled iEPI acquisition, thereby rendering clinically-feasible high-resolution sub-milliemeter protocols. To retain signal-to-noise ratio (SNR) and to reduce undersampling artifacts, we developed a ky-shift encoding among diffusion encodings to explore complementary k- q-space sampling. Moreover, we developed a novel joint reconstruction with overlapping locally low-rank regularization generalized to the multi-band multi-shot acquisition at 7 T (dubbed JETS-NAViEPI). Our method was demonstrated, with experimental results covering 1 mm isotropic resolution multi b-value DWI and sub-millimeter in-plane resolution fast TRACE acquisition.","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"14 2","pages":"1-15"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139885638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Puiu F. Balan, Qi Zhu, Xiaolian Li, Meiqi Niu, Lucija Rapan, Thomas Funck, Haiyan Wang, Rembrandt Bakker, N. Palomero-Gallagher, W. Vanduffel
Abstract Due to their fundamental relevance, the number of anatomical macaque brain templates is constantly growing. Novel templates aim to alleviate limitations of previously published atlases and offer the foundation to integrate multiscale multimodal data. Typical limitations of existing templates include their reliance on a single subject, their unimodality (usually only T1 or histological images), or a lack of anatomical detail. The MEBRAINS template overcomes these limitations by using a combination of T1 and T2 images from the same 10 animals (Macaca mulatta), which are averaged with the multi-brain toolbox for diffeomorphic registration and segmentation. The resulting volumetric T1 and T2 templates are supplemented with high-quality white and gray matter surfaces built with FreeSurfer. Human-curated segmentations of the pial surface, the white/gray matter interface, and major subcortical nuclei were used to analyze the relative quality of the MEBRAINS template. Additionally, 9 computed tomography (CT) scans of the same monkeys were registered to the T1 modality and co-registered to the template. Through its main features (multi-subject, multimodal, volume-and-surface, traditional, and deep learning-based segmentations), MEBRAINS aims to improve the integration of multimodal multi-scale macaque data and is quantitatively equal to, or better than, currently widely used macaque templates. We provide a detailed description of the algorithms/methods used to create the template, aiming to furnish future researchers with a map-like perspective that should facilitate identification of an optimal pipeline for the task at hand. Finally, recently published 3D maps of the macaque inferior parietal lobe, (pre)motor cortex, and prefrontal cortex were warped to the MEBRAINS surface template, thus populating it with a parcellation scheme based on cyto- and receptor architectonic analyses. The template is integrated in the EBRAINS and Scalable Brain Atlas web-based infrastructures, each of which comes with its own suite of spatial registration tools.
{"title":"MEBRAINS 1.0: A new population-based macaque atlas","authors":"Puiu F. Balan, Qi Zhu, Xiaolian Li, Meiqi Niu, Lucija Rapan, Thomas Funck, Haiyan Wang, Rembrandt Bakker, N. Palomero-Gallagher, W. Vanduffel","doi":"10.1162/imag_a_00077","DOIUrl":"https://doi.org/10.1162/imag_a_00077","url":null,"abstract":"Abstract Due to their fundamental relevance, the number of anatomical macaque brain templates is constantly growing. Novel templates aim to alleviate limitations of previously published atlases and offer the foundation to integrate multiscale multimodal data. Typical limitations of existing templates include their reliance on one subject, their unimodality (usually only T1 or histological images), or lack of anatomical details. The MEBRAINS template overcomes these limitations by using a combination of T1 and T2 images, from the same 10 animals (Macaca mulatta), which are averaged by the multi-brain toolbox for diffeomorphic registration and segmentation. The resulting volumetric T1 and T2 templates are supplemented with high-quality white and gray matter surfaces built with FreeSurfer. Human-curated segmentations of pial surface, the white/gray matter interface, and major subcortical nuclei were used to analyze the relative quality of the MEBRAINS template. Additionally, 9 computed tomography (CT) scans of the same monkeys were registered to the T1 modality and co-registered to the template. Through its main features (multi-subject, multimodal, volume-and-surface, traditional, and deep learning-based segmentations), MEBRAINS aims to improve integration of multimodal multi-scale macaque data and is quantitatively equal to, or better than, currently widely used macaque templates. We provide a detailed description of the algorithms/methods used to create the template aiming to furnish future researchers with a map-like perspective which should facilitate identification of an optimal pipeline for the task they have at hand. Finally, recently published 3D maps of the macaque inferior parietal lobe, (pre)motor and prefrontal cortex were warped to the MEBRAINS surface template, thus populating it with a parcellation scheme based on cyto- and receptor architectonic analyses. The template is integrated in the EBRAINS and Scalable Brain Atlas web-based infrastructures, each of which comes with its own suite of spatial registration tools.","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"116 23","pages":"1-26"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139684800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Skylar E. Stolte, A. Indahlastari, Jason Chen, Alejandro Albizu, Ayden L. Dunn, Samantha Pedersen, Kyle B. See, Adam J. Woods, Ruogu Fang
Abstract Whole-head segmentation from Magnetic Resonance Images (MRI) establishes the foundation for individualized computational models using the finite element method (FEM). This foundation paves the path for computer-aided solutions in fields such as non-invasive brain stimulation. Most current automatic head segmentation tools are developed using healthy young adults. Thus, they may neglect the older population, which is more prone to age-related structural decline such as brain atrophy. In this work, we present a new deep learning method called GRACE, which stands for General, Rapid, And Comprehensive whole-hEad tissue segmentation. GRACE is trained and validated on a novel dataset that consists of 177 MR-derived reference segmentations that have undergone meticulous manual correction and review. Each T1-weighted MRI volume is segmented into 11 tissue types, including white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. To the best of our knowledge, this work contains the largest manually corrected dataset to date in terms of the number of MRIs and segmented tissues. GRACE outperforms five freely available software tools and a traditional 3D U-Net on a five-tissue segmentation task, achieving an average Hausdorff Distance of 0.21 versus 0.36 for the runner-up (lower is better). GRACE can segment a whole-head MRI in about 3 seconds, while the fastest software tool takes about 3 minutes. In summary, GRACE segments a spectrum of tissue types from older adults’ T1-MRI scans with favorable accuracy and speed. The trained GRACE model is optimized on older adult heads to enable high-precision modeling in age-related brain disorders. To support open science, the GRACE code and trained weights are made available online and open to the research community upon publication at https://github.com/lab-smile/GRACE.
{"title":"Precise and rapid whole-head segmentation from magnetic resonance images of older adults using deep learning","authors":"Skylar E. Stolte, A. Indahlastari, Jason Chen, Alejandro Albizu, Ayden L. Dunn, Samantha Pedersen, Kyle B. See, Adam J. Woods, Ruogu Fang","doi":"10.1162/imag_a_00090","DOIUrl":"https://doi.org/10.1162/imag_a_00090","url":null,"abstract":"Abstract Whole-head segmentation from Magnetic Resonance Images (MRI) establishes the foundation for individualized computational models using finite element method (FEM). This foundation paves the path for computer-aided solutions in fields such as non-invasive brain stimulation. Most current automatic head segmentation tools are developed using healthy young adults. Thus, they may neglect the older population that is more prone to age-related structural decline such as brain atrophy. In this work, we present a new deep learning method called GRACE, which stands for General, Rapid, And Comprehensive whole-hEad tissue segmentation. GRACE is trained and validated on a novel dataset that consists of 177 manually corrected MR-derived reference segmentations that have undergone meticulous manual review. Each T1-weighted MRI volume is segmented into 11 tissue types, including white matter, grey matter, eyes, cerebrospinal fluid, air, blood vessel, cancellous bone, cortical bone, skin, fat, and muscle. To the best of our knowledge, this work contains the largest manually corrected dataset to date in terms of number of MRIs and segmented tissues. GRACE outperforms five freely available software tools and a traditional 3D U-Net on a five-tissue segmentation task. On this task, GRACE achieves an average Hausdorff Distance of 0.21, which exceeds the runner-up at an average Hausdorff Distance of 0.36. GRACE can segment a whole-head MRI in about 3 seconds, while the fastest software tool takes about 3 minutes. In summary, GRACE segments a spectrum of tissue types from older adults’ T1-MRI scans at favorable accuracy and speed. The trained GRACE model is optimized on older adult heads to enable high-precision modeling in age-related brain disorders. To support open science, the GRACE code and trained weights are made available online and open to the research community upon publication at https://github.com/lab-smile/GRACE.","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"80 10","pages":"1-21"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139824662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Valošek, Sandrine Bédard, M. Keřkovský, Tomáš Rohan, Julien Cohen-Adad
Abstract Measures of spinal cord morphometry computed from magnetic resonance images serve as relevant prognostic biomarkers for a range of spinal cord pathologies, including traumatic and non-traumatic spinal cord injury and neurodegenerative diseases. However, interpreting these imaging biomarkers is difficult due to considerable intra- and inter-subject variability. Yet, there is no clear consensus on a normalization method that would help reduce this variability, and more insight into the distribution of these morphometrics is needed. In this study, we computed a database of normative values for six commonly used measures of spinal cord morphometry: cross-sectional area, anteroposterior diameter, transverse diameter, compression ratio, eccentricity, and solidity. Normative values were computed from a large open-access dataset of healthy adult volunteers (N = 203) and were brought to the common space of the PAM50 spinal cord template using a newly proposed normalization method based on linear interpolation. Compared to traditional image-based registration, the proposed normalization approach does not involve image transformations and, therefore, does not introduce distortions of spinal cord anatomy. This is a crucial consideration in preserving the integrity of the spinal cord anatomy in conditions such as spinal cord injury. This new morphometric database allows researchers to normalize based on sex and age, thereby minimizing inter-subject variability associated with demographic and biological factors. The proposed methodology is open-source and accessible through the Spinal Cord Toolbox (SCT) v6.0 and higher.
{"title":"A database of the healthy human spinal cord morphometry in the PAM50 template space","authors":"J. Valošek, Sandrine Bédard, M. Keřkovský, Tomáš Rohan, Julien Cohen-Adad","doi":"10.1162/imag_a_00075","DOIUrl":"https://doi.org/10.1162/imag_a_00075","url":null,"abstract":"Abstract Measures of spinal cord morphometry computed from magnetic resonance images serve as relevant prognostic biomarkers for a range of spinal cord pathologies, including traumatic and non-traumatic spinal cord injury and neurodegenerative diseases. However, interpreting these imaging biomarkers is difficult due to considerable intra- and inter-subject variability. Yet, there is no clear consensus on a normalization method that would help reduce this variability and more insights into the distribution of these morphometrics are needed. In this study, we computed a database of normative values for six commonly used measures of spinal cord morphometry: cross-sectional area, anteroposterior diameter, transverse diameter, compression ratio, eccentricity, and solidity. Normative values were computed from a large open-access dataset of healthy adult volunteers (N = 203) and were brought to the common space of the PAM50 spinal cord template using a newly proposed normalization method based on linear interpolation. Compared to traditional image-based registration, the proposed normalization approach does not involve image transformations and, therefore, does not introduce distortions of spinal cord anatomy. This is a crucial consideration in preserving the integrity of the spinal cord anatomy in conditions such as spinal cord injury. This new morphometric database allows researchers to normalize based on sex and age, thereby minimizing inter-subject variability associated with demographic and biological factors. The proposed methodology is open-source and accessible through the Spinal Cord Toolbox (SCT) v6.0 and higher.","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"33 1","pages":"1-15"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139687429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efstathios D. Gennatas, Jamie Wren-Jarvis, Rachel Powers, Maia C. Lazerwitz, Ioanna Bourla, Lanya T. Cai, Hannah L. Choi, Robyn Chu, Kaitlyn J. Trimarchi, R. Garcia, Elysa J. Marco, Pratik Mukherjee
Abstract Neuroimaging shows volumetric alterations of gray matter in attention-deficit hyperactivity disorder (ADHD); however, results are conflicting. This may be due to heterogeneous phenotypic sampling and the limited sensitivity of volumetric analysis. Creating more homogeneous cohorts and investigating gray matter microstructure may yield meaningful biomarkers for scientific and clinical applications. Children with sensory processing dysfunction (SPD) have differences in white matter microstructure, but not gray matter volumetric differences. Approximately 40% of SPD children meet research criteria for ADHD. We apply deep learning segmentation of MRI to measure gray matter volume (GMV) and density (GMD) in SPD children with (SPD+ADHD) and without co-morbid ADHD (SPD-ADHD). We hypothesize that GMV and GMD are linked to ADHD, but with sex-specific regional patterns. We find that boys with SPD+ADHD have widespread reductions of GMD in the neocortex, limbic cortex, and cerebellum versus boys with SPD-ADHD. The greatest differences are in sensory cortex, with less involvement of the prefrontal regions associated with attention networks and impulse control. In contrast, ADHD-related changes in girls with SPD are found in brainstem nuclei responsible for dopaminergic, noradrenergic, and serotonergic neurotransmission. Hence, the neural correlates of ADHD in SPD are sexually dimorphic. In boys, ADHD may result from downstream effects of abnormal sensory cortical development.
{"title":"Gray matter correlates of attention-deficit hyperactivity disorder in boys versus girls with sensory processing dysfunction","authors":"Efstathios D. Gennatas, Jamie Wren-Jarvis, Rachel Powers, Maia C. Lazerwitz, Ioanna Bourla, Lanya T. Cai, Hannah L. Choi, Robyn Chu, Kaitlyn J. Trimarchi, R. Garcia, Elysa J. Marco, Pratik Mukherjee","doi":"10.1162/imag_a_00076","DOIUrl":"https://doi.org/10.1162/imag_a_00076","url":null,"abstract":"Abstract Neuroimaging shows volumetric alterations of gray matter in attention-deficit hyperactivity disorder (ADHD); however, results are conflicting. This may be due to heterogeneous phenotypic sampling and limited sensitivity of volumetric analysis. Creating more homogenous cohorts and investigating gray matter microstructure may yield meaningful biomarkers for scientific and clinical applications. Children with sensory processing dysfunction (SPD) have differences in white matter microstructure, but not gray matter volumetric differences. Approximately 40% of SPD children meet research criteria for ADHD. We apply deep learning segmentation of MRI to measure gray matter volume (GMV) and density (GMD) in SPD children with (SPD+ADHD) and without co-morbid ADHD (SPD-ADHD). We hypothesize GMV and GMD are linked to ADHD but with sex-specific regional patterns. We find boys with SPD+ADHD have widespread reduction of GMD in neocortex, limbic cortex, and cerebellum versus boys with SPD-ADHD. The greatest differences are in sensory cortex with less involvement of prefrontal regions associated with attention networks and impulse control. In contrast, changes of ADHD in girls with SPD are in brainstem nuclei responsible for dopaminergic, noradrenergic, and serotonergic neurotransmission. Hence, neural correlates of ADHD in SPD are sexually dimorphic. In boys, ADHD may result from downstream effects of abnormal sensory cortical development.","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"60 1","pages":"1-14"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139687791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Margaret Jane Moore, Amanda K. Robinson, J. Mattingley
Abstract Prediction has been shown to play a fundamental role in facilitating efficient perception of simple visual features such as orientation and motion, but it remains unclear whether expectations modulate neural representations of more complex stimuli. Here, we addressed this issue by characterising patterns of brain activity evoked by two-dimensional images of familiar, real-world objects which were either expected or unexpected based on a preceding cue. Participants (n = 30) viewed stimuli in rapid serial visual presentation (RSVP) streams which contained both high-fidelity and degraded (diffeomorphically warped) object images. Multivariate pattern analyses of electroencephalography (EEG) data were used to quantify and compare the degree of information represented in neural activity when stimuli were random (unpredictable), expected, or unexpected. Degraded images elicited reduced representational fidelity relative to high-fidelity images. However, degraded images were represented with improved fidelity when they were presented in expected relative to random sequence positions; and stimuli in unexpected sequence positions yielded reduced representational fidelity relative to random presentations. Most notably, neural responses to unexpected stimuli contained information pertaining to the expected (but not presented) stimulus. Debriefing at the conclusion of the experiment revealed that participants were not aware of the relationship between cue and target stimuli within the RSVP streams, suggesting that the differences in stimulus decoding between conditions arose in the absence of explicit predictive knowledge. Our findings extend fundamental understanding of how the brain detects and employs predictive relationships to modulate high-level visual perception.
{"title":"Expectation Modifies the Representational Fidelity of Complex Visual Objects","authors":"Margaret Jane Moore, Amanda K. Robinson, J. Mattingley","doi":"10.1162/imag_a_00083","DOIUrl":"https://doi.org/10.1162/imag_a_00083","url":null,"abstract":"Abstract Prediction has been shown to play a fundamental role in facilitating efficient perception of simple visual features such as orientation and motion, but it remains unclear whether expectations modulate neural representations of more complex stimuli. Here, we addressed this issue by characterising patterns of brain activity evoked by two-dimensional images of familiar, real-world objects which were either expected or unexpected based on a preceding cue. Participants (n = 30) viewed stimuli in rapid serial visual presentation (RSVP) streams which contained both high-fidelity and degraded (diffeomorphically warped) object images. Multivariate pattern analyses of electroencephalography (EEG) data were used to quantify and compare the degree of information represented in neural activity when stimuli were random (unpredictable), expected, or unexpected. Degraded images elicited reduced representational fidelity relative to high-fidelity images. However, degraded images were represented with improved fidelity when they were presented in expected relative to random sequence positions; and stimuli in unexpected sequence positions yielded reduced representational fidelity relative to random presentations. Most notably, neural responses to unexpected stimuli contained information pertaining to the expected (but not presented) stimulus. Debriefing at the conclusion of the experiment revealed that participants were not aware of the relationship between cue and target stimuli within the RSVP streams, suggesting that the differences in stimulus decoding between conditions arose in the absence of explicit predictive knowledge. Our findings extend fundamental understanding of how the brain detects and employs predictive relationships to modulate high-level visual perception.","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"20 4","pages":"1-14"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139688186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Elinor Thompson, A. Schroder, Tiantian He, Cameron Shand, Sonja Soskic, N. Oxtoby, F. Barkhof, Daniel C. Alexander
Abstract Cortical atrophy and aggregates of misfolded tau proteins are key hallmarks of Alzheimer’s disease. Computational models that simulate the propagation of pathogens between connected brain regions have been used to elucidate mechanistic information about the spread of these disease biomarkers, such as disease epicentres and spreading rates. However, the connectomes that are used as substrates for these models are known to contain modality-specific false positive and false negative connections, influenced by the biases inherent to the different methods for estimating connections in the brain. In this work, we compare five types of connectomes for modelling both tau and atrophy patterns with the network diffusion model, validated against tau PET and structural MRI data from individuals with either mild cognitive impairment or dementia. We then test the hypothesis that a joint connectome, with combined information from different modalities, provides an improved substrate for the model. We find that a combination of multimodal information helps the model to capture observed patterns of tau deposition and atrophy better than any single modality. This is validated with data from independent datasets. Overall, our findings suggest that combining connectivity measures into a single connectome can mitigate some of the biases inherent to each modality and facilitate more accurate models of pathology spread, thus aiding our ability to understand disease mechanisms and providing insight into the complementary information contained in different measures of brain connectivity.
{"title":"Combining multimodal connectivity information improves modelling of pathology spread in Alzheimer’s disease","authors":"Elinor Thompson, A. Schroder, Tiantian He, Cameron Shand, Sonja Soskic, N. Oxtoby, F. Barkhof, Daniel C. Alexander","doi":"10.1162/imag_a_00089","DOIUrl":"https://doi.org/10.1162/imag_a_00089","url":null,"abstract":"Abstract Cortical atrophy and aggregates of misfolded tau proteins are key hallmarks of Alzheimer’s disease. Computational models that simulate the propagation of pathogens between connected brain regions have been used to elucidate mechanistic information about the spread of these disease biomarkers, such as disease epicentres and spreading rates. However, the connectomes that are used as substrates for these models are known to contain modality-specific false positive and false negative connections, influenced by the biases inherent to the different methods for estimating connections in the brain. In this work, we compare five types of connectomes for modelling both tau and atrophy patterns with the network diffusion model, which are validated against tau PET and structural MRI data from individuals with either mild cognitive impairment or dementia. We then test the hypothesis that a joint connectome, with combined information from different modalities, provides an improved substrate for the model. We find that a combination of multimodal information helps the model to capture observed patterns of tau deposition and atrophy better than any single modality. This is validated with data from independent datasets. Overall, our findings suggest that combining connectivity measures into a single connectome can mitigate some of the biases inherent to each modality and facilitate more accurate models of pathology spread, thus aiding our ability to understand disease mechanisms, and providing insight into the complementary information contained in different measures of brain connectivity","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"297 1","pages":"1-19"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139824120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alexander Jaffray, C. Kames, Michelle Medina, Christina Graf, Adam Clansey, Alexander Rauscher
Abstract Functional Magnetic Resonance Imaging (fMRI) is typically acquired using gradient-echo sequences with a long echo time at high temporal resolution. Gradient-echo sequences inherently encode information about the magnetic field in the often discarded image phase. We demonstrate a method for processing the phase of reconstructed fMRI data to isolate temporal fluctuations in the harmonic fields associated with respiration by solving a blind source separation problem. The fMRI-derived field fluctuations are shown to be in strong agreement with breathing belt data acquired during the same scan. This work presents a concurrent, hardware-free measurement of respiration-induced field fluctuations, providing a respiratory regressor for fMRI analysis which is independent of local contrast changes, and with potential applications in image reconstruction and fMRI analysis.
{"title":"Detection of respiration-induced field modulations in fMRI: A concurrent and navigator-free approach","authors":"Alexander Jaffray, C. Kames, Michelle Medina, Christina Graf, Adam Clansey, Alexander Rauscher","doi":"10.1162/imag_a_00091","DOIUrl":"https://doi.org/10.1162/imag_a_00091","url":null,"abstract":"Abstract Functional Magnetic Resonance Imaging (fMRI) is typically acquired using gradient-echo sequences with a long echo time at high temporal resolution. Gradient-echo sequences inherently encode information about the magnetic field in the often discarded image phase. We demonstrate a method for processing the phase of reconstructed fMRI data to isolate temporal fluctuations in the harmonic fields associated with respiration by solving a blind source separation problem. The fMRI-derived field fluctuations are shown to be in strong agreement with breathing belt data acquired during the same scan. This work presents a concurrent, hardware-free measurement of respiration-induced field fluctuations, providing a respiratory regressor for fMRI analysis which is independent of local contrast changes, and with potential applications in image reconstruction and fMRI analysis.","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"78 ","pages":"1-13"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139822203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract After seeing one solid object apparently passing through another, or a person taking the long route to a destination when a shortcut was available, human adults classify those events as surprising. When tested on these events in violation-of-expectation (VOE) experiments, infants look longer at the same outcomes, relative to similar but expected outcomes. What cognitive processes underlie these judgments from adults, and perhaps infants’ sustained attention to these events? As one approach to test this question, we used functional magnetic resonance imaging (fMRI) to scan the brains of human adults (total N = 49, 22 female, mean age of 26 years) while they viewed stimuli that were originally designed to test for physical and psychological expectations in infants. We examined non-mutually exclusive candidates for the processes underlying the VOE effect, including domain-general processes, like visual prediction error and curiosity, and domain-specific processes, like prediction error with respect to distinctively physical and psychological expectations (objects are solid; agents behave rationally). Early visual regions did not distinguish between expected and unexpected events from either domain. By contrast, multiple demand regions, involved in goal-directed attention, responded more to unexpected events in both domains, providing evidence for domain-general goal-directed attention as a mechanism for VOE. Left supramarginal gyrus (LSMG) was engaged during physical prediction and responded preferentially to unexpected events from the physical domain, providing evidence for domain-specific physical prediction error. Thus, in adult brains, violations of physical and psychological expectations involve domain-specific, and domain-general, though not purely visual, computations.
{"title":"Violations of physical and psychological expectations in the human adult brain","authors":"Shari Liu, Kirsten Lydic, Lingjie Mei, Rebecca Saxe","doi":"10.1162/imag_a_00068","DOIUrl":"https://doi.org/10.1162/imag_a_00068","url":null,"abstract":"Abstract After seeing one solid object apparently passing through another, or a person taking the long route to a destination when a shortcut was available, human adults classify those events as surprising. When tested on these events in violation-of-expectation (VOE) experiments, infants look longer at the same outcomes, relative to similar but expected outcomes. What cognitive processes underlie these judgments from adults, and perhaps infants’ sustained attention to these events? As one approach to test this question, we used functional magnetic resonance imaging (fMRI) to scan the brains of human adults (total N = 49, 22 female, mean age of 26 years) while they viewed stimuli that were originally designed to test for physical and psychological expectations in infants. We examined non-mutually exclusive candidates for the processes underlying the VOE effect, including domain-general processes, like visual prediction error and curiosity, and domain-specific processes, like prediction error with respect to distinctively physical and psychological expectations (objects are solid; agents behave rationally). Early visual regions did not distinguish between expected and unexpected events from either domain. By contrast, multiple demand regions, involved in goal-directed attention, responded more to unexpected events in both domains, providing evidence for domain-general goal-directed attention as a mechanism for VOE. Left supramarginal gyrus (LSMG) was engaged during physical prediction and responded preferentially to unexpected events from the physical domain, providing evidence for domain-specific physical prediction error. Thus, in adult brains, violations of physical and psychological expectations involve domain-specific, and domain-general, though not purely visual, computations.","PeriodicalId":507939,"journal":{"name":"Imaging Neuroscience","volume":"57 1","pages":"1-25"},"PeriodicalIF":0.0,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139686844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}