
Frontiers in radiology: latest publications

Following changes in brain structure and function with multimodal MRI in a year-long prospective study on the development of Type 2 diabetes.
Pub Date: 2025-02-13 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1510850
Yingjie Wang, Richard Ortiz, Arnold Chang, Taufiq Nasseef, Natalia Rubalcaba, Chandler Munson, Ashley Ghaw, Shreyas Balaji, Yeani Kwon, Deepti Athreya, Shruti Kedharnath, Praveen P Kulkarni, Craig F Ferris

Aims: To follow disease progression in a rat model of Type 2 diabetes using multimodal MRI to assess changes in brain structure and function.

Material and methods: Female rats (n = 20) were fed a high-fat/high-fructose diet or lab chow starting at 90 days of age. Diet-fed rats were given streptozotocin to compromise pancreatic beta cells, while chow-fed controls received vehicle. At intervals of 3, 6, 9, and 12 months, rats were tested for changes in behavior and sensitivity to pain. Brain structure and function were assessed using voxel-based morphometry, diffusion-weighted imaging, and functional connectivity.

Results: Diet-fed rats presented with elevated plasma glucose levels as early as 3 months and a significant gain in weight by 6 months compared to controls. There were no significant changes in cognitive or motor behavior over the yearlong study, but there was a significant increase in sensitivity to peripheral pain in diet-fed rats. There were region-specific decreases in brain volume, e.g., in the basal ganglia, thalamus, and brainstem, in diet-fed rats. These same regions showed elevated measures of water diffusivity, evidence of putative vasogenic edema. By 6 months, widespread hyperconnectivity was observed across multiple brain regions. By 12 months, only the cerebellum and hippocampus showed increased connectivity, while the hypothalamus showed decreased connectivity in diet-fed rats.

Conclusions: Noninvasive multimodal MRI identified site-specific changes in brain structure and function in a yearlong longitudinal study of Type 2 diabetes in rats. The identified diabetes-induced neuropathological sites may serve as biomarkers for evaluating the efficacy of novel therapeutics.

Citations: 0
Editorial: Artificial intelligence applications for cancer diagnosis in radiology.
Pub Date: 2025-01-29 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1493783
Abhirup Banerjee, Hongming Shan, Ruibin Feng
{"title":"Editorial: Artificial intelligence applications for cancer diagnosis in radiology.","authors":"Abhirup Banerjee, Hongming Shan, Ruibin Feng","doi":"10.3389/fradi.2025.1493783","DOIUrl":"10.3389/fradi.2025.1493783","url":null,"abstract":"","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"5 ","pages":"1493783"},"PeriodicalIF":0.0,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11813860/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143411823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of modelled diffusion-derived electrical conductivities found using magnetic resonance imaging.
Pub Date: 2025-01-22 eCollection Date: 2025-01-01 DOI: 10.3389/fradi.2025.1492479
Sasha Hakhu, Leland S Hu, Scott Beeman, Rosalind J Sadleir

Introduction: Magnetic resonance-based electrical conductivity imaging offers a promising new contrast mechanism to enhance disease diagnosis. Conductivity tensor imaging (CTI) combines data from MR diffusion microstructure imaging to reconstruct electrodeless low-frequency conductivity images. However, different microstructure imaging methods rely on varying diffusion models and parameters, leading to divergent tissue conductivity estimates. This study investigates the variability in conductivity predictions across different microstructure models and evaluates their alignment with experimental observations.

Methods: We used publicly available diffusion databases from neurotypical adults to extract microstructure parameters for three diffusion-based brain models: Neurite Orientation Dispersion and Density Imaging (NODDI), Soma and Neurite Density Imaging (SANDI), and the Spherical Mean Technique (SMT). Conductivity predictions were calculated for gray matter (GM) and white matter (WM) tissues using each model. Comparative analyses were performed to assess the range of predicted conductivities and the consistency between bilateral tissue conductivities for each method.

Results: Significant variability in conductivity estimates was observed across the three models. Each method predicted distinct conductivity values for GM and WM tissues, with notable differences in the range of conductivities observed for specific tissue examples. Despite the variability, many WM and GM tissues exhibited symmetric bilateral conductivities within each microstructure model. SMT yielded conductivity estimates closer to values reported in experimental studies, while none of the methods aligned with spectroscopic models of tissue conductivity.

Discussion and conclusion: Our findings highlight substantial discrepancies in tissue conductivity estimates generated by different diffusion models, underscoring the challenge of selecting an appropriate model for low-frequency electrical conductivity imaging. SMT demonstrated better alignment with experimental results; however, other microstructure models may produce better tissue discrimination.

Citations: 0
Comparison of dark-field chest radiography and CT for the assessment of COVID-19 pneumonia.
Pub Date: 2025-01-14 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1487895
Florian T Gassert, Henriette Bast, Theresa Urban, Manuela Frank, Felix G Gassert, Konstantin Willer, Rafael C Schick, Bernhard Renger, Thomas Koehler, Alexandra Karrer, Andreas P Sauter, Alexander A Fingerle, Marcus R Makowski, Franz Pfeiffer, Daniela Pfeiffer

Background: Dark-field chest radiography allows the assessment of the structural integrity of the alveoli by exploiting the wave properties of x-rays.

Purpose: To compare the qualitative and quantitative features of dark-field chest radiography with conventional CT imaging in patients with COVID-19 pneumonia.

Materials and methods: In this prospective study, conducted from May 2020 to December 2020, patients aged at least 18 years who underwent chest CT for clinically suspected COVID-19 infection were screened for participation. Inclusion criteria were a CO-RADS score ≥4, the ability to consent to the procedure, and the ability to stand upright without help. Participants were examined with a clinical dark-field chest radiography prototype. For comparison, a healthy control cohort of 40 subjects was evaluated. Using Spearman's correlation coefficient, correlations were tested between the dark-field coefficient and both the CT-based COVID-19 index and the visual total CT score, as well as between the visual total dark-field score and the visual total CT score.

Results: A total of 98 participants [mean age 58 ± 14 (standard deviation) years; 59 men] were studied. The areas of signal intensity reduction observed in dark-field images showed a strong correlation with infiltrates identified on CT scans. The dark-field coefficient had a negative correlation with both the quantitative CT-based COVID-19 index (r = -.34, p = .001) and the overall CT score used for visual grading of COVID-19 severity (r = -.44, p < .001). The total visual dark-field score for the presence of COVID-19 was positively correlated to the total CT score for visual COVID-19 severity grading (r = .85, p < .001).

Conclusion: COVID-19 pneumonia-induced signal intensity losses in dark-field chest radiographs are consistent with CT-based findings, showing the technique's potential for COVID-19 assessment.
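
The correlation analysis described above relies on Spearman's rank correlation. The sketch below is a minimal illustration of how such an analysis could be run, assuming SciPy is available; the score arrays are hypothetical stand-ins for per-participant dark-field coefficients and CT-based COVID-19 indices, not the study's data.

```python
# Minimal sketch of a Spearman rank-correlation analysis using SciPy.
# The arrays below are hypothetical stand-ins, not the study's measurements.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-participant scores (n = 98).
dark_field_coeff = rng.uniform(1.5, 3.5, size=98)
ct_covid_index = 100 - 20 * dark_field_coeff + rng.normal(0, 5, size=98)

# Spearman's rank correlation coefficient and its p-value.
r, p = spearmanr(dark_field_coeff, ct_covid_index)
print(f"Spearman r = {r:.2f}, p = {p:.3g}")
```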

Citations: 0
Current state and promise of user-centered design to harness explainable AI in clinical decision-support systems for patients with CNS tumors.
Pub Date: 2025-01-13 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1433457
Eric W Prince, David M Mirsky, Todd C Hankinson, Carsten Görg

In neuro-oncology, MR imaging is crucial for obtaining detailed brain images to identify neoplasms, plan treatment, guide surgical intervention, and monitor the tumor's response. Recent AI advances in neuroimaging have promising applications in neuro-oncology, including guiding clinical decisions and improving patient management. However, the lack of clarity on how AI arrives at predictions has hindered its clinical translation. Explainable AI (XAI) methods aim to improve trustworthiness and informativeness, but their success depends on considering end-users' (clinicians') specific context and preferences. User-Centered Design (UCD) prioritizes user needs in an iterative design process, involving users throughout, providing an opportunity to design XAI systems tailored to clinical neuro-oncology. This review focuses on the intersection of MR imaging interpretation for neuro-oncology patient management, explainable AI for clinical decision support, and user-centered design. We provide a resource that organizes the necessary concepts, including design and evaluation, clinical translation, user experience and efficiency enhancement, and AI for improved clinical outcomes in neuro-oncology patient management. We discuss the importance of multi-disciplinary skills and user-centered design in creating successful neuro-oncology AI systems. We also discuss how explainable AI tools, embedded in a human-centered decision-making process and different from fully automated solutions, can potentially enhance clinician performance. Following UCD principles to build trust, minimize errors and bias, and create adaptable software has the promise of meeting the needs and expectations of healthcare professionals.

Citations: 0
DreamOn: a data augmentation strategy to narrow the robustness gap between expert radiologists and deep learning classifiers.
Pub Date: 2024-12-19 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1420545
Luc Lerch, Lukas S Huber, Amith Kamath, Alexander Pöllinger, Aurélie Pahud de Mortanges, Verena C Obmann, Florian Dammann, Walter Senn, Mauricio Reyes

Purpose: Successful performance of deep learning models for medical image analysis is highly dependent on the quality of the images being analysed. Factors like differences in imaging equipment and calibration, as well as patient-specific factors such as movements or biological variability (e.g., tissue density), lead to a large variability in the quality of obtained medical images. Consequently, robustness against the presence of noise is a crucial factor for the application of deep learning models in clinical contexts.

Materials and methods: We evaluate the effect of various data augmentation strategies on the robustness of a ResNet-18 trained to classify breast ultrasound images and benchmark the performance against trained human radiologists. Additionally, we introduce DreamOn, a novel, biologically inspired data augmentation strategy for medical image analysis. DreamOn uses a conditional generative adversarial network (GAN) to generate REM-dream-inspired interpolations of training images.

Results: We find that while available data augmentation approaches substantially improve robustness compared to models trained without any data augmentation, radiologists outperform models on noisy images. Using DreamOn data augmentation, we obtain a substantial improvement in robustness in the high noise regime.

Conclusions: We show that REM-dream-inspired conditional GAN-based data augmentation is a promising approach to improving deep learning model robustness against noise perturbations in medical imaging. Additionally, we highlight a gap in robustness between deep learning models and human experts, emphasizing the imperative for ongoing developments in AI to match human diagnostic expertise.
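
DreamOn's GAN-based interpolation is more involved than a short snippet can show. As a generic illustration of interpolation-based augmentation only, the sketch below applies a mixup-style linear blend of two training images and their labels; this is a simplified stand-in, not the authors' method, and all names and values are hypothetical.

```python
# Generic mixup-style interpolation of two training samples (illustration only;
# DreamOn itself uses a conditional GAN rather than linear blending).
import numpy as np

def mixup_pair(x1, x2, y1, y2, alpha=0.4, rng=None):
    """Linearly interpolate two images and their one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x_mix = lam * x1 + (1.0 - lam) * x2   # blended image
    y_mix = lam * y1 + (1.0 - lam) * y2   # blended (soft) label
    return x_mix, y_mix

# Hypothetical grayscale ultrasound images and one-hot class labels.
rng = np.random.default_rng(1)
img_a, img_b = rng.random((128, 128)), rng.random((128, 128))
lab_a, lab_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x_aug, y_aug = mixup_pair(img_a, img_b, lab_a, lab_b, rng=rng)
```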

Citations: 0
Editorial: Advances in artificial intelligence and machine learning applications for the imaging of bone and soft tissue tumors.
Pub Date: 2024-12-17 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1523389
Brandon K K Fields, Bino A Varghese, George R Matcuk
{"title":"Editorial: Advances in artificial intelligence and machine learning applications for the imaging of bone and soft tissue tumors.","authors":"Brandon K K Fields, Bino A Varghese, George R Matcuk","doi":"10.3389/fradi.2024.1523389","DOIUrl":"10.3389/fradi.2024.1523389","url":null,"abstract":"","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"4 ","pages":"1523389"},"PeriodicalIF":0.0,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11685185/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142916561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Synthesis of MR fingerprinting information from magnitude-only MR imaging data using a parallelized, multi network U-Net convolutional neural network.
Pub Date: 2024-12-16 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1498411
Kiaran P McGee, Yi Sui, Robert J Witte, Ananya Panda, Norbert G Campeau, Thomaz R Mostardeiro, Nahil Sobh, Umberto Ravaioli, Shuyue Lucia Zhang, Kianoush Falahkheirkhah, Nicholas B Larson, Christopher G Schwarz, Jeffrey L Gunter

Background: MR fingerprinting (MRF) is a novel method for quantitative assessment of in vivo MR relaxometry that has shown high precision and accuracy. However, the method requires data acquisition using customized, complex acquisition strategies and dedicated post processing methods thereby limiting its widespread application.

Objective: To develop a deep learning (DL) network for synthesizing MRF signals from conventional magnitude-only MR imaging data and to compare the results to the actual MRF signal acquired.

Methods: A U-Net DL network was developed to synthesize MRF signals from magnitude-only 3D T1-weighted brain MRI data acquired from 37 volunteers aged between 21 and 62 years. Network performance was evaluated by comparing the relaxometry data (T1, T2) generated from dictionary matching of the deep-learning-synthesized and actual MRF data from 47 segmented anatomic regions. Clustered bootstrapping involving 10,000 bootstraps, followed by calculation of the concordance correlation coefficient, was performed for both T1 and T2 MRF data pairs. The 95% confidence limits and the mean difference between true and DL relaxometry values were also calculated.

Results: The concordance correlation coefficients (and 95% confidence limits) for the T1 and T2 MRF data pairs over the 47 anatomic segments were 0.8793 (0.8136-0.9383) and 0.9078 (0.8981-0.9145), respectively. The mean differences (and 95% confidence limits) were 48.23 (23.0-77.3) s and 2.02 (-1.4 to 4.8) s.

Conclusion: It is possible to synthesize MRF signals from MRI data using a DL network, thereby creating the potential for performing quantitative relaxometry assessment without the need for a dedicated MRF pulse sequence.
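
The agreement metric reported above is the concordance correlation coefficient (Lin's CCC). Below is a minimal sketch of how it can be computed for one pair of relaxometry estimates; the arrays are hypothetical T1 values rather than the study's data, and the clustered bootstrap around the estimate is omitted.

```python
# Minimal sketch of Lin's concordance correlation coefficient (CCC) for paired
# relaxometry estimates (hypothetical values, no clustered bootstrap shown).
import numpy as np

def concordance_correlation(x, y):
    """Lin's CCC: 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    covariance = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * covariance / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical per-region T1 estimates from true vs. DL-synthesized MRF.
rng = np.random.default_rng(2)
t1_true = rng.uniform(800, 1600, size=47)
t1_dl = t1_true + rng.normal(0, 60, size=47)
print(f"CCC = {concordance_correlation(t1_true, t1_dl):.3f}")
```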

Citations: 0
Pneumothorax detection and segmentation from chest X-ray radiographs using a patch-based fully convolutional encoder-decoder network.
Pub Date: 2024-12-11 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1424065
Jakov Ivan S Dumbrique, Reynan B Hernandez, Juan Miguel L Cruz, Ryan M Pagdanganan, Prospero C Naval

Pneumothorax, a life-threatening condition characterized by air accumulation in the pleural cavity, requires early and accurate detection for optimal patient outcomes. Chest X-ray radiographs are a common diagnostic tool due to their speed and affordability. However, detecting pneumothorax can be challenging for radiologists because the sole visual indicator is often a thin displaced pleural line. This research explores deep learning techniques to automate and improve the detection and segmentation of pneumothorax from chest X-ray radiographs. We propose a novel architecture that combines the advantages of fully convolutional neural networks (FCNNs) and Vision Transformers (ViTs) while using only convolutional modules to avoid the quadratic complexity of ViT's self-attention mechanism. This architecture utilizes a patch-based encoder-decoder structure with skip connections to effectively combine high-level and low-level features. Compared to prior research and baseline FCNNs, our model demonstrates significantly higher accuracy in detection and segmentation while maintaining computational efficiency. This is evident on two datasets: (1) the SIIM-ACR Pneumothorax Segmentation dataset and (2) a novel dataset we curated from The Medical City, a private hospital in the Philippines. Ablation studies further reveal that using a mixed Tversky and Focal loss function significantly improves performance compared to using solely the Tversky loss. Our findings suggest our model has the potential to improve diagnostic accuracy and efficiency in pneumothorax detection, potentially aiding radiologists in clinical settings.
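
The ablation result above refers to a mixed Tversky and focal loss. The sketch below shows one plausible way such a combination could look for binary segmentation probability maps; the weighting, the α/β/γ values, and the NumPy formulation are assumptions for illustration, not the paper's exact implementation.

```python
# Illustrative mixed Tversky + focal loss for binary segmentation probabilities.
# Weights and hyperparameters are assumptions, not the paper's exact settings.
import numpy as np

def tversky_loss(probs, target, alpha=0.3, beta=0.7, eps=1e-7):
    tp = np.sum(probs * target)                       # soft true positives
    fp = np.sum(probs * (1.0 - target))               # soft false positives
    fn = np.sum((1.0 - probs) * target)               # soft false negatives
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index

def focal_loss(probs, target, gamma=2.0, eps=1e-7):
    p_t = np.where(target == 1, probs, 1.0 - probs)   # probability of the true class
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps))

def mixed_loss(probs, target, w=0.5):
    return w * tversky_loss(probs, target) + (1.0 - w) * focal_loss(probs, target)

# Hypothetical predicted probability map and binary pneumothorax mask.
rng = np.random.default_rng(3)
pred = rng.random((64, 64))
mask = (rng.random((64, 64)) > 0.9).astype(float)
print(f"mixed loss = {mixed_loss(pred, mask):.4f}")
```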

Citations: 0
Language task-based fMRI analysis using machine learning and deep learning.
Pub Date: 2024-11-27 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1495181
Elaine Kuan, Viktor Vegh, John Phamnguyen, Kieran O'Brien, Amanda Hammond, David Reutens

Introduction: Task-based language fMRI is a non-invasive method of identifying brain regions subserving language; it is used to plan neurosurgical resections that potentially encroach on eloquent regions. The use of unstructured fMRI paradigms, such as naturalistic fMRI, to map language is of increasing interest. Their analysis necessitates the use of alternative methods such as machine learning (ML) and deep learning (DL) because task regressors may be difficult to define in these paradigms.

Methods: Using task-based language fMRI as a starting point, this study investigates the use of different categories of ML and DL algorithms to identify brain regions subserving language. Data comprising seven task-based language fMRI paradigms were collected from 26 individuals, and ML and DL models were trained to classify voxel-wise fMRI time series.

Results: The general machine learning and the interval-based methods were the most promising in identifying language areas using fMRI time series classification. The general machine learning method achieved a mean whole-brain Area Under the Receiver Operating Characteristic Curve (AUC) of 0.97 ± 0.03, a mean Dice coefficient of 0.6 ± 0.34, and a mean Euclidean distance of 2.7 ± 2.4 mm between activation peaks across the evaluated regions of interest. The interval-based method achieved a mean whole-brain AUC of 0.96 ± 0.03, a mean Dice coefficient of 0.61 ± 0.33, and a mean Euclidean distance of 3.3 ± 2.7 mm between activation peaks across the evaluated regions of interest.

Discussion: This study demonstrates the utility of different ML and DL methods in classifying task-based language fMRI time series. A potential application of these methods is the identification of language activation from unstructured paradigms.
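
Two of the evaluation metrics quoted above, the Dice coefficient between predicted and reference language maps and the Euclidean distance between activation peaks, are straightforward to compute. The sketch below is a minimal illustration on hypothetical activation maps with an assumed voxel size; it is not the study's evaluation code.

```python
# Minimal sketch of the Dice coefficient and peak-to-peak Euclidean distance
# for two activation maps (hypothetical data, illustration only).
import numpy as np

def dice_coefficient(a, b, eps=1e-7):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b) + eps)

def peak_distance(map_a, map_b, voxel_size_mm=2.0):
    """Euclidean distance (mm) between the voxels with maximal values."""
    peak_a = np.array(np.unravel_index(np.argmax(map_a), map_a.shape))
    peak_b = np.array(np.unravel_index(np.argmax(map_b), map_b.shape))
    return float(np.linalg.norm((peak_a - peak_b) * voxel_size_mm))

# Hypothetical predicted and reference activation maps.
rng = np.random.default_rng(4)
pred_map = rng.random((32, 32, 32))
ref_map = rng.random((32, 32, 32))
print(f"Dice = {dice_coefficient(pred_map > 0.7, ref_map > 0.7):.2f}")
print(f"Peak distance = {peak_distance(pred_map, ref_map):.1f} mm")
```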

Citations: 0