
Frontiers in radiology: latest publications

Utility of multimodal longitudinal imaging data for dynamic prediction of cardiovascular and renal disease: the CARDIA study.
Pub Date : 2024-02-27 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1269023
Hieu Nguyen, Henrique D Vasconcellos, Kimberley Keck, Jeffrey Carr, Lenore J Launer, Eliseo Guallar, João A C Lima, Bharath Ambale-Venkatesh

Background: Medical examinations contain repeatedly measured data from multiple visits, including imaging variables collected from different modalities. However, the utility of such data for the prediction of time-to-event outcomes is unknown, and typically only a fraction of the data is used for risk prediction. We hypothesized that multimodal longitudinal imaging data could improve dynamic prognosis of cardiovascular and renal disease (CVRD).

Methods: In a multi-center cohort of 5,114 CARDIA participants, we included 166 longitudinal imaging variables from five imaging modalities: echocardiography (Echo), cardiac and abdominal computed tomography (CT), dual-energy x-ray absorptiometry (DEXA), and brain magnetic resonance imaging (MRI), collected from young adulthood to mid-life over 30 years (1985-2016). Dynamic survival analysis of CVRD events was performed using machine learning methods (Dynamic-DeepHit, LTRCforest, and extended Cox models for time-varying covariates). Risk probabilities were continuously updated as new data were collected. Model performance was assessed using integrated AUC and C-index and compared to traditional risk factors.
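As an illustration of this kind of analysis (not the authors' code), the sketch below fits an extended Cox model with time-varying covariates and computes a concordance index using the lifelines library; the toy data, column names, and covariates are hypothetical.

```python
# Minimal sketch: extended Cox model with time-varying covariates plus C-index.
# Toy long-format data and column names are hypothetical, not the CARDIA data.
import pandas as pd
from lifelines import CoxTimeVaryingFitter
from lifelines.utils import concordance_index

# One row per participant per visit interval (start, stop] with imaging covariates.
df = pd.DataFrame({
    "id":           [1, 1, 2, 2, 3, 3, 4, 4],
    "start":        [0, 10, 0, 12, 0, 9, 0, 11],
    "stop":         [10, 25, 12, 30, 9, 20, 11, 28],
    "echo_lv_mass": [80, 95, 70, 72, 110, 118, 90, 92],   # hypothetical Echo variable
    "ct_calcium":   [0, 15, 0, 0, 40, 55, 5, 10],          # hypothetical CT variable
    "event":        [0, 1, 0, 0, 0, 1, 0, 0],              # CVRD event in the interval
})

ctv = CoxTimeVaryingFitter(penalizer=0.1)
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")

# Rank participants by the partial hazard at their most recent visit and
# summarize discrimination with a concordance index.
last = df.groupby("id").last()
risk = ctv.predict_partial_hazard(last[["echo_lv_mass", "ct_calcium"]])
print(f"C-index: {concordance_index(last['stop'], -risk, last['event']):.2f}")
```

Re-ranking risk each time a new visit row arrives is, in essence, the dynamic prediction described above.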

Results: Longitudinal imaging data, even when irregularly collected with high missing rates, improved dynamic CVRD prediction from young adulthood through midlife (gains of 0.03 in integrated AUC and up to 0.05 in C-index over traditional risk factors; best model's C-index = 0.80-0.83 up to 20 years from baseline). Among imaging variables, Echo and CT variables contributed significantly to improved risk estimation. Echo measured in early adulthood predicted midlife CVRD risk almost as well as Echo measured 10-15 years later (0.01 C-index difference). The most recent CT exam provided the most accurate short-term risk estimation. Brain MRI markers provided information complementary to the cardiac Echo and CT variables, leading to slightly improved prediction.

Conclusions: Longitudinal multimodal imaging data readily collected from follow-up exams can improve dynamic CVRD prediction. Echocardiography measured early can provide good long-term risk estimation, while CT/calcium scoring variables carry atherosclerotic signatures that benefit more immediate risk assessment starting in middle age.

Citations: 0
Surviving ChatGPT in healthcare.
Pub Date : 2024-02-23 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1224682
Zhengliang Liu, Lu Zhang, Zihao Wu, Xiaowei Yu, Chao Cao, Haixing Dai, Ninghao Liu, Jun Liu, Wei Liu, Quanzheng Li, Dinggang Shen, Xiang Li, Dajiang Zhu, Tianming Liu

At the dawn of Artificial General Intelligence (AGI), the emergence of large language models such as ChatGPT shows promise in revolutionizing healthcare by improving patient care, expanding medical access, and optimizing clinical processes. However, their integration into healthcare systems requires careful consideration of potential risks, such as inaccurate medical advice, patient privacy violations, the creation of falsified documents or images, overreliance on AGI in medical education, and the perpetuation of biases. It is crucial to implement proper oversight and regulation to address these risks, ensuring the safe and effective incorporation of AGI technologies into healthcare systems. By acknowledging and mitigating these challenges, AGI can be harnessed to enhance patient care, medical knowledge, and healthcare processes, ultimately benefiting society as a whole.

Citations: 0
Practical guidance to identify and troubleshoot suboptimal DSC-MRI results.
Pub Date : 2024-02-20 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1307586
Melissa A Prah, Kathleen M Schmainda

Relative cerebral blood volume (rCBV) derived from dynamic susceptibility contrast (DSC) perfusion MR imaging (pMRI) has been shown to be a robust marker of neuroradiological tumor burden. Recent consensus recommendations on pMRI acquisition strategies have provided a pathway for pMRI inclusion in diverse patient care centers, regardless of size or experience. However, even with proper implementation and execution of the DSC-MRI protocol, issues will arise that many centers may not easily recognize or be aware of. Furthermore, missed pMRI issues are not always apparent in the resulting rCBV images, potentiating inaccurate or missed radiological diagnoses. Therefore, we gathered true-to-life examples from our database of DSC-MRI datasets showcasing breakdowns in acquisition, postprocessing, and interpretation, along with appropriate mitigation strategies where possible. The pMRI issues addressed include those related to image acquisition and postprocessing, with a focus on contrast agent administration, timing, and rate, signal-to-noise quality, and susceptibility artifact. The goal of this work is to provide guidance to minimize and recognize pMRI issues so that only quality data are interpreted.
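To make the quantities involved concrete, here is a hypothetical numpy sketch of the standard DSC conversion from a signal-time curve to a relaxation-rate change (ΔR2*) curve and a simple, non-leakage-corrected rCBV estimate; the echo time, temporal resolution, baseline window, and synthetic bolus curve are all assumptions, not values from this article.

```python
# Hypothetical sketch: DSC signal-time curve -> delta-R2* -> uncorrected rCBV.
# TE, temporal resolution, baseline window, and the synthetic bolus are assumed.
import numpy as np

TE = 0.030                      # echo time in seconds (assumed)
dt = 1.5                        # temporal resolution in seconds (assumed)
t = np.arange(0, 120, dt)

# Synthetic signal: flat baseline with a Gaussian-shaped bolus-induced signal drop.
baseline = 1000.0
signal = baseline - 350.0 * np.exp(-((t - 40.0) ** 2) / (2 * 5.0 ** 2))

s0 = signal[:15].mean()         # pre-bolus baseline signal S0 (first 15 frames)

# Standard conversion: delta R2*(t) = -(1/TE) * ln(S(t) / S0).
delta_r2s = -(1.0 / TE) * np.log(signal / s0)

# Uncorrected relative CBV: area under the delta-R2* curve (simple Riemann sum).
rcbv = float(np.sum(delta_r2s) * dt)
print(f"Relative CBV (arbitrary units): {rcbv:.1f}")
```

A poorly timed injection or a noisy baseline shows up directly in S0 and the bolus shape, which is one reason the acquisition issues discussed above propagate into the rCBV maps.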

Citations: 0
Deep-learning for automated detection of MSU deposits on DECT: evaluating impact on efficiency and reader confidence.
Pub Date : 2024-02-19 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1330399
Shahriar Faghani, Soham Patel, Nicholas G Rhodes, Garret M Powell, Francis I Baffour, Mana Moassefi, Katrina N Glazebrook, Bradley J Erickson, Christin A Tiegs-Heiden

Introduction: Dual-energy CT (DECT) is a non-invasive way to determine the presence of monosodium urate (MSU) crystals in the workup of gout. Color-coding distinguishes MSU from calcium following material decomposition and post-processing. Manually identifying these foci (most commonly labeled green) is tedious, and an automated detection system could streamline the process. This study aims to evaluate the impact of a deep-learning (DL) algorithm, developed to detect green pixelation on DECT, on reader time, accuracy, and confidence.
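For intuition only, the snippet below shows the simplest possible rule-based way to flag green-coded pixels on an RGB rendering of a color-coded DECT slice; it is a hypothetical illustration of the task, not the deep-learning model evaluated in this study.

```python
# Hypothetical sketch: naive green-pixel mask on an RGB-rendered, color-coded
# DECT slice. This rule-based example only illustrates the detection task;
# the study itself evaluated a trained deep-learning model.
import numpy as np

def green_mask(rgb_slice: np.ndarray, margin: int = 40) -> np.ndarray:
    """Return a boolean mask where green exceeds both red and blue by `margin`."""
    r = rgb_slice[..., 0].astype(np.int16)
    g = rgb_slice[..., 1].astype(np.int16)
    b = rgb_slice[..., 2].astype(np.int16)
    return (g > r + margin) & (g > b + margin)

# Toy usage on a random 8-bit RGB array standing in for a color-coded slice.
rng = np.random.default_rng(0)
slice_rgb = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
mask = green_mask(slice_rgb)
print(f"Green-labeled pixels: {int(mask.sum())}")
```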

Methods: We collected a sample of positive and negative DECTs, reviewed twice (once with and once without the DL tool) with a 2-week washout period. An attending musculoskeletal radiologist and a fellow separately reviewed the cases, simulating clinical workflow. Metrics such as time taken, confidence in diagnosis, and the tool's helpfulness were recorded and statistically analyzed.

Results: We included thirty DECTs from different patients. The DL tool significantly reduced the reading time for the trainee radiologist (p = 0.02), but not for the attending radiologist (p = 0.15). Diagnostic confidence remained unchanged for both (p = 0.45). However, the DL model identified tiny MSU deposits that led to a change in diagnosis in two cases for the in-training radiologist and one case for the attending radiologist. In 3/3 of these cases, the diagnosis was correct when using DL.

Conclusions: The implementation of the developed DL model slightly reduced reading time for our less experienced reader and led to improved diagnostic accuracy. There was no statistically significant difference in diagnostic confidence when studies were interpreted without and with the DL model.

Citations: 0
Beyond images: an integrative multi-modal approach to chest x-ray report generation.
Pub Date : 2024-02-15 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1339612
Nurbanu Aksoy, Serge Sharoff, Selcuk Baser, Nishant Ravikumar, Alejandro F Frangi

Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images. Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists. In this paper, we present a novel multi-modal deep neural network framework for generating chest x-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes. We introduce a conditioned cross-multi-head attention module to fuse these heterogeneous data modalities, bridging the semantic gap between visual and textual data. Experiments demonstrate substantial improvements from using additional modalities compared to relying on images alone. Notably, our model achieves the highest reported performance on the ROUGE-L metric compared to relevant state-of-the-art models in the literature. Furthermore, we employed both human evaluation and clinical semantic similarity measurement alongside word-overlap metrics to improve the depth of quantitative analysis. A human evaluation, conducted by a board-certified radiologist, confirms the model's accuracy in identifying high-level findings; however, it also highlights that further improvement is needed to capture nuanced details and clinical context.
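The fusion idea can be pictured with a minimal cross-attention module in PyTorch, sketched below; the module name, dimensions, and token counts are illustrative assumptions and do not reproduce the paper's conditioned cross-multi-head attention design.

```python
# Minimal sketch of cross-attention fusion between image tokens and non-image
# context tokens (structured data / clinical notes). Names and dimensions are
# illustrative assumptions, not the architecture described in the paper.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tokens: torch.Tensor, context_tokens: torch.Tensor):
        # Image tokens query the non-image context; a residual keeps the visual signal.
        fused, _ = self.attn(query=image_tokens, key=context_tokens, value=context_tokens)
        return self.norm(image_tokens + fused)

# Toy usage: 49 image patch tokens fused with 16 tokens from vitals/symptoms/notes.
fusion = CrossModalFusion()
image_tokens = torch.randn(2, 49, 256)
context_tokens = torch.randn(2, 16, 256)
print(fusion(image_tokens, context_tokens).shape)  # torch.Size([2, 49, 256])
```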

Citations: 0
In-vitro gadolinium retro-microdialysis in agarose gel-a human brain phantom study.
Pub Date : 2024-01-31 eCollection Date: 2024-01-01 DOI: 10.3389/fradi.2024.1085834
Chisomo Zimphango, Marius O Mada, Stephen J Sawiak, Susan Giorgi-Coll, T Adrian Carpenter, Peter J Hutchinson, Keri L H Carpenter, Matthew G Stovell

Rationale and objectives: Cerebral microdialysis is a technique that enables monitoring of the neurochemistry of patients with significant acquired brain injury, such as traumatic brain injury (TBI) and subarachnoid haemorrhage (SAH). Cerebral microdialysis can also be used to characterise the neuro-pharmacokinetics of small-molecule study substrates using retrodialysis/retromicrodialysis. However, challenges remain: (i) there is no simple, stable, and inexpensive brain tissue model for the study of drug neuropharmacology; and (ii) it is unclear how far small study molecules administered via retrodialysis diffuse within the human brain.

Materials and methods: Here, we studied the radial diffusion distance of small-molecule gadolinium-DTPA from microdialysis catheters in a newly developed, simple, stable, and inexpensive brain tissue model as a precursor to in-vivo studies. Brain tissue models consisting of 0.65% weight/volume agarose gel in two kinds of buffers were created. The distribution of the paramagnetic contrast agent gadolinium-DTPA (Gd-DTPA) perfused from microdialysis catheters was characterized using magnetic resonance imaging (MRI) as a surrogate for other small-molecule study substrates.
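As a rough illustration of how a radial diffusion distance can be read off a single slice, the hypothetical sketch below thresholds a synthetic enhancement map and takes the farthest above-threshold voxel from the catheter position; the voxel size, threshold, and Gaussian spread are assumptions, not the study's processing pipeline.

```python
# Hypothetical sketch: radial diffusion distance from a 2D enhancement map.
# Voxel size, threshold, catheter position, and the synthetic Gaussian spread
# are assumptions for illustration, not the study's actual processing.
import numpy as np

voxel_mm = 0.5                          # in-plane voxel size (assumed)
catheter = (64.0, 64.0)                 # catheter tip position in voxel coords (assumed)

# Synthetic Gaussian spread standing in for a Gd-DTPA enhancement map.
yy, xx = np.mgrid[0:128, 0:128]
dist_mm = np.hypot(yy - catheter[0], xx - catheter[1]) * voxel_mm
enhancement = np.exp(-dist_mm ** 2 / (2 * 7.0 ** 2))

threshold = 0.05                        # fraction of peak counted as "reached" (assumed)
reached = enhancement >= threshold * enhancement.max()

# Radial diffusion distance: farthest above-threshold voxel from the catheter tip.
print(f"Estimated radial diffusion distance: {dist_mm[reached].max():.1f} mm")
```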

Results: We found the mean radial diffusion distance of Gd-DTPA to be 18.5 mm after 24 h (p < 0.0001).

Conclusion: Our brain tissue model provides avenues for further tests and research into infusion studies using cerebral microdialysis, and consequently effective focal drug delivery for patients with TBI and other brain disorders.

Citations: 0
Endovascular navigation in patients: vessel-based registration of electromagnetic tracking to preoperative images
Pub Date : 2024-01-23 DOI: 10.3389/fradi.2024.1320535
Erik Nypan, Geir Arne Tangen, Reidar Brekken, Petter Aadahl, F. Manstad-Hulaas
Electromagnetic tracking of instruments combined with preoperative images can supplement fluoroscopy for guiding endovascular aortic repair (EVAR). The aim of this study was to evaluate the in-vivo accuracy of a vessel-based registration algorithm for matching electromagnetically tracked positions of an endovascular instrument to preoperative computed tomography angiography. Five patients undergoing elective EVAR were included, and a clinically available semi-automatic 3D–3D registration algorithm, based on similarity measures computed over the entire image, was used for reference. Accuracy was reported as target registration error (TRE), evaluated at manually selected anatomic landmarks on bony structures close to the volume of interest. The median TRE was 8.2 mm (range: 7.1 mm to 16.1 mm) for the vessel-based registration algorithm, compared to 2.2 mm (range: 1.8 mm to 3.7 mm) for the reference algorithm. This illustrates that registration based on intraoperative electromagnetic tracking is feasible, but the accuracy must be improved before clinical use.
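Target registration error itself is straightforward to compute once a registration transform and corresponding landmark pairs exist; the sketch below uses made-up landmarks and a made-up rigid transform purely to show the calculation.

```python
# Hypothetical sketch: target registration error (TRE) at anatomic landmarks.
# The landmark coordinates and the 4x4 transform are made up for illustration.
import numpy as np

# Landmarks in the electromagnetic tracking coordinate system (mm).
tracked = np.array([[12.0, 45.0, 103.0],
                    [30.5, 60.2,  98.7],
                    [25.1, 52.3, 110.4]])

# Corresponding landmarks identified in the preoperative CTA image (mm).
image = np.array([[14.2, 47.5, 101.0],
                  [33.0, 63.1,  96.2],
                  [27.9, 55.0, 108.1]])

# Registration result: homogeneous transform from tracking space to image space
# (a pure translation here, for simplicity).
T = np.eye(4)
T[:3, 3] = [2.0, 2.5, -2.3]

# Map tracked landmarks into image space and measure per-landmark Euclidean error.
homog = np.c_[tracked, np.ones(len(tracked))]
mapped = (T @ homog.T).T[:, :3]
tre = np.linalg.norm(mapped - image, axis=1)
print(f"Per-landmark TRE (mm): {np.round(tre, 2)}, median = {np.median(tre):.2f}")
```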
Citations: 0
High resolution and contrast 7 tesla MR brain imaging of the neonate
Pub Date : 2024-01-18 DOI: 10.3389/fradi.2023.1327075
Pip Bridgen, Raphaël Tomi-Tricot, Alena Uus, Daniel Cromb, Megan Quirke, J. Almalbis, Beya Bonse, Miguel De la Fuente Botella, Alessandra Maggioni, Pierluigi Di Cio, Pauline A. Cawley, Chiara Casella, A. S. Dokumacı, Alice R. Thomson, Jucha Willers Moore, Devi Bridglal, Joao Saravia, Thomas Finck, Anthony N. Price, Elisabeth Pickles, Lucilio Cordero-Grande, Alexia Egloff, J. O’Muircheartaigh, S. Counsell, Sharon Giles, M. Deprez, Enrico De Vita, M. Rutherford, A. D. Edwards, J. Hajnal, Shaihan J. Malik, T. Arichi
Ultra-high field MR imaging offers marked gains in signal-to-noise ratio, spatial resolution, and contrast, which translate to improved pathological and anatomical sensitivity. These benefits are particularly relevant for the neonatal brain, which is rapidly developing and sensitive to injury. However, experience of imaging neonates at 7T has been limited due to regulatory, safety, and practical considerations. We aimed to establish a program for safely acquiring high resolution and contrast brain images from neonates on a 7T system. Images were acquired from 35 neonates on 44 occasions (median age 39+6 postmenstrual weeks, range 33+4 to 52+6; median body weight 2.93 kg, range 1.57 to 5.3 kg) over a median time of 49 min 30 s. Peripheral body temperature and physiological measures were recorded throughout scanning. Acquired sequences included T2 weighted imaging (TSE), actual flip angle imaging (AFI), functional MRI (BOLD EPI), susceptibility weighted imaging (SWI), and MR spectroscopy (STEAM). There was no significant difference in temperature before and after scanning (p = 0.76), and image quality compared favorably to state-of-the-art 3T acquisitions. Anatomical imaging demonstrated excellent sensitivity to structures that are typically hard to visualize at lower field strengths, including the hippocampus, cerebellum, and vasculature. Images were also acquired with contrast mechanisms that are enhanced at ultra-high field, including susceptibility weighted imaging, functional MRI, and MR spectroscopy. We demonstrate the safety and feasibility of imaging vulnerable neonates at ultra-high field and highlight the untapped potential for providing important new insights into brain development and pathological processes during this critical phase of early life.
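The temperature comparison reported above is a standard paired test; the sketch below shows the calculation with made-up pre- and post-scan readings (the values are not study data).

```python
# Hypothetical sketch: paired comparison of peripheral temperature before and
# after scanning. The temperature readings below are made up, not study data.
import numpy as np
from scipy import stats

temp_before = np.array([36.8, 37.0, 36.6, 36.9, 37.1, 36.7, 36.8, 37.0])
temp_after  = np.array([36.9, 37.0, 36.7, 36.8, 37.1, 36.8, 36.8, 36.9])

t_stat, p_value = stats.ttest_rel(temp_before, temp_after)
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.2f}")
```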
Citations: 0
Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors
Pub Date : 2024-01-18 DOI: 10.3389/fradi.2023.1336902
P. Raut, G. Baldini, M. Schöneck, L. Caldeira
Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automated using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained using heterogeneous volumetric imaging data such as MRI, CT, and PET, among others. However, DL-based methods are usually only applicable when the desired number of inputs is present; in the absence of one of the required inputs, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients from the BraTS2021 dataset, comprising T1w, T2w, T1CE, and FLAIR sequences with corresponding multi-label segmentations, were used. This dataset was used to train the Pix2PixNIfTI model to generate synthetic MRI images for all image contrasts. The segmentation model, namely DeepMedic, was trained in a five-fold cross-validation manner for brain tumor segmentation and tested using the original inputs as the gold standard. The trained segmentation models were later applied to synthetic images replacing the missing input, in combination with the other original images, to assess the efficacy of the generated images for multi-class segmentation. For multi-class segmentation using synthetic data or fewer inputs, Dice scores were significantly reduced but remained in a similar range for the whole tumor compared with segmentation of the original images (e.g., mean Dice for prediction with synthetic T2w: NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). Standard paired t-tests with multiple-comparison correction were performed to assess differences between all regions (p < 0.05). The study concludes that the use of Pix2PixNIfTI allows us to segment brain tumors when one input image is missing.
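The per-region Dice scores used in the evaluation reduce to a few lines of numpy; the sketch below uses random label maps as stand-ins for BraTS segmentations and is not the authors' evaluation code.

```python
# Hypothetical sketch: per-class Dice between a predicted and a reference
# multi-label segmentation. Random label maps stand in for BraTS data.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, label: int) -> float:
    """Dice = 2|A & B| / (|A| + |B|) for one label; 1.0 if both sets are empty."""
    a, b = pred == label, ref == label
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

rng = np.random.default_rng(42)
reference = rng.integers(0, 4, size=(64, 64, 64))            # labels 0-3
prediction = reference.copy()
flip = rng.random(reference.shape) < 0.05                     # perturb 5% of voxels
prediction[flip] = rng.integers(0, 4, size=int(flip.sum()))

for label, name in [(1, "NC"), (2, "ED"), (3, "CET")]:
    print(f"Dice {name}: {dice(prediction, reference, label):.3f}")
```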
Citations: 0
Editorial: Recent advances in multimodal artificial intelligence for disease diagnosis, prognosis, and prevention
Pub Date : 2024-01-10 DOI: 10.3389/fradi.2023.1349830
Hazrat Ali, Zubair Shah, Tanvir Alam, Priyantha Wijayatunga, Eyad Elyan
{"title":"Editorial: Recent advances in multimodal artificial intelligence for disease diagnosis, prognosis, and prevention","authors":"Hazrat Ali, Zubair Shah, Tanvir Alam, Priyantha Wijayatunga, Eyad Elyan","doi":"10.3389/fradi.2023.1349830","DOIUrl":"https://doi.org/10.3389/fradi.2023.1349830","url":null,"abstract":"","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139440629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0