
Frontiers in radiology: latest articles

High resolution and contrast 7 tesla MR brain imaging of the neonate
Pub Date : 2024-01-18 DOI: 10.3389/fradi.2023.1327075
Pip Bridgen, Raphaël Tomi-Tricot, Alena Uus, Daniel Cromb, Megan Quirke, J. Almalbis, Beya Bonse, Miguel De la Fuente Botella, Alessandra Maggioni, Pierluigi Di Cio, Pauline A. Cawley, Chiara Casella, A. S. Dokumacı, Alice R. Thomson, Jucha Willers Moore, Devi Bridglal, Joao Saravia, Thomas Finck, Anthony N. Price, Elisabeth Pickles, Lucilio Cordero-Grande, Alexia Egloff, J. O’Muircheartaigh, S. Counsell, Sharon Giles, M. Deprez, Enrico De Vita, M. Rutherford, A. D. Edwards, J. Hajnal, Shaihan J. Malik, T. Arichi
Ultra-high field MR imaging offers marked gains in signal-to-noise ratio, spatial resolution, and contrast, which translate to improved pathological and anatomical sensitivity. These benefits are particularly relevant for the neonatal brain, which is rapidly developing and sensitive to injury. However, experience of imaging neonates at 7T has been limited due to regulatory, safety, and practical considerations. We aimed to establish a program for safely acquiring high resolution and contrast brain images from neonates on a 7T system. Images were acquired from 35 neonates on 44 occasions (median age 39 + 6 postmenstrual weeks, range 33 + 4 to 52 + 6; median body weight 2.93 kg, range 1.57 to 5.3 kg) over a median time of 49 min 30 s. Peripheral body temperature and physiological measures were recorded throughout scanning. Acquired sequences included T2-weighted imaging (TSE), actual flip angle imaging (AFI), functional MRI (BOLD EPI), susceptibility weighted imaging (SWI), and MR spectroscopy (STEAM). There was no significant difference in temperature before and after scanning (p = 0.76), and image quality compared favorably to state-of-the-art 3T acquisitions. Anatomical imaging demonstrated excellent sensitivity to structures that are typically hard to visualize at lower field strengths, including the hippocampus, cerebellum, and vasculature. Images were also acquired with contrast mechanisms that are enhanced at ultra-high field, including susceptibility weighted imaging, functional MRI, and MR spectroscopy. We demonstrate the safety and feasibility of imaging vulnerable neonates at ultra-high field and highlight the untapped potential for providing important new insights into brain development and pathological processes during this critical phase of early life.
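The safety claim above rests on a simple paired comparison of pre- and post-scan temperature (p = 0.76). As a hedged illustration only, with invented temperature values and a paired t-test assumed (the abstract does not name the test used), such a comparison could be run as follows:

```python
# Illustrative paired comparison of pre- vs. post-scan temperature.
# Values are invented for demonstration; the paper reports p = 0.76.
import numpy as np
from scipy import stats

pre_scan = np.array([36.8, 36.9, 37.0, 36.7, 36.9, 37.1])   # hypothetical readings (deg C)
post_scan = np.array([36.9, 36.8, 37.0, 36.8, 36.9, 37.0])  # hypothetical readings (deg C)

t_stat, p_value = stats.ttest_rel(pre_scan, post_scan)      # paired t-test
print(f"paired t = {t_stat:.2f}, p = {p_value:.2f}")
```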
Citations: 0
Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors
Pub Date : 2024-01-18 DOI: 10.3389/fradi.2023.1336902
P. Raut, G. Baldini, M. Schöneck, L. Caldeira
Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automated using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained using heterogeneous volumetric imaging data such as MRI, CT, and PET, among others. However, DL-based methods are usually only applicable when the desired number of inputs is present; in the absence of one of the required inputs, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients of the BraTS2021 dataset, comprising T1w, T2w, T1CE, and FLAIR sequences with the corresponding multi-label segmentations, were used. This dataset was used to train the Pix2PixNIfTI model to generate synthetic MRI images for all the image contrasts. The segmentation model, DeepMedic, was trained in a five-fold cross-validation manner for brain tumor segmentation and tested using the original inputs as the gold standard. The trained segmentation models were then applied to synthetic images replacing the missing input, in combination with the remaining original images, to assess the efficacy of the generated images in achieving multi-class segmentation. For multi-class segmentation using synthetic data or fewer inputs, the Dice scores were significantly reduced but remained in a similar range for the whole tumor compared with the original image segmentation (e.g., mean Dice of synthetic T2w prediction: NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). Standard paired t-tests with multiple-comparison correction were performed to assess the differences between all regions (p < 0.05). The study concludes that the use of Pix2PixNIfTI allows brain tumors to be segmented when one input image is missing.
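For context, a minimal sketch of the per-region Dice overlap used to score such segmentations is shown below. This is an assumed generic implementation, not the study's evaluation code, and the label values (1, 2, 3 for tumor sub-regions) are illustrative.

```python
# Per-label Dice overlap between a predicted and a reference label map.
# Generic sketch: Dice = 2*|A ∩ B| / (|A| + |B|) for each tumor sub-region.
import numpy as np

def dice_per_label(pred: np.ndarray, ref: np.ndarray, labels):
    scores = {}
    for lab in labels:
        a = (pred == lab)
        b = (ref == lab)
        denom = a.sum() + b.sum()
        scores[lab] = 2.0 * np.logical_and(a, b).sum() / denom if denom else float("nan")
    return scores

# Toy 3D volumes standing in for multi-label tumor segmentations
rng = np.random.default_rng(0)
ref = rng.integers(0, 4, size=(8, 8, 8))   # 0 = background, 1-3 = sub-regions
pred = ref.copy()
pred[0, 0, :] = 0                          # simulate a small disagreement
print(dice_per_label(pred, ref, labels=[1, 2, 3]))
```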
Citations: 0
Editorial: Recent advances in multimodal artificial intelligence for disease diagnosis, prognosis, and prevention
Pub Date : 2024-01-10 DOI: 10.3389/fradi.2023.1349830
Hazrat Ali, Zubair Shah, Tanvir Alam, Priyantha Wijayatunga, Eyad Elyan
{"title":"Editorial: Recent advances in multimodal artificial intelligence for disease diagnosis, prognosis, and prevention","authors":"Hazrat Ali, Zubair Shah, Tanvir Alam, Priyantha Wijayatunga, Eyad Elyan","doi":"10.3389/fradi.2023.1349830","DOIUrl":"https://doi.org/10.3389/fradi.2023.1349830","url":null,"abstract":"","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"71 23","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139440629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
RoMIA: a framework for creating Robust Medical Imaging AI models for chest radiographs.
Pub Date : 2024-01-08 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1274273
Aditi Anand, Sarada Krithivasan, Kaushik Roy

Artificial Intelligence (AI) methods, particularly Deep Neural Networks (DNNs), have shown great promise in a range of medical imaging tasks. However, the susceptibility of DNNs to producing erroneous outputs under the presence of input noise and variations is of great concern and one of the largest challenges to their adoption in medical settings. Towards addressing this challenge, we explore the robustness of DNNs trained for chest radiograph classification under a range of perturbations reflective of clinical settings. We propose RoMIA, a framework for the creation of Robust Medical Imaging AI models. RoMIA adds three key steps to the model training and deployment flow: (i) Noise-added training, wherein a part of the training data is synthetically transformed to represent common noise sources, (ii) Fine-tuning with input mixing, in which the model is refined with inputs formed by mixing data from the original training set with a small number of images from a different source, and (iii) DCT-based denoising, which removes a fraction of high-frequency components of each image before applying the model to classify it. We applied RoMIA to create six different robust models for classifying chest radiographs using the CheXpert dataset. We evaluated the models on the CheXphoto dataset, which consists of naturally and synthetically perturbed images intended to evaluate robustness. Models produced by RoMIA show 3%-5% improvement in robust accuracy, which corresponds to an average reduction of 22.6% in misclassifications. These results suggest that RoMIA can be a useful step towards enabling the adoption of AI models in medical imaging applications.
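Of the three RoMIA steps, DCT-based denoising is the most self-contained. The sketch below shows the general idea (discard the highest-frequency DCT coefficients and invert); it is not the paper's implementation, and the kept fraction and input image are arbitrary assumptions.

```python
# Sketch of DCT-based high-frequency suppression for a 2D image.
# Not RoMIA's code; keep_fraction is an assumed parameter.
import numpy as np
from scipy.fft import dctn, idctn

def dct_lowpass(image: np.ndarray, keep_fraction: float = 0.75) -> np.ndarray:
    coeffs = dctn(image, norm="ortho")          # forward 2D DCT
    mask = np.zeros_like(coeffs)
    rows = int(coeffs.shape[0] * keep_fraction)
    cols = int(coeffs.shape[1] * keep_fraction)
    mask[:rows, :cols] = 1.0                    # keep only the low-frequency block
    return idctn(coeffs * mask, norm="ortho")   # inverse DCT of the filtered coefficients

noisy = np.random.default_rng(1).normal(size=(256, 256))  # stand-in for a noisy radiograph
denoised = dct_lowpass(noisy, keep_fraction=0.75)
```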

Citations: 0
Imaging spectrum of amyloid-related imaging abnormalities associated with aducanumab immunotherapy
Pub Date : 2024-01-05 DOI: 10.3389/fradi.2023.1305390
H. Sotoudeh, Mohammadreza Alizadeh, Ramin Shahidi, Parnian Shobeiri, Z. Saadatpour, C. A. Wheeler, Marissa Natelson Love, Manoj Tanwar
Alzheimer's Disease (AD) is a leading cause of morbidity. Management of AD has traditionally been aimed at symptom relief rather than disease modification. Recently, AD research has begun to shift focus towards disease-modifying therapies that can alter the progression of AD. In this context, a class of immunotherapy agents known as monoclonal antibodies target diverse cerebral amyloid-beta (Aβ) epitopes to inhibit disease progression. Aducanumab was authorized by the US Food and Drug Administration (FDA) to treat AD on June 7, 2021. Aducanumab has shown promising clinical and biomarker efficacy but is associated with amyloid-related imaging abnormalities (ARIA). Neuroradiologists play a critical role in diagnosing ARIA, necessitating familiarity with this condition. This pictorial review will appraise the radiologic presentation of ARIA in patients on aducanumab.
Citations: 0
Empowering breast cancer diagnosis and radiology practice: advances in artificial intelligence for contrast-enhanced mammography
Pub Date : 2024-01-05 DOI: 10.3389/fradi.2023.1326831
Ketki Kinkar, Brandon K. K. Fields, Mary W. Yamashita, Bino A. Varghese
Artificial intelligence (AI) applications in breast imaging span a wide range of tasks, including decision support, risk assessment, patient management, quality assessment, treatment response assessment, and image enhancement. However, their integration into the clinical workflow has been slow due to the lack of consensus on data quality, of benchmarked robust implementations, and of consensus-based guidelines to ensure standardization and generalization. Contrast-enhanced mammography (CEM) has improved sensitivity and specificity compared to the current standards of breast cancer diagnostic imaging, i.e., mammography (MG) and/or conventional ultrasound (US), with accuracy comparable to MRI (the current diagnostic imaging benchmark) but at a much lower cost and higher throughput. This makes CEM an excellent tool for widespread breast lesion characterization for all women, including underserved and minority women. Underlining the critical need for early detection and accurate diagnosis of breast cancer, this review examines the limitations of conventional approaches and shows how AI can help overcome them. Methodological approaches such as image processing, feature extraction, quantitative analysis, lesion classification, lesion segmentation, integration with clinical data, early detection, and screening support have been carefully analysed in recent studies addressing breast cancer detection and diagnosis. Recent guidelines described by the Checklist for Artificial Intelligence in Medical Imaging (CLAIM), which establish a robust framework for rigorous evaluation, have informed the current review criteria.
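As a loose, hypothetical illustration of the "feature extraction and quantitative analysis" step mentioned above (not drawn from any cited study), first-order statistics from a segmented lesion ROI might be computed like this; real pipelines typically use dedicated radiomics tooling.

```python
# Hypothetical first-order feature extraction from a lesion ROI.
# Synthetic image and mask used as stand-ins for CEM data.
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    roi = image[mask > 0]
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "p10": float(np.percentile(roi, 10)),
        "p90": float(np.percentile(roi, 90)),
    }

rng = np.random.default_rng(2)
img = rng.normal(100.0, 15.0, size=(128, 128))          # stand-in for a recombined CEM image
msk = np.zeros((128, 128)); msk[40:80, 40:80] = 1.0     # hypothetical lesion mask
print(first_order_features(img, msk))
```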
Citations: 0
Editorial: Rising stars in neuroradiology: 2022
Pub Date : 2024-01-05 DOI: 10.3389/fradi.2023.1349600
Thomas C. Booth
{"title":"Editorial: Rising stars in neuroradiology: 2022","authors":"Thomas C. Booth","doi":"10.3389/fradi.2023.1349600","DOIUrl":"https://doi.org/10.3389/fradi.2023.1349600","url":null,"abstract":"","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"13 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139384013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Editorial: Radiomics and radiogenomics in genitourinary oncology: artificial intelligence and deep learning applications
Pub Date : 2023-12-18 DOI: 10.3389/fradi.2023.1325594
Alessandro Stefano, Elena Bertelli, A. Comelli, Marco Gatti, A. Stanzione
{"title":"Editorial: Radiomics and radiogenomics in genitourinary oncology: artificial intelligence and deep learning applications","authors":"Alessandro Stefano, Elena Bertelli, A. Comelli, Marco Gatti, A. Stanzione","doi":"10.3389/fradi.2023.1325594","DOIUrl":"https://doi.org/10.3389/fradi.2023.1325594","url":null,"abstract":"","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"10 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139173884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Standardized brain tumor imaging protocols for clinical trials: current recommendations and tips for integration
Pub Date : 2023-12-13 DOI: 10.3389/fradi.2023.1267615
F. Sanvito, Timothy J. Kaufmann, T. Cloughesy, Patrick Y. Wen, B. Ellingson
Standardized MRI acquisition protocols are crucial for reducing the measurement and interpretation variability associated with response assessment in brain tumor clinical trials. The main challenge is that standardized protocols should ensure high image quality while maximizing the number of institutions meeting the acquisition requirements. In recent years, extensive effort has been made by consensus groups to propose different “ideal” and “minimum requirements” brain tumor imaging protocols (BTIPs) for gliomas, brain metastases (BM), and primary central nervous system lymphomas (PCNSL). In clinical practice, BTIPs for clinical trials can be easily integrated with additional MRI sequences that may be desired for clinical patient management at individual sites. In this review, we summarize the general concepts behind the choice and timing of sequences included in the currently recommended BTIPs, provide a comparative overview, and discuss tips and caveats for integrating additional clinical or research sequences while preserving the recommended BTIPs. Finally, we also reflect on potential future directions for brain tumor imaging in clinical trials.
Citations: 0
Feasibility of four-dimensional similarity filter for radiation dose reduction in dynamic myocardial computed tomography perfusion imaging.
Pub Date : 2023-12-01 eCollection Date: 2023-01-01 DOI: 10.3389/fradi.2023.1214521
Yuta Yamamoto, Yuki Tanabe, Akira Kurata, Shuhei Yamamoto, Tomoyuki Kido, Teruyoshi Uetani, Shuntaro Ikeda, Shota Nakano, Osamu Yamaguchi, Teruhito Kido

Rationale and objectives: We aimed to evaluate the impact of four-dimensional noise reduction filtering using a four-dimensional similarity filter (4D-SF) on radiation dose reduction in dynamic myocardial computed tomography perfusion (CTP).

Materials and methods: Forty-three patients who underwent dynamic myocardial CTP using 320-row computed tomography (CT) were included in the study. The original images were reconstructed using iterative reconstruction (IR). Three different CTP datasets with simulated noise, corresponding to 25%, 50%, and 75% reduction of the original dose (300 mA), were reconstructed using a combination of IR and 4D-SF. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were assessed, and CT-derived myocardial blood flow (CT-MBF) was quantified. The results were compared between the original and simulated images with radiation dose reduction.

Results: The median SNR (first quartile-third quartile) at the original, 25%-, 50%-, and 75%-dose reduced-simulated images with 4D-SF was 8.3 (6.5-10.2), 16.5 (11.9-21.7), 15.6 (11.0-20.1), and 12.8 (8.8-18.1) and that of CNR was 4.4 (3.2-5.8), 6.7 (4.6-10.3), 6.6 (4.3-10.1), and 5.5 (3.5-9.1), respectively. All the dose-reduced-simulated CTPs with 4D-SF had significantly higher image quality scores in SNR and CNR than the original ones (25%-, 50%-, and 75%-dose reduced vs. original images, p < 0.05, in each). The CT-MBF in 75%-dose reduced-simulated CTP was significantly lower than 25%-, 50%- dose-reduced-simulated, and original CTPs (vs. 75%-dose reduced-simulated images, p < 0.05, in each).

Conclusion: 4D-SF has the potential to reduce the radiation dose associated with dynamic myocardial CTP imaging by half, without impairing the robustness of MBF quantification.
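The SNR and CNR figures in the results above follow ROI-based definitions; a minimal sketch, assuming the conventional formulas rather than the paper's exact ROI placement, is:

```python
# ROI-based SNR and CNR, assuming the conventional definitions:
# SNR = mean(signal ROI) / SD(reference ROI); CNR = mean difference / SD(reference ROI).
import numpy as np

def snr_cnr(signal_roi: np.ndarray, reference_roi: np.ndarray):
    noise_sd = reference_roi.std()
    snr = signal_roi.mean() / noise_sd
    cnr = (signal_roi.mean() - reference_roi.mean()) / noise_sd
    return snr, cnr

rng = np.random.default_rng(3)
myocardium = rng.normal(120.0, 10.0, size=500)  # hypothetical HU samples, myocardial ROI
reference = rng.normal(40.0, 12.0, size=500)    # hypothetical HU samples, reference ROI
print("SNR = %.1f, CNR = %.1f" % snr_cnr(myocardium, reference))
```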

Citations: 0