High resolution and contrast 7 tesla MR brain imaging of the neonate
Pub Date: 2024-01-18 | DOI: 10.3389/fradi.2023.1327075
Pip Bridgen, Raphaël Tomi-Tricot, Alena Uus, Daniel Cromb, Megan Quirke, J. Almalbis, Beya Bonse, Miguel De la Fuente Botella, Alessandra Maggioni, Pierluigi Di Cio, Pauline A. Cawley, Chiara Casella, A. S. Dokumacı, Alice R. Thomson, Jucha Willers Moore, Devi Bridglal, Joao Saravia, Thomas Finck, Anthony N. Price, Elisabeth Pickles, Lucilio Cordero-Grande, Alexia Egloff, J. O’Muircheartaigh, S. Counsell, Sharon Giles, M. Deprez, Enrico De Vita, M. Rutherford, A. D. Edwards, J. Hajnal, Shaihan J. Malik, T. Arichi
Ultra-high field MR imaging offers marked gains in signal-to-noise ratio, spatial resolution, and contrast which translate to improved pathological and anatomical sensitivity. These benefits are particularly relevant for the neonatal brain, which is rapidly developing and sensitive to injury. However, experience of imaging neonates at 7T has been limited due to regulatory, safety, and practical considerations. We aimed to establish a program for safely acquiring high resolution and contrast brain images from neonates on a 7T system. Images were acquired from 35 neonates on 44 occasions (median age 39 + 6 postmenstrual weeks, range 33 + 4 to 52 + 6; median body weight 2.93 kg, range 1.57 to 5.3 kg) over a median scan time of 49 min 30 s. Peripheral body temperature and physiological measures were recorded throughout scanning. Acquired sequences included T2-weighted turbo spin echo (TSE), actual flip angle imaging (AFI), functional MRI (BOLD EPI), susceptibility weighted imaging (SWI), and MR spectroscopy (STEAM). There was no significant difference between temperature before and after scanning (p = 0.76), and image quality compared favorably to state-of-the-art 3T acquisitions. Anatomical imaging demonstrated excellent sensitivity to structures which are typically hard to visualize at lower field strengths, including the hippocampus, cerebellum, and vasculature. Images were also acquired with contrast mechanisms which are enhanced at ultra-high field, including susceptibility weighted imaging, functional MRI, and MR spectroscopy. We demonstrate the safety and feasibility of imaging vulnerable neonates at ultra-high field and highlight the untapped potential for providing important new insights into brain development and pathological processes during this critical phase of early life.
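The temperature result above is a paired pre-/post-scan comparison. As a minimal sketch of how such a comparison can be run (the values are hypothetical placeholders, and the abstract does not state which paired test the authors used):

```python
# Sketch of a paired pre-/post-scan temperature comparison; the values
# below are illustrative placeholders, not study data, and the abstract
# does not specify which paired test the authors used.
import numpy as np
from scipy import stats

pre_scan = np.array([36.8, 36.9, 37.0, 36.7, 36.8])   # degrees C, hypothetical
post_scan = np.array([36.9, 36.8, 37.0, 36.8, 36.7])  # degrees C, hypothetical

# Paired t-test on per-infant differences; stats.wilcoxon would be the
# non-parametric alternative for small or non-normal samples.
t_stat, p_value = stats.ttest_rel(pre_scan, post_scan)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```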
{"title":"High resolution and contrast 7 tesla MR brain imaging of the neonate","authors":"Pip Bridgen, Raphaël Tomi-Tricot, Alena Uus, Daniel Cromb, Megan Quirke, J. Almalbis, Beya Bonse, Miguel De la Fuente Botella, Alessandra Maggioni, Pierluigi Di Cio, Pauline A. Cawley, Chiara Casella, A. S. Dokumacı, Alice R. Thomson, Jucha Willers Moore, Devi Bridglal, Joao Saravia, Thomas Finck, Anthony N. Price, Elisabeth Pickles, Lucilio Cordero-Grande, Alexia Egloff, J. O’Muircheartaigh, S. Counsell, Sharon Giles, M. Deprez, Enrico De Vita, M. Rutherford, A. D. Edwards, J. Hajnal, Shaihan J. Malik, T. Arichi","doi":"10.3389/fradi.2023.1327075","DOIUrl":"https://doi.org/10.3389/fradi.2023.1327075","url":null,"abstract":"Ultra-high field MR imaging offers marked gains in signal-to-noise ratio, spatial resolution, and contrast which translate to improved pathological and anatomical sensitivity. These benefits are particularly relevant for the neonatal brain which is rapidly developing and sensitive to injury. However, experience of imaging neonates at 7T has been limited due to regulatory, safety, and practical considerations. We aimed to establish a program for safely acquiring high resolution and contrast brain images from neonates on a 7T system.Images were acquired from 35 neonates on 44 occasions (median age 39 + 6 postmenstrual weeks, range 33 + 4 to 52 + 6; median body weight 2.93 kg, range 1.57 to 5.3 kg) over a median time of 49 mins 30 s. Peripheral body temperature and physiological measures were recorded throughout scanning. Acquired sequences included T2 weighted (TSE), Actual Flip angle Imaging (AFI), functional MRI (BOLD EPI), susceptibility weighted imaging (SWI), and MR spectroscopy (STEAM).There was no significant difference between temperature before and after scanning (p = 0.76) and image quality assessment compared favorably to state-of-the-art 3T acquisitions. Anatomical imaging demonstrated excellent sensitivity to structures which are typically hard to visualize at lower field strengths including the hippocampus, cerebellum, and vasculature. Images were also acquired with contrast mechanisms which are enhanced at ultra-high field including susceptibility weighted imaging, functional MRI, and MR spectroscopy.We demonstrate safety and feasibility of imaging vulnerable neonates at ultra-high field and highlight the untapped potential for providing important new insights into brain development and pathological processes during this critical phase of early life.","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"105 26","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139615858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors
Pub Date: 2024-01-18 | DOI: 10.3389/fradi.2023.1336902
P. Raut, G. Baldini, M. Schöneck, L. Caldeira
Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automatically achieved using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained using heterogeneous volumetric imaging data such as MRI, CT, and PET, among others. However, DL-based methods are usually only applicable when the desired number of inputs is present; in the absence of one of the required inputs, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients from the BraTS2021 dataset, comprising T1w, T2w, T1CE, and FLAIR sequences with corresponding multi-label segmentations, were used. This dataset was used to train the Pix2PixNIfTI model to generate synthetic MRI images for all the image contrasts. The segmentation model, namely DeepMedic, was trained in a five-fold cross-validation manner for brain tumor segmentation and tested using the original inputs as the gold standard. The trained segmentation models were later applied to synthetic images replacing the missing input, in combination with the other original images, to assess the efficacy of the generated images in achieving multi-class segmentation. For multi-class segmentation using synthetic data or fewer inputs, the Dice scores were significantly reduced but remained similar in range for the whole tumor when compared with the original image segmentation (e.g., mean Dice of synthetic T2w prediction: NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). Standard paired t-tests with multiple comparison correction were performed to assess the difference between all regions (p < 0.05). The study concludes that the use of Pix2PixNIfTI allows us to segment brain tumors when one input image is missing.
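The region-wise scores above are Dice overlap coefficients between predicted and reference label maps. A minimal sketch of the standard multi-class Dice computation, assuming BraTS-style label values (1 = NC, 2 = ED, 4 = CET; whole tumor = any nonzero label), which is an assumption rather than the paper's exact encoding:

```python
# Sketch of the Dice overlap metric used to score multi-class brain tumor
# segmentations; the label encoding below is an assumption, not from the paper.
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray, label) -> float:
    """Dice = 2|P intersect R| / (|P| + |R|) for one label value."""
    p = pred == label
    r = ref == label
    denom = p.sum() + r.sum()
    return 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0

# Hypothetical volumes with BraTS-style labels: 0 = background,
# 1 = necrotic core (NC), 2 = edema (ED), 4 = contrast-enhancing tumor (CET).
rng = np.random.default_rng(0)
pred = rng.choice([0, 1, 2, 4], size=(8, 8, 8))
ref = rng.choice([0, 1, 2, 4], size=(8, 8, 8))
for name, lab in [("NC", 1), ("ED", 2), ("CET", 4)]:
    print(name, round(dice_score(pred, ref, lab), 3))
print("WT", round(dice_score(pred > 0, ref > 0, True), 3))
```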
{"title":"Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors","authors":"P. Raut, G. Baldini, M. Schöneck, L. Caldeira","doi":"10.3389/fradi.2023.1336902","DOIUrl":"https://doi.org/10.3389/fradi.2023.1336902","url":null,"abstract":"Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automatically achieved using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained using heterogeneous volumetric imaging data such as MRI, CT, and PET, among others. However, DL-based methods are usually only applicable in the presence of the desired number of inputs. In the absence of one of the required inputs, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients of the BraTS2021 dataset comprising sequences such as T1w, T2w, T1CE, and FLAIR images equipped with respective multi-label segmentation were used. This dataset was used for training the Pix2PixNIfTI model for generating synthetic MRI images of all the image contrasts. The segmentation model, namely DeepMedic, was trained in a five-fold cross-validation manner for brain tumor segmentation and tested using the original inputs as the gold standard. The inference of trained segmentation models was later applied to synthetic images replacing missing input, in combination with other original images to identify the efficacy of generated images in achieving multi-class segmentation. For the multi-class segmentation using synthetic data or lesser inputs, the dice scores were observed to be significantly reduced but remained similar in range for the whole tumor when compared with evaluated original image segmentation (e.g. mean dice of synthetic T2w prediction NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). A standard paired t-tests with multiple comparison correction were performed to assess the difference between all regions (p < 0.05). The study concludes that the use of Pix2PixNIfTI allows us to segment brain tumors when one input image is missing.","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"110 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139615471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RoMIA: a framework for creating Robust Medical Imaging AI models for chest radiographs
Pub Date: 2024-01-08 (eCollection 2023-01-01) | DOI: 10.3389/fradi.2023.1274273
Aditi Anand, Sarada Krithivasan, Kaushik Roy
Artificial Intelligence (AI) methods, particularly Deep Neural Networks (DNNs), have shown great promise in a range of medical imaging tasks. However, the susceptibility of DNNs to producing erroneous outputs in the presence of input noise and variations is of great concern and one of the largest challenges to their adoption in medical settings. Towards addressing this challenge, we explore the robustness of DNNs trained for chest radiograph classification under a range of perturbations reflective of clinical settings. We propose RoMIA, a framework for the creation of Robust Medical Imaging AI models. RoMIA adds three key steps to the model training and deployment flow: (i) Noise-added training, wherein a part of the training data is synthetically transformed to represent common noise sources, (ii) Fine-tuning with input mixing, in which the model is refined with inputs formed by mixing data from the original training set with a small number of images from a different source, and (iii) DCT-based denoising, which removes a fraction of high-frequency components of each image before applying the model to classify it. We applied RoMIA to create six different robust models for classifying chest radiographs using the CheXpert dataset. We evaluated the models on the CheXphoto dataset, which consists of naturally and synthetically perturbed images intended to evaluate robustness. Models produced by RoMIA show 3%-5% improvement in robust accuracy, which corresponds to an average reduction of 22.6% in misclassifications. These results suggest that RoMIA can be a useful step towards enabling the adoption of AI models in medical imaging applications.
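Of the three steps, the DCT-based denoising stage is the most self-contained. A minimal sketch of the idea, retaining only low-frequency DCT coefficients before classification (the cutoff fraction and function names are illustrative assumptions, not the RoMIA implementation):

```python
# Sketch of DCT-based high-frequency suppression for a 2D radiograph;
# the keep_fraction value is illustrative, not the one used by RoMIA.
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(image: np.ndarray, keep_fraction: float = 0.5) -> np.ndarray:
    """Zero all DCT coefficients beyond keep_fraction of each axis."""
    coeffs = dctn(image, norm="ortho")
    h, w = coeffs.shape
    mask = np.zeros_like(coeffs)
    mask[: int(h * keep_fraction), : int(w * keep_fraction)] = 1.0
    return idctn(coeffs * mask, norm="ortho")

# Usage on a hypothetical chest radiograph array:
xray = np.random.rand(320, 320).astype(np.float32)
denoised = dct_denoise(xray, keep_fraction=0.5)
```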
{"title":"RoMIA: a framework for creating Robust Medical Imaging AI models for chest radiographs.","authors":"Aditi Anand, Sarada Krithivasan, Kaushik Roy","doi":"10.3389/fradi.2023.1274273","DOIUrl":"10.3389/fradi.2023.1274273","url":null,"abstract":"<p><p>Artificial Intelligence (AI) methods, particularly Deep Neural Networks (DNNs), have shown great promise in a range of medical imaging tasks. However, the susceptibility of DNNs to producing erroneous outputs under the presence of input noise and variations is of great concern and one of the largest challenges to their adoption in medical settings. Towards addressing this challenge, we explore the robustness of DNNs trained for chest radiograph classification under a range of perturbations reflective of clinical settings. We propose RoMIA, a framework for the creation of Robust Medical Imaging AI models. RoMIA adds three key steps to the model training and deployment flow: (i) Noise-added training, wherein a part of the training data is synthetically transformed to represent common noise sources, (ii) Fine-tuning with input mixing, in which the model is refined with inputs formed by mixing data from the original training set with a small number of images from a different source, and (iii) DCT-based denoising, which removes a fraction of high-frequency components of each image before applying the model to classify it. We applied RoMIA to create six different robust models for classifying chest radiographs using the CheXpert dataset. We evaluated the models on the CheXphoto dataset, which consists of naturally and synthetically perturbed images intended to evaluate robustness. Models produced by RoMIA show 3%-5% improvement in robust accuracy, which corresponds to an average reduction of 22.6% in misclassifications. These results suggest that RoMIA can be a useful step towards enabling the adoption of AI models in medical imaging applications.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1274273"},"PeriodicalIF":0.0,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10800823/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139522371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Imaging spectrum of amyloid-related imaging abnormalities associated with aducanumab immunotherapy
Pub Date: 2024-01-05 | DOI: 10.3389/fradi.2023.1305390
H. Sotoudeh, Mohammadreza Alizadeh, Ramin Shahidi, Parnian Shobeiri, Z. Saadatpour, C. A. Wheeler, Marissa Natelson Love, Manoj Tanwar
Alzheimer's Disease (AD) is a leading cause of morbidity. Management of AD has traditionally been aimed at symptom relief rather than disease modification. Recently, AD research has begun to shift focus towards disease-modifying therapies that can alter the progression of AD. In this context, a class of immunotherapy agents known as monoclonal antibodies target diverse cerebral amyloid-beta (Aβ) epitopes to inhibit disease progression. Aducanumab was authorized by the US Food and Drug Administration (FDA) to treat AD on June 7, 2021. Aducanumab has shown promising clinical and biomarker efficacy but is associated with amyloid-related imaging abnormalities (ARIA). Neuroradiologists play a critical role in diagnosing ARIA, necessitating familiarity with this condition. This pictorial review will appraise the radiologic presentation of ARIA in patients on aducanumab.
{"title":"Imaging spectrum of amyloid-related imaging abnormalities associated with aducanumab immunotherapy","authors":"H. Sotoudeh, Mohammadreza Alizadeh, Ramin Shahidi, Parnian Shobeiri, Z. Saadatpour, C. A. Wheeler, Marissa Natelson Love, Manoj Tanwar","doi":"10.3389/fradi.2023.1305390","DOIUrl":"https://doi.org/10.3389/fradi.2023.1305390","url":null,"abstract":"Alzheimer's Disease (AD) is a leading cause of morbidity. Management of AD has traditionally been aimed at symptom relief rather than disease modification. Recently, AD research has begun to shift focus towards disease-modifying therapies that can alter the progression of AD. In this context, a class of immunotherapy agents known as monoclonal antibodies target diverse cerebral amyloid-beta (Aβ) epitopes to inhibit disease progression. Aducanumab was authorized by the US Food and Drug Administration (FDA) to treat AD on June 7, 2021. Aducanumab has shown promising clinical and biomarker efficacy but is associated with amyloid-related imaging abnormalities (ARIA). Neuroradiologists play a critical role in diagnosing ARIA, necessitating familiarity with this condition. This pictorial review will appraise the radiologic presentation of ARIA in patients on aducanumab.","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"101 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139383592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empowering breast cancer diagnosis and radiology practice: advances in artificial intelligence for contrast-enhanced mammography
Pub Date: 2024-01-05 | DOI: 10.3389/fradi.2023.1326831
Ketki Kinkar, Brandon K. K. Fields, Mary W. Yamashita, Bino A. Varghese
Artificial intelligence (AI) applications in breast imaging span a wide range of tasks including decision support, risk assessment, patient management, quality assessment, treatment response assessment, and image enhancement. However, their integration into the clinical workflow has been slow due to the lack of a consensus on data quality, benchmarked robust implementation, and consensus-based guidelines to ensure standardization and generalization. Contrast-enhanced mammography (CEM) has improved sensitivity and specificity compared to current standards of breast cancer diagnostic imaging, i.e., mammography (MG) and/or conventional ultrasound (US), with accuracy comparable to MRI (the current diagnostic imaging benchmark) but at a much lower cost and higher throughput. This makes CEM an excellent tool for widespread breast lesion characterization for all women, including underserved and minority women. Underlining the critical need for early detection and accurate diagnosis of breast cancer, this review examines the limitations of conventional approaches and shows how AI can help overcome them. Methodical approaches such as image processing, feature extraction, quantitative analysis, lesion classification, lesion segmentation, integration with clinical data, early detection, and screening support have been carefully analysed in recent studies addressing breast cancer detection and diagnosis. Recent guidelines described by the Checklist for Artificial Intelligence in Medical Imaging (CLAIM), which establish a robust framework for rigorous evaluation and surveying, have inspired the current review criteria.
{"title":"Empowering breast cancer diagnosis and radiology practice: advances in artificial intelligence for contrast-enhanced mammography","authors":"Ketki Kinkar, Brandon K. K. Fields, Mary W. Yamashita, Bino A. Varghese","doi":"10.3389/fradi.2023.1326831","DOIUrl":"https://doi.org/10.3389/fradi.2023.1326831","url":null,"abstract":"Artificial intelligence (AI) applications in breast imaging span a wide range of tasks including decision support, risk assessment, patient management, quality assessment, treatment response assessment and image enhancement. However, their integration into the clinical workflow has been slow due to the lack of a consensus on data quality, benchmarked robust implementation, and consensus-based guidelines to ensure standardization and generalization. Contrast-enhanced mammography (CEM) has improved sensitivity and specificity compared to current standards of breast cancer diagnostic imaging i.e., mammography (MG) and/or conventional ultrasound (US), with comparable accuracy to MRI (current diagnostic imaging benchmark), but at a much lower cost and higher throughput. This makes CEM an excellent tool for widespread breast lesion characterization for all women, including underserved and minority women. Underlining the critical need for early detection and accurate diagnosis of breast cancer, this review examines the limitations of conventional approaches and reveals how AI can help overcome them. The Methodical approaches, such as image processing, feature extraction, quantitative analysis, lesion classification, lesion segmentation, integration with clinical data, early detection, and screening support have been carefully analysed in recent studies addressing breast cancer detection and diagnosis. Recent guidelines described by Checklist for Artificial Intelligence in Medical Imaging (CLAIM) to establish a robust framework for rigorous evaluation and surveying has inspired the current review criteria.","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"11 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139382926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Rising stars in neuroradiology: 2022
Pub Date: 2024-01-05 | DOI: 10.3389/fradi.2023.1349600
Thomas C. Booth
{"title":"Editorial: Rising stars in neuroradiology: 2022","authors":"Thomas C. Booth","doi":"10.3389/fradi.2023.1349600","DOIUrl":"https://doi.org/10.3389/fradi.2023.1349600","url":null,"abstract":"","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"13 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139384013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Radiomics and radiogenomics in genitourinary oncology: artificial intelligence and deep learning applications
Pub Date: 2023-12-18 | DOI: 10.3389/fradi.2023.1325594
Alessandro Stefano, Elena Bertelli, A. Comelli, Marco Gatti, A. Stanzione
{"title":"Editorial: Radiomics and radiogenomics in genitourinary oncology: artificial intelligence and deep learning applications","authors":"Alessandro Stefano, Elena Bertelli, A. Comelli, Marco Gatti, A. Stanzione","doi":"10.3389/fradi.2023.1325594","DOIUrl":"https://doi.org/10.3389/fradi.2023.1325594","url":null,"abstract":"","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"10 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139173884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standardized brain tumor imaging protocols for clinical trials: current recommendations and tips for integration
Pub Date: 2023-12-13 | DOI: 10.3389/fradi.2023.1267615
F. Sanvito, Timothy J. Kaufmann, T. Cloughesy, Patrick Y. Wen, B. Ellingson
Standardized MRI acquisition protocols are crucial for reducing the measurement and interpretation variability associated with response assessment in brain tumor clinical trials. The main challenge is that standardized protocols should ensure high image quality while maximizing the number of institutions meeting the acquisition requirements. In recent years, extensive effort has been made by consensus groups to propose different “ideal” and “minimum requirements” brain tumor imaging protocols (BTIPs) for gliomas, brain metastases (BM), and primary central nervous system lymphomas (PCNSL). In clinical practice, BTIPs for clinical trials can be easily integrated with additional MRI sequences that may be desired for clinical patient management at individual sites. In this review, we summarize the general concepts behind the choice and timing of sequences included in the current recommended BTIPs, provide a comparative overview, and discuss tips and caveats for integrating additional clinical or research sequences while preserving the recommended BTIPs. Finally, we reflect on potential future directions for brain tumor imaging in clinical trials.
{"title":"Standardized brain tumor imaging protocols for clinical trials: current recommendations and tips for integration","authors":"F. Sanvito, Timothy J. Kaufmann, T. Cloughesy, Patrick Y. Wen, B. Ellingson","doi":"10.3389/fradi.2023.1267615","DOIUrl":"https://doi.org/10.3389/fradi.2023.1267615","url":null,"abstract":"Standardized MRI acquisition protocols are crucial for reducing the measurement and interpretation variability associated with response assessment in brain tumor clinical trials. The main challenge is that standardized protocols should ensure high image quality while maximizing the number of institutions meeting the acquisition requirements. In recent years, extensive effort has been made by consensus groups to propose different “ideal” and “minimum requirements” brain tumor imaging protocols (BTIPs) for gliomas, brain metastases (BM), and primary central nervous system lymphomas (PCSNL). In clinical practice, BTIPs for clinical trials can be easily integrated with additional MRI sequences that may be desired for clinical patient management at individual sites. In this review, we summarize the general concepts behind the choice and timing of sequences included in the current recommended BTIPs, we provide a comparative overview, and discuss tips and caveats to integrate additional clinical or research sequences while preserving the recommended BTIPs. Finally, we also reflect on potential future directions for brain tumor imaging in clinical trials.","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"8 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139004949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feasibility of four-dimensional similarity filter for radiation dose reduction in dynamic myocardial computed tomography perfusion imaging
Pub Date: 2023-12-01 | DOI: 10.3389/fradi.2023.1214521
Yuta Yamamoto, Yuki Tanabe, Akira Kurata, Shuhei Yamamoto, Tomoyuki Kido, Teruyoshi Uetani, Shuntaro Ikeda, Shota Nakano, Osamu Yamaguchi, Teruhito Kido
Rationale and objectives: We aimed to evaluate the impact of four-dimensional noise reduction filtering using a four-dimensional similarity filter (4D-SF) on radiation dose reduction in dynamic myocardial computed tomography perfusion (CTP).
Materials and methods: Forty-three patients who underwent dynamic myocardial CTP using 320-row computed tomography (CT) were included in the study. The original images were reconstructed using iterative reconstruction (IR). Three different CTP datasets with simulated noise, corresponding to 25%, 50%, and 75% reduction of the original dose (300 mA), were reconstructed using a combination of IR and 4D-SF. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were assessed, and CT-derived myocardial blood flow (CT-MBF) was quantified. The results were compared between the original and simulated images with radiation dose reduction.
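SNR and CNR in this context are usually ROI-based. A minimal sketch under that assumption (SNR as mean myocardial attenuation over noise SD, CNR as blood-pool-to-myocardium contrast over noise SD; the exact ROI definitions used in the study are not given in this summary):

```python
# Sketch of ROI-based SNR/CNR for CT perfusion images; the ROI choices
# are assumptions, since the methods summary does not spell them out.
import numpy as np

def snr_cnr(myocardium_roi: np.ndarray, blood_pool_roi: np.ndarray,
            noise_roi: np.ndarray) -> tuple[float, float]:
    sd_noise = noise_roi.std()
    snr = myocardium_roi.mean() / sd_noise
    cnr = (blood_pool_roi.mean() - myocardium_roi.mean()) / sd_noise
    return float(snr), float(cnr)

# Hypothetical HU samples drawn from three ROIs:
rng = np.random.default_rng(1)
myo = rng.normal(100, 15, 500)    # myocardial ROI
blood = rng.normal(300, 20, 500)  # LV blood-pool ROI
noise = rng.normal(0, 12, 500)    # background/noise ROI
print(snr_cnr(myo, blood, noise))
```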
Results: The median SNR (first quartile-third quartile) at the original, 25%-, 50%-, and 75%-dose-reduced-simulated images with 4D-SF was 8.3 (6.5-10.2), 16.5 (11.9-21.7), 15.6 (11.0-20.1), and 12.8 (8.8-18.1), respectively, and that of CNR was 4.4 (3.2-5.8), 6.7 (4.6-10.3), 6.6 (4.3-10.1), and 5.5 (3.5-9.1). All the dose-reduced-simulated CTPs with 4D-SF had significantly higher SNR and CNR than the original images (p < 0.05 for each of the 25%-, 50%-, and 75%-dose-reduced comparisons). The CT-MBF in the 75%-dose-reduced-simulated CTP was significantly lower than in the 25%- and 50%-dose-reduced-simulated and original CTPs (p < 0.05 for each comparison).
Conclusion: 4D-SF has the potential to reduce the radiation dose associated with dynamic myocardial CTP imaging by half, without impairing the robustness of MBF quantification.
{"title":"Feasibility of four-dimensional similarity filter for radiation dose reduction in dynamic myocardial computed tomography perfusion imaging.","authors":"Yuta Yamamoto, Yuki Tanabe, Akira Kurata, Shuhei Yamamoto, Tomoyuki Kido, Teruyoshi Uetani, Shuntaro Ikeda, Shota Nakano, Osamu Yamaguchi, Teruhito Kido","doi":"10.3389/fradi.2023.1214521","DOIUrl":"https://doi.org/10.3389/fradi.2023.1214521","url":null,"abstract":"<p><strong>Rationale and objectives: </strong>We aimed to evaluate the impact of four-dimensional noise reduction filtering using a four-dimensional similarity filter (4D-SF) on radiation dose reduction in dynamic myocardial computed tomography perfusion (CTP).</p><p><strong>Materials and methods: </strong>Forty-three patients who underwent dynamic myocardial CTP using 320-row computed tomography (CT) were included in the study. The original images were reconstructed using iterative reconstruction (IR). Three different CTP datasets with simulated noise, corresponding to 25%, 50%, and 75% reduction of the original dose (300 mA), were reconstructed using a combination of IR and 4D-SF. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were assessed, and CT-derived myocardial blood flow (CT-MBF) was quantified. The results were compared between the original and simulated images with radiation dose reduction.</p><p><strong>Results: </strong>The median SNR (first quartile-third quartile) at the original, 25%-, 50%-, and 75%-dose reduced-simulated images with 4D-SF was 8.3 (6.5-10.2), 16.5 (11.9-21.7), 15.6 (11.0-20.1), and 12.8 (8.8-18.1) and that of CNR was 4.4 (3.2-5.8), 6.7 (4.6-10.3), 6.6 (4.3-10.1), and 5.5 (3.5-9.1), respectively. All the dose-reduced-simulated CTPs with 4D-SF had significantly higher image quality scores in SNR and CNR than the original ones (25%-, 50%-, and 75%-dose reduced vs. original images, <i>p</i> < 0.05, in each). The CT-MBF in 75%-dose reduced-simulated CTP was significantly lower than 25%-, 50%- dose-reduced-simulated, and original CTPs (vs. 75%-dose reduced-simulated images, <i>p</i> < 0.05, in each).</p><p><strong>Conclusion: </strong>4D-SF has the potential to reduce the radiation dose associated with dynamic myocardial CTP imaging by half, without impairing the robustness of MBF quantification.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1214521"},"PeriodicalIF":0.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10722229/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138814458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}