Pub Date : 2024-02-20eCollection Date: 2024-01-01DOI: 10.3389/fradi.2024.1307586
Practical guidance to identify and troubleshoot suboptimal DSC-MRI results
Melissa A Prah, Kathleen M Schmainda
Relative cerebral blood volume (rCBV) derived from dynamic susceptibility contrast (DSC) perfusion MR imaging (pMRI) has been shown to be a robust marker of neuroradiological tumor burden. Recent consensus recommendations for pMRI acquisition strategies have provided a pathway for pMRI inclusion in diverse patient care centers, regardless of size or experience. However, even with proper implementation and execution of the DSC-MRI protocol, issues will arise that many centers may not easily recognize or be aware of. Furthermore, missed pMRI issues are not always apparent in the resulting rCBV images, potentiating inaccurate or missed radiological diagnoses. Therefore, we gathered true-to-life examples from our database of DSC-MRI datasets showcasing breakdowns in acquisition, postprocessing, and interpretation, along with appropriate mitigation strategies where possible. The pMRI issues addressed include those related to image acquisition and postprocessing, with a focus on contrast agent administration, timing, and rate; signal-to-noise quality; and susceptibility artifact. The goal of this work is to provide guidance to minimize and recognize pMRI issues so that only quality data are interpreted.
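For orientation, the sketch below shows the standard ΔR2* integration that underlies rCBV computation, assuming a single-echo gradient-echo DSC acquisition. The function names are illustrative, and the commonly recommended preload and leakage-correction steps are deliberately omitted; this is a minimal sketch, not the authors' processing pipeline.

```python
import numpy as np

def delta_r2star(signal, s0, te):
    """Convert the DSC signal time course to ΔR2* via the standard
    single-echo gradient-echo relation: ΔR2*(t) = -ln(S(t)/S0) / TE."""
    return -np.log(signal / s0) / te

def rcbv_map(dsc, n_baseline, te, tr, ref_mask):
    """Relative CBV as the area under the ΔR2* curve, normalized to a
    reference region (e.g., normal-appearing white matter).

    dsc: 4D array (x, y, z, t); n_baseline: number of pre-bolus frames.
    Leakage correction is omitted here for brevity.
    """
    s0 = dsc[..., :n_baseline].mean(axis=-1, keepdims=True)
    dr2 = delta_r2star(dsc, s0, te)
    cbv = dr2.sum(axis=-1) * tr          # integrate ΔR2* over time
    return cbv / cbv[ref_mask].mean()    # normalize -> "relative" CBV
```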
{"title":"Practical guidance to identify and troubleshoot suboptimal DSC-MRI results.","authors":"Melissa A Prah, Kathleen M Schmainda","doi":"10.3389/fradi.2024.1307586","DOIUrl":"10.3389/fradi.2024.1307586","url":null,"abstract":"<p><p>Relative cerebral blood volume (rCBV) derived from dynamic susceptibility contrast (DSC) perfusion MR imaging (pMRI) has been shown to be a robust marker of neuroradiological tumor burden. Recent consensus recommendations in pMRI acquisition strategies have provided a pathway for pMRI inclusion in diverse patient care centers, regardless of size or experience. However, even with proper implementation and execution of the DSC-MRI protocol, issues will arise that many centers may not easily recognize or be aware of. Furthermore, missed pMRI issues are not always apparent in the resulting rCBV images, potentiating inaccurate or missed radiological diagnoses. Therefore, we gathered from our database of DSC-MRI datasets, true-to-life examples showcasing the breakdowns in acquisition, postprocessing, and interpretation, along with appropriate mitigation strategies when possible. The pMRI issues addressed include those related to image acquisition and postprocessing with a focus on contrast agent administration, timing, and rate, signal-to-noise quality, and susceptibility artifact. The goal of this work is to provide guidance to minimize and recognize pMRI issues to ensure that only quality data is interpreted.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"4 ","pages":"1307586"},"PeriodicalIF":0.0,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10913595/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140041019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-19eCollection Date: 2024-01-01DOI: 10.3389/fradi.2024.1330399
Deep-learning for automated detection of MSU deposits on DECT: evaluating impact on efficiency and reader confidence
Shahriar Faghani, Soham Patel, Nicholas G Rhodes, Garret M Powell, Francis I Baffour, Mana Moassefi, Katrina N Glazebrook, Bradley J Erickson, Christin A Tiegs-Heiden
Introduction: Dual-energy CT (DECT) is a non-invasive way to determine the presence of monosodium urate (MSU) crystals in the workup of gout. Color-coding distinguishes MSU from calcium following material decomposition and post-processing. Manually identifying these foci (most commonly labeled green) is tedious, and an automated detection system could streamline the process. This study evaluates the impact of a deep-learning (DL) algorithm, developed to detect green pixelations on DECT, on reader time, accuracy, and confidence.
Methods: We collected a sample of positive and negative DECTs, each reviewed twice (once with and once without the DL tool) with a 2-week washout period between reads. An attending musculoskeletal radiologist and a fellow reviewed the cases separately, simulating clinical workflow. Metrics such as reading time, confidence in diagnosis, and the tool's helpfulness were recorded and statistically analyzed.
Results: We included thirty DECTs from different patients. The DL tool significantly reduced reading time for the trainee radiologist (p = 0.02), but not for the attending radiologist (p = 0.15). Diagnostic confidence remained unchanged for both (p = 0.45). However, the DL model identified tiny MSU deposits that changed the diagnosis in two cases for the trainee radiologist and one case for the attending radiologist. In all three of these cases, the diagnosis was correct when the DL tool was used.
Conclusions: The implementation of the developed DL model slightly reduced reading time for our less experienced reader and led to improved diagnostic accuracy. There was no statistically significant difference in diagnostic confidence when studies were interpreted without and with the DL model.
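For context, the manual task that the DL tool automates, finding green-coded voxels, can be approximated by a naive color threshold. The sketch below is purely illustrative of that baseline; the threshold values are invented, and this is not the trained detector evaluated in the study.

```python
import numpy as np

def green_candidate_mask(rgb, dominance=1.3, min_intensity=60):
    """Flag pixels whose green channel clearly dominates red and blue.

    rgb: uint8 array (H, W, 3) from a color-coded DECT series.
    `dominance` and `min_intensity` are illustrative thresholds only;
    the study used a trained deep-learning detector instead.
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (g > dominance * r) & (g > dominance * b) & (g > min_intensity)

# A reader-assist overlay could highlight slices where the mask area
# exceeds a small threshold, mimicking the triage role of the DL tool.
```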
{"title":"Deep-learning for automated detection of MSU deposits on DECT: evaluating impact on efficiency and reader confidence.","authors":"Shahriar Faghani, Soham Patel, Nicholas G Rhodes, Garret M Powell, Francis I Baffour, Mana Moassefi, Katrina N Glazebrook, Bradley J Erickson, Christin A Tiegs-Heiden","doi":"10.3389/fradi.2024.1330399","DOIUrl":"10.3389/fradi.2024.1330399","url":null,"abstract":"<p><strong>Introduction: </strong>Dual-energy CT (DECT) is a non-invasive way to determine the presence of monosodium urate (MSU) crystals in the workup of gout. Color-coding distinguishes MSU from calcium following material decomposition and post-processing. Manually identifying these foci (most commonly labeled green) is tedious, and an automated detection system could streamline the process. This study aims to evaluate the impact of a deep-learning (DL) algorithm developed for detecting green pixelations on DECT on reader time, accuracy, and confidence.</p><p><strong>Methods: </strong>We collected a sample of positive and negative DECTs, reviewed twice-once with and once without the DL tool-with a 2-week washout period. An attending musculoskeletal radiologist and a fellow separately reviewed the cases, simulating clinical workflow. Metrics such as time taken, confidence in diagnosis, and the tool's helpfulness were recorded and statistically analyzed.</p><p><strong>Results: </strong>We included thirty DECTs from different patients. The DL tool significantly reduced the reading time for the trainee radiologist (<i>p</i> = 0.02), but not for the attending radiologist (<i>p</i> = 0.15). Diagnostic confidence remained unchanged for both (<i>p</i> = 0.45). However, the DL model identified tiny MSU deposits that led to a change in diagnosis in two cases for the in-training radiologist and one case for the attending radiologist. In 3/3 of these cases, the diagnosis was correct when using DL.</p><p><strong>Conclusions: </strong>The implementation of the developed DL model slightly reduced reading time for our less experienced reader and led to improved diagnostic accuracy. There was no statistically significant difference in diagnostic confidence when studies were interpreted without and with the DL model.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"4 ","pages":"1330399"},"PeriodicalIF":0.0,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10909828/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140029701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-15eCollection Date: 2024-01-01DOI: 10.3389/fradi.2024.1339612
Beyond images: an integrative multi-modal approach to chest x-ray report generation
Nurbanu Aksoy, Serge Sharoff, Selcuk Baser, Nishant Ravikumar, Alejandro F Frangi
Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images. Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists. In this paper, we present a novel multi-modal deep neural network framework for generating chest x-ray reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes. We introduce a conditioned cross-multi-head attention module to fuse these heterogeneous data modalities, bridging the semantic gap between visual and textual data. Experiments demonstrate substantial improvements from using additional modalities compared with relying on images alone. Notably, our model achieves the highest reported performance on the ROUGE-L metric among relevant state-of-the-art models in the literature. Furthermore, we employed human evaluation and clinical semantic similarity measurement alongside word-overlap metrics to deepen the quantitative analysis. A human evaluation, conducted by a board-certified radiologist, confirms the model's accuracy in identifying high-level findings; however, it also highlights that further improvement is needed to capture nuanced details and clinical context.
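The fusion step lends itself to a compact illustration. Below is a minimal PyTorch sketch of cross-modal attention in which visual tokens attend to embeddings of the structured data and notes; the dimensions, the residual-plus-norm wiring, and the class name are illustrative assumptions, not the paper's exact conditioned cross-multi-head attention module.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Visual tokens attend to non-image context (vitals, symptoms,
    notes embeddings). A generic sketch, not the paper's module."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual, context):
        # visual:  (B, Nv, dim) image-patch features
        # context: (B, Nc, dim) embedded structured data + clinical notes
        fused, _ = self.attn(query=visual, key=context, value=context)
        return self.norm(visual + fused)  # residual connection

# Example shapes: 49 visual tokens attending to 16 context tokens.
tokens = CrossModalFusion()(torch.randn(2, 49, 512), torch.randn(2, 16, 512))
```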
{"title":"Beyond images: an integrative multi-modal approach to chest x-ray report generation.","authors":"Nurbanu Aksoy, Serge Sharoff, Selcuk Baser, Nishant Ravikumar, Alejandro F Frangi","doi":"10.3389/fradi.2024.1339612","DOIUrl":"10.3389/fradi.2024.1339612","url":null,"abstract":"<p><p>Image-to-text radiology report generation aims to automatically produce radiology reports that describe the findings in medical images. Most existing methods focus solely on the image data, disregarding the other patient information accessible to radiologists. In this paper, we present a novel multi-modal deep neural network framework for generating chest x-rays reports by integrating structured patient data, such as vital signs and symptoms, alongside unstructured clinical notes. We introduce a conditioned cross-multi-head attention module to fuse these heterogeneous data modalities, bridging the semantic gap between visual and textual data. Experiments demonstrate substantial improvements from using additional modalities compared to relying on images alone. Notably, our model achieves the highest reported performance on the ROUGE-L metric compared to relevant state-of-the-art models in the literature. Furthermore, we employed both human evaluation and clinical semantic similarity measurement alongside word-overlap metrics to improve the depth of quantitative analysis. A human evaluation, conducted by a board-certified radiologist, confirms the model's accuracy in identifying high-level findings, however, it also highlights that more improvement is needed to capture nuanced details and clinical context.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"4 ","pages":"1339612"},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10902135/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139998818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-31eCollection Date: 2024-01-01DOI: 10.3389/fradi.2024.1085834
In-vitro gadolinium retro-microdialysis in agarose gel: a human brain phantom study
Chisomo Zimphango, Marius O Mada, Stephen J Sawiak, Susan Giorgi-Coll, T Adrian Carpenter, Peter J Hutchinson, Keri L H Carpenter, Matthew G Stovell
Rationale and objectives: Cerebral microdialysis is a technique that enables monitoring of the neurochemistry of patients with significant acquired brain injury, such as traumatic brain injury (TBI) and subarachnoid haemorrhage (SAH). Cerebral microdialysis can also be used to characterise the neuro-pharmacokinetics of small-molecule study substrates using retrodialysis/retromicrodialysis. However, challenges remain: (i) there is no simple, stable, and inexpensive brain tissue model for the study of drug neuropharmacology; and (ii) it is unclear how far small study molecules administered via retrodialysis diffuse within the human brain.
Materials and methods: Here, we studied the radial diffusion distance of the small molecule gadolinium-DTPA from microdialysis catheters in a newly developed, simple, stable, and inexpensive brain tissue model, as a precursor for in-vivo studies. Brain tissue models consisting of 0.65% weight/volume agarose gel in two kinds of buffer were created. The distribution of the paramagnetic contrast agent gadolinium-DTPA (Gd-DTPA), perfused from microdialysis catheters, was characterized using magnetic resonance imaging (MRI) as a surrogate for other small-molecule study substrates.
Results: We found the mean radial diffusion distance of Gd-DTPA to be 18.5 mm after 24 h (p < 0.0001).
Conclusion: Our brain tissue model provides avenues for further tests and research into infusion studies using cerebral microdialysis, and consequently effective focal drug delivery for patients with TBI and other brain disorders.
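As a rough plausibility check, not a calculation from the paper, the reported distance can be converted into an apparent diffusion coefficient if it is treated as an RMS displacement under a simple radial-diffusion assumption:

```python
# Back-of-envelope only: treat r = 18.5 mm at t = 24 h as an RMS
# displacement and assume simple 2D radial diffusion, r**2 ≈ 4*D*t.
r = 18.5e-3                 # m
t = 24 * 3600               # s
D = r**2 / (4 * t)          # ≈ 9.9e-10 m**2/s, i.e., ~1.0e-3 mm**2/s
print(f"apparent D ≈ {D:.2e} m^2/s")
```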
{"title":"<i>In-vitro</i> gadolinium retro-microdialysis in agarose gel-a human brain phantom study.","authors":"Chisomo Zimphango, Marius O Mada, Stephen J Sawiak, Susan Giorgi-Coll, T Adrian Carpenter, Peter J Hutchinson, Keri L H Carpenter, Matthew G Stovell","doi":"10.3389/fradi.2024.1085834","DOIUrl":"10.3389/fradi.2024.1085834","url":null,"abstract":"<p><strong>Rationale and objectives: </strong>Cerebral microdialysis is a technique that enables monitoring of the neurochemistry of patients with significant acquired brain injury, such as traumatic brain injury (TBI) and subarachnoid haemorrhage (SAH). Cerebral microdialysis can also be used to characterise the neuro-pharmacokinetics of small-molecule study substrates using retrodialysis/retromicrodialysis. However, challenges remain: (i) lack of a simple, stable, and inexpensive brain tissue model for the study of drug neuropharmacology; and (ii) it is unclear how far small study-molecules administered via retrodialysis diffuse within the human brain.</p><p><strong>Materials and methods: </strong>Here, we studied the radial diffusion distance of small-molecule gadolinium-DTPA from microdialysis catheters in a newly developed, simple, stable, inexpensive brain tissue model as a precursor for in-vivo studies. Brain tissue models consisting of 0.65% weight/volume agarose gel in two kinds of buffers were created. The distribution of a paramagnetic contrast agent gadolinium-DTPA (Gd-DTPA) perfusion from microdialysis catheters using magnetic resonance imaging (MRI) was characterized as a surrogate for other small-molecule study substrates.</p><p><strong>Results: </strong>We found the mean radial diffusion distance of Gd-DTPA to be 18.5 mm after 24 h (<i>p</i> < 0.0001).</p><p><strong>Conclusion: </strong>Our brain tissue model provides avenues for further tests and research into infusion studies using cerebral microdialysis, and consequently effective focal drug delivery for patients with TBI and other brain disorders.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"4 ","pages":"1085834"},"PeriodicalIF":0.0,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10864450/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139736855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-23DOI: 10.3389/fradi.2024.1320535
Endovascular navigation in patients: vessel-based registration of electromagnetic tracking to preoperative images
Erik Nypan, Geir Arne Tangen, Reidar Brekken, Petter Aadahl, F. Manstad-Hulaas
Electromagnetic tracking of instruments combined with preoperative images can supplement fluoroscopy for guiding endovascular aortic repair (EVAR). The aim of this study was to evaluate the in-vivo accuracy of a vessel-based registration algorithm for matching electromagnetically tracked positions of an endovascular instrument to preoperative computed tomography angiography. Five patients undergoing elective EVAR were included, and a clinically available semi-automatic 3D-3D registration algorithm, based on similarity measures computed over the entire image, was used as the reference. Accuracy was reported as the target registration error (TRE), evaluated at manually selected anatomic landmarks on bony structures close to the volume of interest. The median TRE was 8.2 mm (range: 7.1 mm to 16.1 mm) for the vessel-based registration algorithm, compared with 2.2 mm (range: 1.8 mm to 3.7 mm) for the reference algorithm. This illustrates that registration based on intraoperative electromagnetic tracking is feasible, but its accuracy must be improved before clinical use.
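For reference, the reported metric is straightforward to compute. The sketch below shows a generic TRE evaluation over corresponding landmarks; the `transform` callable stands in for whichever registration is being assessed and is an assumption, not the study's code.

```python
import numpy as np

def target_registration_error(points_tracked, points_ct, transform):
    """TRE: Euclidean distances between transformed tracking-space
    landmarks and their CT counterparts (both in mm).

    points_tracked, points_ct: (N, 3) arrays of corresponding points;
    transform: callable mapping a tracking-space point into CT space.
    """
    registered = np.array([transform(p) for p in points_tracked])
    errors = np.linalg.norm(registered - np.asarray(points_ct), axis=1)
    return np.median(errors), errors
```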
{"title":"Endovascular navigation in patients: vessel-based registration of electromagnetic tracking to preoperative images","authors":"Erik Nypan, Geir Arne Tangen, Reidar Brekken, Petter Aadahl, F. Manstad-Hulaas","doi":"10.3389/fradi.2024.1320535","DOIUrl":"https://doi.org/10.3389/fradi.2024.1320535","url":null,"abstract":"Electromagnetic tracking of instruments combined with preoperative images can supplement fluoroscopy for guiding endovascular aortic repair (EVAR). The aim of this study was to evaluate the in-vivo accuracy of a vessel-based registration algorithm for matching electromagnetically tracked positions of an endovascular instrument to preoperative computed tomography angiography. Five patients undergoing elective EVAR were included, and a clinically available semi-automatic 3D–3D registration algorithm, based on similarity measures computed over the entire image, was used for reference. Accuracy was reported as target registration error (TRE) evaluated in manually selected anatomic landmarks on bony structures, placed close to the volume-of-interest. The median TRE was 8.2 mm (range: 7.1 mm to 16.1 mm) for the vessel-based registration algorithm, compared to 2.2 mm (range: 1.8 mm to 3.7 mm) for the reference algorithm. This illustrates that registration based on intraoperative electromagnetic tracking is feasible, but the accuracy must be improved before clinical use.","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"61 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139603264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-18DOI: 10.3389/fradi.2023.1327075
High resolution and contrast 7 tesla MR brain imaging of the neonate
Pip Bridgen, Raphaël Tomi-Tricot, Alena Uus, Daniel Cromb, Megan Quirke, J. Almalbis, Beya Bonse, Miguel De la Fuente Botella, Alessandra Maggioni, Pierluigi Di Cio, Pauline A. Cawley, Chiara Casella, A. S. Dokumacı, Alice R. Thomson, Jucha Willers Moore, Devi Bridglal, Joao Saravia, Thomas Finck, Anthony N. Price, Elisabeth Pickles, Lucilio Cordero-Grande, Alexia Egloff, J. O’Muircheartaigh, S. Counsell, Sharon Giles, M. Deprez, Enrico De Vita, M. Rutherford, A. D. Edwards, J. Hajnal, Shaihan J. Malik, T. Arichi
Ultra-high field MR imaging offers marked gains in signal-to-noise ratio, spatial resolution, and contrast, which translate to improved pathological and anatomical sensitivity. These benefits are particularly relevant for the neonatal brain, which is rapidly developing and sensitive to injury. However, experience of imaging neonates at 7T has been limited due to regulatory, safety, and practical considerations. We aimed to establish a program for safely acquiring high resolution and contrast brain images from neonates on a 7T system.

Images were acquired from 35 neonates on 44 occasions (median age 39+6 postmenstrual weeks, range 33+4 to 52+6; median body weight 2.93 kg, range 1.57 to 5.3 kg) over a median scan time of 49 min 30 s. Peripheral body temperature and physiological measures were recorded throughout scanning. Acquired sequences included T2-weighted imaging (TSE), actual flip angle imaging (AFI), functional MRI (BOLD EPI), susceptibility weighted imaging (SWI), and MR spectroscopy (STEAM).

There was no significant difference between temperature before and after scanning (p = 0.76), and image quality compared favorably to state-of-the-art 3T acquisitions. Anatomical imaging demonstrated excellent sensitivity to structures that are typically hard to visualize at lower field strengths, including the hippocampus, cerebellum, and vasculature. Images were also acquired with contrast mechanisms that are enhanced at ultra-high field, including susceptibility weighted imaging, functional MRI, and MR spectroscopy.

We demonstrate the safety and feasibility of imaging vulnerable neonates at ultra-high field and highlight the untapped potential for providing important new insights into brain development and pathological processes during this critical phase of early life.
{"title":"High resolution and contrast 7 tesla MR brain imaging of the neonate","authors":"Pip Bridgen, Raphaël Tomi-Tricot, Alena Uus, Daniel Cromb, Megan Quirke, J. Almalbis, Beya Bonse, Miguel De la Fuente Botella, Alessandra Maggioni, Pierluigi Di Cio, Pauline A. Cawley, Chiara Casella, A. S. Dokumacı, Alice R. Thomson, Jucha Willers Moore, Devi Bridglal, Joao Saravia, Thomas Finck, Anthony N. Price, Elisabeth Pickles, Lucilio Cordero-Grande, Alexia Egloff, J. O’Muircheartaigh, S. Counsell, Sharon Giles, M. Deprez, Enrico De Vita, M. Rutherford, A. D. Edwards, J. Hajnal, Shaihan J. Malik, T. Arichi","doi":"10.3389/fradi.2023.1327075","DOIUrl":"https://doi.org/10.3389/fradi.2023.1327075","url":null,"abstract":"Ultra-high field MR imaging offers marked gains in signal-to-noise ratio, spatial resolution, and contrast which translate to improved pathological and anatomical sensitivity. These benefits are particularly relevant for the neonatal brain which is rapidly developing and sensitive to injury. However, experience of imaging neonates at 7T has been limited due to regulatory, safety, and practical considerations. We aimed to establish a program for safely acquiring high resolution and contrast brain images from neonates on a 7T system.Images were acquired from 35 neonates on 44 occasions (median age 39 + 6 postmenstrual weeks, range 33 + 4 to 52 + 6; median body weight 2.93 kg, range 1.57 to 5.3 kg) over a median time of 49 mins 30 s. Peripheral body temperature and physiological measures were recorded throughout scanning. Acquired sequences included T2 weighted (TSE), Actual Flip angle Imaging (AFI), functional MRI (BOLD EPI), susceptibility weighted imaging (SWI), and MR spectroscopy (STEAM).There was no significant difference between temperature before and after scanning (p = 0.76) and image quality assessment compared favorably to state-of-the-art 3T acquisitions. Anatomical imaging demonstrated excellent sensitivity to structures which are typically hard to visualize at lower field strengths including the hippocampus, cerebellum, and vasculature. Images were also acquired with contrast mechanisms which are enhanced at ultra-high field including susceptibility weighted imaging, functional MRI, and MR spectroscopy.We demonstrate safety and feasibility of imaging vulnerable neonates at ultra-high field and highlight the untapped potential for providing important new insights into brain development and pathological processes during this critical phase of early life.","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"105 26","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139615858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-18DOI: 10.3389/fradi.2023.1336902
Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors
P. Raut, G. Baldini, M. Schöneck, L. Caldeira
Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automated using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained on heterogeneous volumetric imaging data such as MRI, CT, and PET. However, DL-based methods are usually only applicable when the full set of required inputs is present; if one required input is missing, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients from the BraTS2021 dataset, comprising T1w, T2w, T1CE, and FLAIR sequences with corresponding multi-label segmentations, were used to train the Pix2PixNIfTI model to generate synthetic MRI images for each image contrast. The segmentation model, DeepMedic, was trained for brain tumor segmentation with five-fold cross-validation and tested using the original inputs as the gold standard. The trained segmentation models were then applied to synthetic images replacing the missing input, in combination with the other original images, to assess the efficacy of the generated images for multi-class segmentation. For multi-class segmentation using synthetic data or fewer inputs, the Dice scores were significantly reduced but remained similar in range for the whole tumor when compared with segmentation of the original images (e.g., mean Dice of synthetic T2w prediction: NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). Standard paired t-tests with multiple-comparison correction were performed to assess the differences between all regions (p < 0.05). The study concludes that Pix2PixNIfTI allows brain tumors to be segmented when one input image is missing.
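The evaluation hinges on the Dice coefficient. A minimal sketch of the per-region metric follows; the label conventions (NC, ED, CET, WT) are taken from the abstract, and everything else is generic rather than the study's code.

```python
import numpy as np

def dice_score(pred, gt, label):
    """Dice coefficient for one labeled tumor sub-region
    (e.g., NC, ED, or CET) in two integer label maps."""
    p, g = (pred == label), (gt == label)
    denom = p.sum() + g.sum()
    return 2.0 * (p & g).sum() / denom if denom else 1.0

# Whole-tumor (WT) Dice merges all tumor labels into one foreground:
# dice_score(pred > 0, gt > 0, True)
```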
{"title":"Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors","authors":"P. Raut, G. Baldini, M. Schöneck, L. Caldeira","doi":"10.3389/fradi.2023.1336902","DOIUrl":"https://doi.org/10.3389/fradi.2023.1336902","url":null,"abstract":"Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automatically achieved using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained using heterogeneous volumetric imaging data such as MRI, CT, and PET, among others. However, DL-based methods are usually only applicable in the presence of the desired number of inputs. In the absence of one of the required inputs, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients of the BraTS2021 dataset comprising sequences such as T1w, T2w, T1CE, and FLAIR images equipped with respective multi-label segmentation were used. This dataset was used for training the Pix2PixNIfTI model for generating synthetic MRI images of all the image contrasts. The segmentation model, namely DeepMedic, was trained in a five-fold cross-validation manner for brain tumor segmentation and tested using the original inputs as the gold standard. The inference of trained segmentation models was later applied to synthetic images replacing missing input, in combination with other original images to identify the efficacy of generated images in achieving multi-class segmentation. For the multi-class segmentation using synthetic data or lesser inputs, the dice scores were observed to be significantly reduced but remained similar in range for the whole tumor when compared with evaluated original image segmentation (e.g. mean dice of synthetic T2w prediction NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). A standard paired t-tests with multiple comparison correction were performed to assess the difference between all regions (p < 0.05). The study concludes that the use of Pix2PixNIfTI allows us to segment brain tumors when one input image is missing.","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"110 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139615471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-08eCollection Date: 2023-01-01DOI: 10.3389/fradi.2023.1274273
RoMIA: a framework for creating Robust Medical Imaging AI models for chest radiographs
Aditi Anand, Sarada Krithivasan, Kaushik Roy
Artificial Intelligence (AI) methods, particularly Deep Neural Networks (DNNs), have shown great promise in a range of medical imaging tasks. However, the susceptibility of DNNs to producing erroneous outputs in the presence of input noise and variations is of great concern and one of the largest challenges to their adoption in medical settings. To address this challenge, we explore the robustness of DNNs trained for chest radiograph classification under a range of perturbations reflective of clinical settings. We propose RoMIA, a framework for creating Robust Medical Imaging AI models. RoMIA adds three key steps to the model training and deployment flow: (i) noise-added training, wherein a part of the training data is synthetically transformed to represent common noise sources; (ii) fine-tuning with input mixing, in which the model is refined with inputs formed by mixing data from the original training set with a small number of images from a different source; and (iii) DCT-based denoising, which removes a fraction of the high-frequency components of each image before the model classifies it. We applied RoMIA to create six different robust models for classifying chest radiographs using the CheXpert dataset. We evaluated the models on the CheXphoto dataset, which consists of naturally and synthetically perturbed images intended to evaluate robustness. Models produced by RoMIA show a 3%-5% improvement in robust accuracy, corresponding to an average reduction of 22.6% in misclassifications. These results suggest that RoMIA can be a useful step towards enabling the adoption of AI models in medical imaging applications.
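Of the three steps, DCT-based denoising is the most self-contained. A minimal sketch under simplifying assumptions follows: the square low-pass mask and the `keep` fraction are illustrative choices, not the exact parameters used by RoMIA.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_denoise(image, keep=0.75):
    """Zero out the highest-frequency DCT coefficients of a 2D image
    before classification, in the spirit of RoMIA's step (iii)."""
    coeffs = dctn(image, norm="ortho")
    h, w = coeffs.shape
    mask = np.zeros_like(coeffs)
    mask[: int(h * keep), : int(w * keep)] = 1.0  # retain low frequencies
    return idctn(coeffs * mask, norm="ortho")
```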
{"title":"RoMIA: a framework for creating Robust Medical Imaging AI models for chest radiographs.","authors":"Aditi Anand, Sarada Krithivasan, Kaushik Roy","doi":"10.3389/fradi.2023.1274273","DOIUrl":"10.3389/fradi.2023.1274273","url":null,"abstract":"<p><p>Artificial Intelligence (AI) methods, particularly Deep Neural Networks (DNNs), have shown great promise in a range of medical imaging tasks. However, the susceptibility of DNNs to producing erroneous outputs under the presence of input noise and variations is of great concern and one of the largest challenges to their adoption in medical settings. Towards addressing this challenge, we explore the robustness of DNNs trained for chest radiograph classification under a range of perturbations reflective of clinical settings. We propose RoMIA, a framework for the creation of Robust Medical Imaging AI models. RoMIA adds three key steps to the model training and deployment flow: (i) Noise-added training, wherein a part of the training data is synthetically transformed to represent common noise sources, (ii) Fine-tuning with input mixing, in which the model is refined with inputs formed by mixing data from the original training set with a small number of images from a different source, and (iii) DCT-based denoising, which removes a fraction of high-frequency components of each image before applying the model to classify it. We applied RoMIA to create six different robust models for classifying chest radiographs using the CheXpert dataset. We evaluated the models on the CheXphoto dataset, which consists of naturally and synthetically perturbed images intended to evaluate robustness. Models produced by RoMIA show 3%-5% improvement in robust accuracy, which corresponds to an average reduction of 22.6% in misclassifications. These results suggest that RoMIA can be a useful step towards enabling the adoption of AI models in medical imaging applications.</p>","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"3 ","pages":"1274273"},"PeriodicalIF":0.0,"publicationDate":"2024-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10800823/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139522371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-01-05DOI: 10.3389/fradi.2023.1305390
Imaging spectrum of amyloid-related imaging abnormalities associated with aducanumab immunotherapy
H. Sotoudeh, Mohammadreza Alizadeh, Ramin Shahidi, Parnian Shobeiri, Z. Saadatpour, C. A. Wheeler, Marissa Natelson Love, Manoj Tanwar
Alzheimer's disease (AD) is a leading cause of morbidity. Management of AD has traditionally been aimed at symptom relief rather than disease modification. Recently, AD research has begun to shift focus towards therapies that can alter the progression of the disease. In this context, a class of immunotherapy agents known as monoclonal antibodies targets diverse cerebral amyloid-beta (Aβ) epitopes to inhibit disease progression. Aducanumab was authorized by the US Food and Drug Administration (FDA) to treat AD on June 7, 2021. It has shown promising clinical and biomarker efficacy but is associated with amyloid-related imaging abnormalities (ARIA). Neuroradiologists play a critical role in diagnosing ARIA, necessitating familiarity with this condition. This pictorial review appraises the radiologic presentation of ARIA in patients on aducanumab.
{"title":"Imaging spectrum of amyloid-related imaging abnormalities associated with aducanumab immunotherapy","authors":"H. Sotoudeh, Mohammadreza Alizadeh, Ramin Shahidi, Parnian Shobeiri, Z. Saadatpour, C. A. Wheeler, Marissa Natelson Love, Manoj Tanwar","doi":"10.3389/fradi.2023.1305390","DOIUrl":"https://doi.org/10.3389/fradi.2023.1305390","url":null,"abstract":"Alzheimer's Disease (AD) is a leading cause of morbidity. Management of AD has traditionally been aimed at symptom relief rather than disease modification. Recently, AD research has begun to shift focus towards disease-modifying therapies that can alter the progression of AD. In this context, a class of immunotherapy agents known as monoclonal antibodies target diverse cerebral amyloid-beta (Aβ) epitopes to inhibit disease progression. Aducanumab was authorized by the US Food and Drug Administration (FDA) to treat AD on June 7, 2021. Aducanumab has shown promising clinical and biomarker efficacy but is associated with amyloid-related imaging abnormalities (ARIA). Neuroradiologists play a critical role in diagnosing ARIA, necessitating familiarity with this condition. This pictorial review will appraise the radiologic presentation of ARIA in patients on aducanumab.","PeriodicalId":73101,"journal":{"name":"Frontiers in radiology","volume":"101 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139383592","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}