Pub Date: 2024-09-01 | Epub Date: 2024-10-17 | DOI: 10.1117/1.JMI.11.5.053502
Nicholas Felice, Benjamin Wildman-Tobriner, William Paul Segars, Mustafa R Bashir, Daniele Marin, Ehsan Samei, Ehsan Abadi
Purpose: Photon-counting computed tomography (PCCT) has the potential to provide superior image quality to energy-integrating CT (EICT). We objectively compare PCCT to EICT for liver lesion detection.
Approach: Fifty anthropomorphic, computational phantoms with inserted liver lesions were generated. Contrast-enhanced scans of each phantom were simulated at the portal venous phase. The acquisitions were done using DukeSim, a validated CT simulation platform. Scans were simulated at two dose levels (CTDIvol 1.5 to 6.0 mGy) modeling PCCT (NAEOTOM Alpha, Siemens, Erlangen, Germany) and EICT (SOMATOM Flash, Siemens). Images were reconstructed with varying levels of kernel sharpness (soft, medium, sharp). To provide a quantitative estimate of image quality, the modulation transfer function (MTF), frequency at 50% of the MTF (f50), noise magnitude, contrast-to-noise ratio (CNR, per lesion), and detectability index (d', per lesion) were measured.
Results: Across all studied conditions, the best detection performance, measured by d', was for PCCT images with the highest dose level and softest kernel. With soft kernel reconstruction, PCCT demonstrated improved lesion CNR and d' compared with EICT, with a mean increase in CNR of 35.0% (p < 0.001) and 21% (p < 0.001) and a mean increase in d' of 41.0% (p < 0.001) and 23.3% (p = 0.007) for the 1.5 and 6.0 mGy acquisitions, respectively. The improvements were greatest for larger phantoms, low-contrast lesions, and low-dose scans.
Conclusions: PCCT demonstrated objective improvement in liver lesion detection and image quality metrics compared with EICT. These advances may lead to earlier and more accurate liver lesion detection, thus improving patient care.
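The per-lesion metrics above can be illustrated numerically. The sketch below computes CNR from lesion and background ROIs, plus a crude non-prewhitening-style proxy for the detectability index (d' ≈ CNR·√N for a uniform signal in white noise). The study itself uses a task-based observer model within the DukeSim framework, so the function names, the d' proxy, and all numbers here are illustrative assumptions only.

```python
import numpy as np

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio from two regions of interest."""
    contrast = abs(lesion_roi.mean() - background_roi.mean())
    noise = background_roi.std(ddof=1)  # noise magnitude from the background
    return contrast / noise

def detectability_npw(lesion_roi, background_roi):
    """Toy non-prewhitening-style proxy: d' ~ CNR * sqrt(signal pixels).
    Valid only for a uniform signal in white noise, not the paper's model."""
    return cnr(lesion_roi, background_roi) * np.sqrt(lesion_roi.size)

# Synthetic HU-like ROIs standing in for a hyperdense liver lesion.
rng = np.random.default_rng(0)
background = rng.normal(100.0, 10.0, size=(40, 40))
lesion = rng.normal(130.0, 10.0, size=(10, 10))
print(cnr(lesion, background), detectability_npw(lesion, background))
```

Under this proxy, larger lesions and lower noise both raise d', which is consistent with the direction of the reported dose and kernel effects.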
Title: Photon-counting computed tomography versus energy-integrating computed tomography for detection of small liver lesions: comparison using a virtual imaging framework. Journal of Medical Imaging 11(5), 053502.
Pub Date: 2024-09-01 | Epub Date: 2024-10-28 | DOI: 10.1117/1.JMI.11.5.050101
Elias Levy, Bennett Landman
The editorial evaluates how the GenAI technologies available in 2024 (without specific coding) could impact scientific processes, exploring two AI tools with the aim of demonstrating what happens when using custom LLMs in five research lab workflows.
Title: ChatGP-Me? Journal of Medical Imaging 11(5), 050101.
Pub Date: 2024-07-01 | Epub Date: 2024-07-25 | DOI: 10.1117/1.JMI.11.4.043501
Diego Rosich, Margarita Chevalier, Adrián Belarra, Tatiana Alieva
Purpose: Propagation and speckle-based techniques allow reconstruction of the phase of an X-ray beam with a simple experimental setup. Furthermore, their implementation is feasible using low-coherence laboratory X-ray sources. We investigate different approaches to include X-ray polychromaticity for sample thickness recovery using such techniques.
Approach: Single-shot Paganin (PT) and Arhatari (AT) propagation-based techniques and a speckle-based technique (ST) are considered. The polychromaticity of the radiation beam is addressed using three different averaging approaches. The emission-detection process is considered for modulating the X-ray beam spectrum. The reconstructed thicknesses of three nylon-6 fibers with millimeter-range diameters, placed at various object-detector distances, are analyzed. In addition, the thickness of an in-house breast phantom is recovered using the multi-material Paganin technique (MPT) and compared with micro-CT data.
Results: The best quantitative result is obtained for PT and ST combined with the sample thickness averaging (TA) approach, which involves individual thickness recovery for each X-ray spectral component, at the smallest considered object-detector distance. The error in the recovered fiber diameters for both techniques is <4%, despite the higher noise level in ST images. All cases provide estimates of fiber diameter ratios with an error of 3% with respect to the nominal diameter ratios. The breast phantom thickness difference between MPT-TA and micro-CT is about 10%.
Conclusions: We demonstrate the single-shot PT-TA and ST-TA techniques feasibility for thickness recovery of millimeter-sized samples using polychromatic microfocus X-ray sources. The application of MPT-TA for thicker and multi-material samples is promising.
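The PT technique above is Paganin's single-shot method for a homogeneous object. A minimal monochromatic sketch is given below; the paper's polychromatic averaging approaches (including TA) are not reproduced, and `delta` (refraction decrement) and `mu` (linear attenuation coefficient) are assumed material constants.

```python
import numpy as np

def paganin_thickness(intensity, i0, pixel_size, dist, delta, mu):
    """Single-shot Paganin retrieval for a homogeneous object:
    T = -(1/mu) * ln( IFFT[ FFT[I/I0] / (1 + dist*delta/mu * |k|^2) ] )."""
    ny, nx = intensity.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    lowpass = 1.0 + (dist * delta / mu) * k2   # Lorentzian low-pass filter
    filtered = np.real(np.fft.ifft2(np.fft.fft2(intensity / i0) / lowpass))
    return -np.log(np.clip(filtered, 1e-12, None)) / mu
```

At `dist = 0` the filter is unity and the retrieval reduces to the Beer-Lambert thickness -ln(I/I0)/mu, which makes a convenient sanity check.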
Title: Exploring single-shot propagation and speckle based phase recovery techniques for object thickness estimation by using a polychromatic X-ray laboratory source. Journal of Medical Imaging 11(4), 043501.
Purpose: Deep learning is the standard for medical image segmentation. However, it may encounter difficulties when the training set is small. Also, it may generate anatomically aberrant segmentations. Anatomical knowledge can be potentially useful as a constraint in deep learning segmentation methods. We propose a loss function based on projected pooling to introduce soft topological constraints. Our main application is the segmentation of the red nucleus from quantitative susceptibility mapping (QSM) which is of interest in parkinsonian syndromes.
Approach: This new loss function introduces soft constraints on the topology by magnifying small parts of the structure to be segmented so that they are not discarded during segmentation. To that end, we project the structure onto the three planes and then apply a series of MaxPooling operations with increasing kernel sizes. These operations are performed on both the ground truth and the prediction, and the difference is computed to obtain the loss function. As a result, the loss can reduce topological errors as well as defects in the structure boundary. The approach is easy to implement and computationally efficient.
Results: When applied to the segmentation of the red nucleus from QSM data, the approach led to a very high accuracy (Dice 89.9%) and no topological errors. Moreover, the proposed loss function improved the Dice accuracy over the baseline when the training set was small. We also studied three tasks from the medical segmentation decathlon challenge (MSD) (heart, spleen, and hippocampus). For the MSD tasks, the Dice accuracies were similar for both approaches but the topological errors were reduced.
Conclusions: We propose an effective method to automatically segment the red nucleus which is based on a new loss for introducing topology constraints in deep learning segmentation.
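The projection-plus-MaxPooling construction can be sketched in NumPy as below. This assumes binary masks, non-overlapping pooling, and a hypothetical kernel schedule; the published loss operates on soft network outputs with its own kernel sizes and normalization.

```python
import numpy as np

def max_pool2d(img, k):
    """Non-overlapping k x k max pooling (crops to a multiple of k)."""
    h, w = (img.shape[0] // k) * k, (img.shape[1] // k) * k
    return img[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

def projected_pooling_loss(pred, gt, kernels=(1, 2, 4, 8)):
    """Project both volumes onto the three planes, max-pool each projection
    at increasing kernel sizes, and average the absolute differences, so a
    small missed component still produces a visible penalty after pooling."""
    terms = []
    for axis in range(3):
        p_proj, g_proj = pred.max(axis=axis), gt.max(axis=axis)
        for k in kernels:
            terms.append(np.abs(max_pool2d(p_proj, k) - max_pool2d(g_proj, k)).mean())
    return float(np.mean(terms))
```

Because max-pooling preserves a one-voxel component all the way up the kernel pyramid, dropping that component costs the prediction at every scale, which is the "soft topology" effect described above.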
Title: Projected pooling loss for red nucleus segmentation with soft topology constraints. Journal of Medical Imaging 11(4), 044002.
Pub Date: 2024-07-01 | Epub Date: 2024-07-09 | DOI: 10.1117/1.JMI.11.4.044502
Jenita Manokaran, Richa Mittal, Eranga Ukwatta
Purpose: Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low-dose computed tomography (LDCT) is the recommended imaging screening tool for the early detection of lung cancer. A fully automated computer-aided detection method for LDCT would greatly improve the existing clinical workflow. Most existing lung nodule detection methods are designed for high-dose CTs (HDCTs) and cannot be applied directly to LDCTs because of domain shifts and the inferior quality of LDCT images. In this work, we describe a semi-automated transfer learning-based approach for the early detection of lung nodules using LDCTs.
Approach: We developed an algorithm based on the object detection model "you only look once" (YOLO) to detect lung nodules. The YOLO model was first trained on HDCTs, and the pre-trained weights were used as initial weights when retraining the model on LDCTs using a medical-to-medical transfer learning approach. The dataset for this study came from a screening trial consisting of LDCTs acquired from 50 biopsy-confirmed lung cancer patients over three consecutive years (T1, T2, and T3). HDCTs of about 60 lung cancer patients were obtained from a public dataset. The developed model was evaluated on a hold-out test set comprising 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score. The evaluation metrics were reported patient-wise on a per-year basis and averaged over the three years. For comparative analysis, the proposed detection model was also trained using pre-trained weights from the COCO dataset as the initial weights. A paired t-test and chi-squared test with an alpha value of 0.05 were used for statistical significance testing.
Results: The proposed model developed using HDCT pre-trained weights was compared with the model using COCO pre-trained weights. The former approach versus the latter obtained a precision of 0.982 versus 0.93 in detecting cancerous nodules, a specificity of 0.923 versus 0.849 in identifying slices with no cancerous nodules, a recall of 0.87 versus 0.886, and an F1-score of 0.924 versus 0.903. As the nodules progressed, the former approach achieved a precision of 1, a specificity of 0.92, and a sensitivity of 0.930. The statistical analysis resulted in a p-value of 0.0054 for precision and a p-value of 0.00034 for specificity.
Conclusions: A semi-automated method was developed to detect lung nodules in LDCTs using HDCT pre-trained weights as the initial weights and retraining the model; the results were then compared by replacing the HDCT pre-trained weights with COCO pre-trained weights. The proposed method may identify early lung nodules during the screening program, re
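The slice-level evaluation metrics reported above follow directly from a confusion matrix. The sketch below uses hypothetical counts, not the study's data.

```python
def detection_metrics(tp, fp, tn, fn):
    """Precision, recall (sensitivity), specificity, and F1 from
    slice-level true/false positive/negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f1

# Hypothetical counts for a test set with 100 positive slices (not the paper's).
precision, recall, specificity, f1 = detection_metrics(tp=87, fp=2, tn=60, fn=13)
```

Note that F1 combines only precision and recall, so the specificity gap between the two weight initializations is reported separately in the abstract.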
Title: Pulmonary nodule detection in low dose computed tomography using a medical-to-medical transfer learning approach. Journal of Medical Imaging 11(4), 044502.
Purpose: We aim to develop modified clinical indication (CI)-based image quality scoring criteria (IQSC) for assessing image quality (IQ) and establishing acceptable quality doses (AQDs) in adult computed tomography (CT) examinations, based on CIs and patient sizes.
Approach: CT images, volume CT dose index (CTDIvol), and dose-length product (DLP) were collected retrospectively between September 2020 and September 2021 for eight common CIs from two CT scanners at a central hospital in the Kingdom of Bahrain. Using the modified CI-based IQSC and a Likert scale (0 to 4), three radiologists assessed the IQ of each examination. AQDs were then established as the median values of CTDIvol and DLP for images with an average score of 3 and compared to national diagnostic reference levels (NDRLs).
Results: Out of 581 examinations, 60 were excluded from the study due to average scores above or below 3. The established AQDs were lower than the NDRLs for all CIs, except AQDs/CTDIvol for oncologic follow-up for large patients (28 versus 26 mGy) in scanner A, besides abdominal pain for medium patients (16 versus 15 mGy) and large patients (34 versus 27 mGy), and diverticulitis/appendicitis for medium patients (15 versus 12 mGy) and large patients (33 versus 30 mGy) in scanner B, indicating the need for optimization.
Conclusions: CI-based IQSC is crucial for IQ assessment and establishing AQDs according to patient size. It identifies stations requiring optimization of patient radiation exposure.
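The AQD construction above, the median CTDIvol and DLP over examinations whose average score is exactly 3, can be sketched as follows; the function name and all data are illustrative assumptions.

```python
import numpy as np

def acceptable_quality_dose(ctdi_vol, dlp, mean_scores):
    """AQD per the text: median CTDIvol and DLP over exams whose average
    image-quality score is 3 (scores above or below 3 are excluded)."""
    ctdi_vol, dlp = np.asarray(ctdi_vol, float), np.asarray(dlp, float)
    keep = np.asarray(mean_scores, float) == 3
    return float(np.median(ctdi_vol[keep])), float(np.median(dlp[keep]))

# Four hypothetical exams; the last is excluded (average score 2).
aqd_ctdi, aqd_dlp = acceptable_quality_dose(
    [10, 20, 30, 40], [300, 500, 700, 900], [3, 3, 3, 2])
# aqd_ctdi == 20.0 (mGy), aqd_dlp == 500.0 (mGy*cm)
```

Comparing the resulting medians against the NDRLs per CI and patient-size group then flags the combinations needing optimization.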
Title: Assessment of image quality and establishment of local acceptable quality dose for computed tomography based on patient effective diameter. Journal of Medical Imaging 11(4), 043502.
Pub Date: 2024-07-01 | Epub Date: 2024-08-28 | DOI: 10.1117/1.JMI.11.4.045504
Sarah J Lewis, Jayden B Wells, Warren M Reed, Claudia Mello-Thoms, Peter A O'Reilly, Marion Dimigen
Purpose: Reporting templates for chest radiographs (CXRs) of patients presenting with, or being clinically managed for, severe acute respiratory syndrome coronavirus 2 [coronavirus disease 2019 (COVID-19)] have attracted advocacy from international radiology societies. We aim to explore the effectiveness and useability of three international templates through the concordance of, and between, radiologists reporting on the presence and severity of COVID-19 on CXRs.
Approach: Seventy CXRs were obtained from a referral hospital: 50 from patients with COVID-19 (30 rated "classic" COVID-19 appearance and 20 "indeterminate"), plus 10 "normal" and 10 "alternative pathology" CXRs. The recruited radiologists were assigned to three test sets with the same CXRs but with different template orders. Each radiologist read their test set three times and classified each CXR using the Royal Australian and New Zealand College of Radiologists (RANZCR), British Society of Thoracic Imaging (BSTI), and Modified COVID-19 Reporting and Data System (Dutch; mCO-RADS) templates. Inter-reader and intra-reader variability were measured using Fleiss' kappa coefficient.
Results: Twelve Australian radiologists participated. The BSTI template had the highest inter-reader agreement (0.46; "moderate" agreement), followed by RANZCR (0.45) and mCO-RADS (0.32). Concordance was driven by strong agreement in "normal" and "alternative" classifications and was lowest for "indeterminate." General consistency was observed across classifications and templates, with intra-reader variability ranging from "good" to "very good" for COVID-19 CXRs (0.61), "normal" CXRs (0.76), and "alternative" (0.68).
Conclusions: Reporting templates may be useful in reducing variation among radiology reports, with intra-reader variability showing promise. Feasibility and implementation require a wider approach including referring and treating doctors plus the development of training packages for radiologists specific to the template being used.
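Fleiss' kappa, used above for both inter- and intra-reader agreement, is a standard chance-corrected statistic over a subjects-by-categories count matrix. A NumPy sketch (not the authors' code):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa from an (n_subjects x n_categories) matrix whose
    entries count how many raters assigned each category; every subject
    must be rated by the same number of raters."""
    counts = np.asarray(counts, float)
    n_sub = counts.shape[0]
    n_rat = counts[0].sum()
    p_cat = counts.sum(axis=0) / (n_sub * n_rat)          # category prevalence
    p_sub = (np.square(counts).sum(axis=1) - n_rat) / (n_rat * (n_rat - 1))
    p_bar, p_exp = p_sub.mean(), np.square(p_cat).sum()   # observed vs chance
    return float((p_bar - p_exp) / (1.0 - p_exp))

# Three raters agreeing perfectly on three CXRs gives kappa = 1.
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # -> 1.0
```

Values near 0.46, as reported for the BSTI template, sit in the conventional "moderate agreement" band of the Landis-Koch scale.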
{"title":"Use of reporting templates for chest radiographs in a coronavirus disease 2019 context: measuring concordance of radiologists with three international templates.","authors":"Sarah J Lewis, Jayden B Wells, Warren M Reed, Claudia Mello-Thoms, Peter A O'Reilly, Marion Dimigen","doi":"10.1117/1.JMI.11.4.045504","DOIUrl":"https://doi.org/10.1117/1.JMI.11.4.045504","url":null,"abstract":"<p><strong>Purpose: </strong>Reporting templates for chest radiographs (CXRs) for patients presenting or being clinically managed for severe acute respiratory syndrome coronavirus 2 [coronavirus disease 2019 (COVID-19)] has attracted advocacy from international radiology societies. We aim to explore the effectiveness and useability of three international templates through the concordance of, and between, radiologists reporting on the presence and severity of COVID-19 on CXRs.</p><p><strong>Approach: </strong>Seventy CXRs were obtained from a referral hospital, 50 from patients with COVID-19 (30 rated \"classic\" COVID-19 appearance and 20 \"indeterminate\") and 10 \"normal\" and 10 \"alternative pathology\" CXRs. The recruited radiologists were assigned to three test sets with the same CXRs but with different template orders. Each radiologist read their test set three times and assigned a classification to the CXR using the Royal Australian New Zealand College of Radiology (RANZCR), British Society of Thoracic Imaging (BSTI), and Modified COVID-19 Reporting and Data System (Dutch; mCO-RADS) templates. Inter-reader variability and intra-reader variability were measured using Fleiss' kappa coefficient.</p><p><strong>Results: </strong>Twelve Australian radiologists participated. The BSTI template had the highest inter-reader agreement (0.46; \"moderate\" agreement), followed by RANZCR (0.45) and mCO-RADS (0.32). 
Concordance was driven by strong agreement in \"normal\" and \"alternative\" classifications and was lowest for \"indeterminate.\" General consistency was observed across classifications and templates, with intra-reader variability ranging from \"good\" to \"very good\" for COVID-19 CXRs (0.61), \"normal\" CXRs (0.76), and \"alternative\" (0.68).</p><p><strong>Conclusions: </strong>Reporting templates may be useful in reducing variation among radiology reports, with intra-reader variability showing promise. Feasibility and implementation require a wider approach including referring and treating doctors plus the development of training packages for radiologists specific to the template being used.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"045504"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11349612/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142113460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
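The inter- and intra-reader agreement figures quoted above are Fleiss' kappa values, which can be computed directly from a subjects-by-categories table of rating counts. The sketch below is an illustrative numpy implementation, not the study's analysis code; `counts[i][j]` holds how many raters assigned subject `i` to category `j`.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (n_subjects x n_categories) table of rating
    counts; each row sums to the (constant) number of raters per subject."""
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]
    # Observed agreement: mean over subjects of the pairwise rater agreement.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # Chance agreement from the overall category proportions.
    p_j = counts.sum(axis=0) / counts.sum()
    p_e = np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)
```

On the commonly used Landis and Koch bands, a kappa of 0.46 (as reported for the BSTI template) falls in the 0.41 to 0.60 "moderate" range.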
Pub Date : 2024-07-01Epub Date: 2024-07-12DOI: 10.1117/1.JMI.11.4.044503
Hinrich Rahlfs, Markus Hüllebrand, Sebastian Schmitter, Christoph Strecker, Andreas Harloff, Anja Hennemuth
Purpose: Atherosclerosis of the carotid artery is a major risk factor for stroke. Quantitative assessment of the carotid vessel wall can be based on cross-sections of three-dimensional (3D) black-blood magnetic resonance imaging (MRI). To increase reproducibility, a reliable automatic segmentation in these cross-sections is essential.
Approach: We propose an automatic segmentation of the carotid artery in cross-sections perpendicular to the centerline to make the segmentation invariant to the image plane orientation and allow a correct assessment of the vessel wall thickness (VWT). We trained a residual U-Net on eight sparsely sampled cross-sections per carotid artery and evaluated whether the model can segment areas that are not represented in the training data. We used 218 MRI datasets of 121 subjects with hypertension and plaque measuring ≥1.5 mm on ultrasound in the ICA or CCA.
Results: The model achieves a high mean Dice coefficient of 0.948/0.859 for the vessel's lumen/wall, a low mean Hausdorff distance of 0.417/0.660 mm, and a low mean average contour distance of 0.094/0.119 mm on the test set. The model reaches similar results for regions of the carotid artery that are not incorporated in the training set and on MRI of young, healthy subjects. The model also achieves a low median Hausdorff distance of 0.437/0.552 mm on the 2021 Carotid Artery Vessel Wall Segmentation Challenge test set.
Conclusions: The proposed method can reduce the effort for carotid artery vessel wall assessment. Together with human supervision, it can be used for clinical applications, as it allows a reliable measurement of the VWT for different patient demographics and MRI acquisition settings.
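The Dice coefficient and Hausdorff distance used above to evaluate the lumen and wall segmentations can be sketched for binary masks as follows. These are illustrative numpy helpers under an assumed isotropic pixel spacing, not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """2*|A∩B| / (|A| + |B|) for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hausdorff_distance(pred, gt, spacing=1.0):
    """Symmetric Hausdorff distance between two masks, in the units of
    `spacing` (e.g. mm per pixel); brute force over foreground pixels."""
    p = np.argwhere(pred).astype(float) * spacing
    g = np.argwhere(gt).astype(float) * spacing
    d = np.linalg.norm(p[:, None, :] - g[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice the distances are usually computed on contour points with the scanner's physical voxel spacing, which is what makes the millimeter values above comparable across acquisitions.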
{"title":"Learning carotid vessel wall segmentation in black-blood MRI using sparsely sampled cross-sections from 3D data.","authors":"Hinrich Rahlfs, Markus Hüllebrand, Sebastian Schmitter, Christoph Strecker, Andreas Harloff, Anja Hennemuth","doi":"10.1117/1.JMI.11.4.044503","DOIUrl":"10.1117/1.JMI.11.4.044503","url":null,"abstract":"<p><strong>Purpose: </strong>Atherosclerosis of the carotid artery is a major risk factor for stroke. Quantitative assessment of the carotid vessel wall can be based on cross-sections of three-dimensional (3D) black-blood magnetic resonance imaging (MRI). To increase reproducibility, a reliable automatic segmentation in these cross-sections is essential.</p><p><strong>Approach: </strong>We propose an automatic segmentation of the carotid artery in cross-sections perpendicular to the centerline to make the segmentation invariant to the image plane orientation and allow a correct assessment of the vessel wall thickness (VWT). We trained a residual U-Net on eight sparsely sampled cross-sections per carotid artery and evaluated if the model can segment areas that are not represented in the training data. We used 218 MRI datasets of 121 subjects that show hypertension and plaque in the ICA or CCA measuring <math><mrow><mo>≥</mo> <mn>1.5</mn> <mtext> </mtext> <mi>mm</mi></mrow> </math> in ultrasound.</p><p><strong>Results: </strong>The model achieves a high mean Dice coefficient of 0.948/0.859 for the vessel's lumen/wall, a low mean Hausdorff distance of <math><mrow><mn>0.417</mn> <mo>/</mo> <mn>0.660</mn> <mtext> </mtext> <mi>mm</mi></mrow> </math> , and a low mean average contour distance of <math><mrow><mn>0.094</mn> <mo>/</mo> <mn>0.119</mn> <mtext> </mtext> <mi>mm</mi></mrow> </math> on the test set. The model reaches similar results for regions of the carotid artery that are not incorporated in the training set and on MRI of young, healthy subjects. 
The model also achieves a low median Hausdorff distance of <math><mrow><mn>0.437</mn> <mo>/</mo> <mn>0.552</mn> <mtext> </mtext> <mi>mm</mi></mrow> </math> on the 2021 Carotid Artery Vessel Wall Segmentation Challenge test set.</p><p><strong>Conclusions: </strong>The proposed method can reduce the effort for carotid artery vessel wall assessment. Together with human supervision, it can be used for clinical applications, as it allows a reliable measurement of the VWT for different patient demographics and MRI acquisition settings.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044503"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11245174/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141617377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-01Epub Date: 2024-07-17DOI: 10.1117/1.JMI.11.4.044003
Arpitha Ravi, Philipp Bernhardt, Mathis Hoffmann, Richard Obler, Cuong Nguyen, Andreas Berting, René Chapot, Andreas Maier
Purpose: Monitoring radiation dose and time parameters during radiological interventions is crucial, especially in neurointerventional procedures such as aneurysm treatment with embolization coils. The presented algorithm detects these embolization coils in medical images and establishes a bounding box as a reference for automated collimation, with the primary objective of enhancing the efficiency and safety of neurointerventional procedures by actively optimizing image quality while minimizing patient dose.
Methods: Two distinct methodologies are evaluated in our study. The first involves deep learning, employing the Faster R-CNN model with a ResNet-50 FPN as a backbone and a RetinaNet model. The second method utilizes a classical blob detection approach, serving as a benchmark for comparison.
Results: We performed a fivefold cross-validation, and our top-performing model achieved a mean mAP@75 of 0.84 across all folds on the validation data and a mean mAP@75 of 0.94 on independent test data. Since we use an upscaled bounding box, achieving 100% overlap between ground truth and prediction is not necessary. To highlight the real-world applications of our algorithm, we conducted a simulation featuring a coil constructed from an alloy wire, effectively showcasing the implementation of automatic collimation. This resulted in a notable reduction in the dose area product, signifying a reduction of stochastic risks for both patients and medical staff by minimizing scatter radiation. Additionally, our algorithm assists in avoiding extreme brightness or darkness in X-ray angiography images during narrow collimation, ultimately streamlining the collimation process for physicians.
Conclusion: To our knowledge, this is the first approach to successfully detect embolization coils, and it showcases the extended applications of integrating detection results into the X-ray angiography system. The presented method has the potential for broader application and could be extended to detect other medical objects used in interventional procedures.
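The mAP@75 metric reported above counts a predicted bounding box as a true positive when its intersection-over-union (IoU) with the ground truth reaches 0.75, which is why 100% overlap is not required. A minimal sketch of that matching check, with boxes as (x1, y1, x2, y2) tuples (illustrative helpers, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to 0 when the boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def is_match(pred, gt, thr=0.75):
    """True-positive test at the mAP@75 threshold."""
    return iou(pred, gt) >= thr
```

mAP@75 then averages precision over recall levels (and classes) using this matching rule; the snippet shows only the per-box criterion.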
{"title":"Optimizing neurointerventional procedures: an algorithm for embolization coil detection and automated collimation to enable dose reduction.","authors":"Arpitha Ravi, Philipp Bernhardt, Mathis Hoffmann, Richard Obler, Cuong Nguyen, Andreas Berting, René Chapot, Andreas Maier","doi":"10.1117/1.JMI.11.4.044003","DOIUrl":"10.1117/1.JMI.11.4.044003","url":null,"abstract":"<p><strong>Purpose: </strong>Monitoring radiation dose and time parameters during radiological interventions is crucial, especially in neurointerventional procedures, such as aneurysm treatment with embolization coils. The algorithm presented detects the presence of these embolization coils in medical images. It establishes a bounding box as a reference for automated collimation, with the primary objective being to enhance the efficiency and safety of neurointerventional procedures by actively optimizing image quality while minimizing patient dose.</p><p><strong>Methods: </strong>Two distinct methodologies are evaluated in our study. The first involves deep learning, employing the Faster R-CNN model with a ResNet-50 FPN as a backbone and a RetinaNet model. The second method utilizes a classical blob detection approach, serving as a benchmark for comparison.</p><p><strong>Results: </strong>We performed a fivefold cross-validation, and our top-performing model achieved mean mAP@75 of 0.84 across all folds on validation data and mean mAP@75 of 0.94 on independent test data. Since we use an upscaled bounding box, achieving 100% overlap between ground truth and prediction is not necessary. To highlight the real-world applications of our algorithm, we conducted a simulation featuring a coil constructed from an alloy wire, effectively showcasing the implementation of automatic collimation. This resulted in a notable reduction in the dose area product, signifying the reduction of stochastic risks for both patients and medical staff by minimizing scatter radiation. 
Additionally, our algorithm assists in avoiding extreme brightness or darkness in X-ray angiography images during narrow collimation, ultimately streamlining the collimation process for physicians.</p><p><strong>Conclusion: </strong>To our knowledge, this marks the initial attempt at an approach successfully detecting embolization coils, showcasing the extended applications of integrating detection results into the X-ray angiography system. The method we present has the potential for broader application, allowing its extension to detect other medical objects utilized in interventional procedures.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"044003"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11259374/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141735411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-07-01Epub Date: 2024-07-19DOI: 10.1117/1.JMI.11.4.046001
Maksym Sharma, Miranda Kirby, Aaron Fenster, David G McCormack, Grace Parraga
<p><strong>Purpose: </strong>Our objective was to train machine-learning algorithms on hyperpolarized <math> <mrow> <mmultiscripts><mrow><mi>He</mi></mrow> <mprescripts></mprescripts> <none></none> <mrow><mn>3</mn></mrow> </mmultiscripts> </mrow> </math> magnetic resonance imaging (MRI) datasets to generate models of accelerated lung function decline in participants with and without chronic-obstructive-pulmonary-disease. We hypothesized that hyperpolarized gas MRI ventilation, machine-learning, and multivariate modeling could be combined to predict clinically-relevant changes in forced expiratory volume in 1 s ( <math> <mrow><msub><mi>FEV</mi> <mn>1</mn></msub> </mrow> </math> ) across 3 years.</p><p><strong>Approach: </strong>Hyperpolarized <math> <mrow> <mmultiscripts><mrow><mi>He</mi></mrow> <mprescripts></mprescripts> <none></none> <mrow><mn>3</mn></mrow> </mmultiscripts> </mrow> </math> MRI was acquired using a coronal Cartesian fast gradient recalled echo sequence with a partial echo and segmented using a k-means clustering algorithm. A maximum entropy mask was used to generate a region-of-interest for texture feature extraction using a custom-developed algorithm and the PyRadiomics platform. The principal component and Boruta analyses were used for feature selection. 
Ensemble-based and single machine-learning classifiers were evaluated using area-under-the-receiver-operator-curve and sensitivity-specificity analysis.</p><p><strong>Results: </strong>We evaluated 88 ex-smoker participants with <math><mrow><mn>31</mn> <mo>±</mo> <mn>7</mn></mrow> </math> months follow-up data, 57 of whom (22 females/35 males, <math><mrow><mn>70</mn> <mo>±</mo> <mn>9</mn></mrow> </math> years) had negligible changes in <math> <mrow><msub><mi>FEV</mi> <mn>1</mn></msub> </mrow> </math> and 31 participants (7 females/24 males, <math><mrow><mn>68</mn> <mo>±</mo> <mn>9</mn></mrow> </math> years) with worsening <math> <mrow> <msub><mrow><mi>FEV</mi></mrow> <mrow><mn>1</mn></mrow> </msub> <mo>≥</mo> <mn>60</mn> <mtext> </mtext> <mi>mL</mi> <mo>/</mo> <mtext>year</mtext></mrow> </math> . In addition, 3/88 ex-smokers reported a change in smoking status. We generated machine-learning models to predict <math> <mrow><msub><mi>FEV</mi> <mn>1</mn></msub> </mrow> </math> decline using demographics, spirometry, and texture features, with the latter yielding the highest classification accuracy of 81%. The combined model (trained on all available measurements) achieved the overall best classification accuracy of 82%; however, it was not significantly different from the model trained on MRI texture features alone.</p><p><strong>Conclusion: </strong>For the first time, we have employed hyperpolarized <math> <mrow> <mmultiscripts><mrow><mi>He</mi></mrow> <mprescripts></mprescripts> <none></none> <mrow><mn>3</mn></mrow> </mmultiscripts> </mrow> </math> MRI ventilation texture features and machine-learning to identify ex-smokers with accelerated decline in <math> <mrow><msub><mi>FEV</mi> <mn>1</mn></msub> </mrow> </math> .</p>
{"title":"Machine learning and magnetic resonance image texture analysis predicts accelerated lung function decline in ex-smokers with and without chronic obstructive pulmonary disease.","authors":"Maksym Sharma, Miranda Kirby, Aaron Fenster, David G McCormack, Grace Parraga","doi":"10.1117/1.JMI.11.4.046001","DOIUrl":"10.1117/1.JMI.11.4.046001","url":null,"abstract":"<p><strong>Purpose: </strong>Our objective was to train machine-learning algorithms on hyperpolarized <math> <mrow> <mmultiscripts><mrow><mi>He</mi></mrow> <mprescripts></mprescripts> <none></none> <mrow><mn>3</mn></mrow> </mmultiscripts> </mrow> </math> magnetic resonance imaging (MRI) datasets to generate models of accelerated lung function decline in participants with and without chronic-obstructive-pulmonary-disease. We hypothesized that hyperpolarized gas MRI ventilation, machine-learning, and multivariate modeling could be combined to predict clinically-relevant changes in forced expiratory volume in 1 s ( <math> <mrow><msub><mi>FEV</mi> <mn>1</mn></msub> </mrow> </math> ) across 3 years.</p><p><strong>Approach: </strong>Hyperpolarized <math> <mrow> <mmultiscripts><mrow><mi>He</mi></mrow> <mprescripts></mprescripts> <none></none> <mrow><mn>3</mn></mrow> </mmultiscripts> </mrow> </math> MRI was acquired using a coronal Cartesian fast gradient recalled echo sequence with a partial echo and segmented using a k-means clustering algorithm. A maximum entropy mask was used to generate a region-of-interest for texture feature extraction using a custom-developed algorithm and the PyRadiomics platform. The principal component and Boruta analyses were used for feature selection. 
Ensemble-based and single machine-learning classifiers were evaluated using area-under-the-receiver-operator-curve and sensitivity-specificity analysis.</p><p><strong>Results: </strong>We evaluated 88 ex-smoker participants with <math><mrow><mn>31</mn> <mo>±</mo> <mn>7</mn></mrow> </math> months follow-up data, 57 of whom (22 females/35 males, <math><mrow><mn>70</mn> <mo>±</mo> <mn>9</mn></mrow> </math> years) had negligible changes in <math> <mrow><msub><mi>FEV</mi> <mn>1</mn></msub> </mrow> </math> and 31 participants (7 females/24 males, <math><mrow><mn>68</mn> <mo>±</mo> <mn>9</mn></mrow> </math> years) with worsening <math> <mrow> <msub><mrow><mi>FEV</mi></mrow> <mrow><mn>1</mn></mrow> </msub> <mo>≥</mo> <mn>60</mn> <mtext> </mtext> <mi>mL</mi> <mo>/</mo> <mtext>year</mtext></mrow> </math> . In addition, 3/88 ex-smokers reported a change in smoking status. We generated machine-learning models to predict <math> <mrow><msub><mi>FEV</mi> <mn>1</mn></msub> </mrow> </math> decline using demographics, spirometry, and texture features, with the later yielding the highest classification accuracy of 81%. 
The combined model (trained on all available measurements) achieved the overall best classification accuracy of 82%; however, it was not significantly different from the model trained on MRI texture features alone.</p><p><strong>Conclusion: </strong>For the first time, we have employed hyperpolarized <math> <mrow> <mmultiscripts><mrow><mi>He</mi></mrow> <mprescripts></mprescripts> <none></none> <mrow><mn>3</mn></mrow> </mmultiscripts> </mrow> </math> MRI ventilation texture features and machine-learning to identify ex-smokers with accelerated decline in <math> <mrow><msub><mi>FEV","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 4","pages":"046001"},"PeriodicalIF":1.9,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11259551/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141735410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
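The segmentation step described in the Approach above relies on k-means clustering of ventilation image intensities. As an illustration only (the study segmented a coronal fast gradient recalled echo acquisition and extracted features via PyRadiomics; this is not their pipeline), a minimal 1-D k-means over voxel intensities might look like:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=100, seed=0):
    """Minimal 1-D k-means over a flat array of voxel intensities.
    Returns the cluster centers and a per-voxel label array."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct observed intensities.
    centers = rng.choice(np.unique(values), size=k, replace=False).astype(float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        # Assignment step: nearest center per voxel.
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        # Update step: centers move to the mean of their assigned voxels.
        new = np.array([values[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

In ventilation MRI the resulting intensity clusters are typically ordered so that the lowest-intensity cluster approximates ventilation defects; here the function simply returns the converged centers.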