Objectives: The aim of this study was to evaluate the optimal energy level of virtual monoenergetic images (VMIs) from photon-counting detector computed tomography (CT) for the detection of liver lesions as a function of phantom size and radiation dose.
Materials and methods: An anthropomorphic abdominal phantom with liver parenchyma and lesions was imaged on a dual-source photon-counting detector CT at 120 kVp. Five hypoattenuating lesions with lesion-to-background contrast differences of -30 HU and -45 HU and 3 hyperattenuating lesions with contrast differences of +30 HU and +90 HU were used. The lesion diameter was 5-10 mm. Rings of fat-equivalent material were added to emulate medium- and large-sized patients. The medium size was imaged at volume CT dose indices of 5, 2.5, and 1.25 mGy and the large size at 5 and 2.5 mGy, respectively. Each setup was imaged 10 times. For each setup, VMIs from 40 to 80 keV in 5-keV increments were reconstructed with quantum iterative reconstruction at a strength level of 4 (QIR-4). Lesion detectability was measured as the area under the receiver operating characteristic curve (AUC) using a channelized Hotelling model observer with 10 dense difference-of-Gaussian channels.
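The detectability pipeline above can be sketched in a few lines: reduce each image to channel responses, build the Hotelling template from the channel statistics, and score detectability as the AUC of the resulting decision variables. The channel parameters (`sigma0`, `alpha`, `q`) and the toy lesion/noise data below are illustrative assumptions, not the study's values.

```python
import numpy as np

def ddog_channels(size, n_channels=10, sigma0=0.6, alpha=1.4, q=1.67):
    """Dense difference-of-Gaussian (DDoG) channel set.
    sigma0, alpha, and q are illustrative values, not the study's."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = (x**2 + y**2).astype(float)
    chans = []
    for j in range(n_channels):
        s = sigma0 * alpha**j
        chans.append(np.exp(-r2 / (2 * (q * s) ** 2)) - np.exp(-r2 / (2 * s**2)))
    return np.array([c.ravel() for c in chans])  # (n_channels, size*size)

def cho_auc(present, absent, channels):
    """Channelized Hotelling observer: channelize, apply the Hotelling
    template, and score detectability as the AUC of the decision variables."""
    vp = present.reshape(len(present), -1) @ channels.T
    va = absent.reshape(len(absent), -1) @ channels.T
    mean_diff = vp.mean(axis=0) - va.mean(axis=0)
    cov = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    w = np.linalg.solve(cov, mean_diff)  # Hotelling template
    tp, ta = vp @ w, va @ w
    # AUC = P(score of a lesion-present image > score of a lesion-absent one)
    return (tp[:, None] > ta[None, :]).mean()

# Toy demonstration: Gaussian-blob "lesion" in white noise
rng = np.random.default_rng(7)
size, n = 32, 200
y, x = np.mgrid[:size, :size] - size // 2
lesion = 2.0 * np.exp(-(x**2 + y**2) / (2 * 2.5**2))
absent = rng.normal(0, 1, (n, size, size))
present = rng.normal(0, 1, (n, size, size)) + lesion
auc = cho_auc(present, absent, ddog_channels(size))
```

With a high-contrast toy lesion the observer separates the two classes almost perfectly; lowering the lesion amplitude or raising the noise level degrades the AUC, mimicking the dose dependence measured in the study.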
Results: Overall, the highest detectability was found at 65 and 70 keV for both hypoattenuating and hyperattenuating lesions in the medium and large phantoms, independent of radiation dose (AUC range, 0.91-1.0 for the medium and 0.94-0.99 for the large phantom, respectively). The lowest detectability was found at 40 keV irrespective of radiation dose and phantom size (AUC range, 0.78-0.99). A more pronounced reduction in detectability was apparent at 40-50 keV than at 65-75 keV when radiation dose was decreased. At equal radiation dose, detectability varied more strongly with VMI energy for the large phantom than for the medium-sized phantom (12% vs 6%).
Conclusions: Detectability of hypoattenuating and hyperattenuating liver lesions differed between VMI energies for different phantom sizes and radiation doses. Virtual monoenergetic images at 65 and 70 keV yielded the highest detectability independent of phantom size and radiation dose.
Objectives: The Centers for Medicare and Medicaid Services funded the development of a computed tomography (CT) quality measure for use in pay-for-performance programs, which balances automated assessments of radiation dose with image quality to incentivize dose reduction without compromising the diagnostic utility of the tests. However, no existing quantitative method for assessing CT image quality has been validated against radiologists' image quality assessments on a large number of CT examinations. Thus, to develop an automated measure of image quality, we tested the relationship between radiologists' subjective ratings of image quality and measurements of radiation dose and image noise.
Materials and methods: Board-certified, posttraining, clinically active radiologists rated the image quality of 200 diagnostic CT examinations from a set of 734, representing 14 CT categories. Examinations with significant distractions, motion, or artifact were excluded. Radiologists rated diagnostic image quality as excellent, adequate, marginally acceptable, or poor; the latter 2 were considered unacceptable for rendering diagnoses. We quantified the relationship of ratings with image noise and radiation dose by analyzing, per category, the odds of an acceptable rating per standard deviation (SD) increase in noise or per geometric SD (gSD) increase in dose.
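The odds-per-SD analysis described above can be illustrated with an ordinary logistic regression on a standardized predictor; exponentiating the fitted coefficient gives the odds ratio per SD. The data, coefficient values, and variable names below are hypothetical, not the study's.

```python
import numpy as np

def logistic_fit(x, y, iters=25):
    """Logistic regression (intercept + one predictor) via Newton-Raphson."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                       # score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta

# Hypothetical ratings: acceptability falls as image noise rises.
rng = np.random.default_rng(0)
noise_hu = rng.normal(30, 8, 2000)                       # image noise (HU)
z_noise = (noise_hu - noise_hu.mean()) / noise_hu.std()  # per-SD scale
true_logit = 1.5 - 0.8 * z_noise
acceptable = (rng.random(2000) < 1 / (1 + np.exp(-true_logit))).astype(float)
beta = logistic_fit(z_noise, acceptable)
odds_ratio_per_sd = np.exp(beta[1])  # <1: higher noise, lower odds of "acceptable"
# For dose, the abstract uses the geometric SD: standardize log(dose) instead.
```

The same fit applied to standardized log(dose) yields the odds ratio per gSD increase in dose reported in the Results.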
Results: One hundred twenty-five radiologists contributed 24,800 ratings. Most (89%) were acceptable. The odds of an examination being rated acceptable statistically significantly increased per gSD increase in dose and decreased per SD increase in noise for most categories, including routine dose head, chest, and abdomen-pelvis, which together comprise 60% of examinations performed in routine practice. For routine dose abdomen-pelvis, the most common category, each gSD increase in dose raised the odds of an acceptable rating (odds ratio, 2.33; 95% confidence interval, 1.98-3.24), whereas each SD increase in noise decreased the odds (odds ratio, 0.90; 95% confidence interval, 0.79-0.99). For only 2 CT categories, high-dose head and neck/cervical spine, neither dose nor noise was associated with ratings.
Conclusions: Radiation dose and image noise correlate with radiologists' image quality assessments for most CT categories, making them suitable as automated metrics in quality programs incentivizing reduction of excessive radiation doses.
Background: Neoadjuvant therapy regimens have significantly improved the prognosis of gastroesophageal junction (GEJ) cancer; however, a significant percentage of patients would benefit from earlier resection or adapted therapy regimens, and the true response rate can only be determined histopathologically. Methods that allow preoperative assessment of response are lacking.
Purpose: The purpose of this retrospective study is to assess the potential of pretherapeutic and posttherapeutic spectral CT iodine density (IoD) in predicting histopathological response to neoadjuvant chemotherapy in patients diagnosed with adenocarcinoma of the GEJ.
Methods: In this retrospective cohort study, a total of 62 patients with GEJ carcinoma were studied. Patients received a multiphasic CT scan at diagnosis and preoperatively. Iodine-density maps were generated based on spectral CT data. All tumors were histopathologically analyzed, and the tumor regression grade (TRG) according to Becker et al (Cancer. 2003;98:1521-1530) was determined. Two experienced, blinded radiologists placed 5 defined regions of interest (ROIs) in the tumor region of highest density, and the maximum value was used for further analysis. Iodine density was normalized to the aortic iodine uptake. In addition, tumor response was assessed according to standard RECIST measurements. After assessing interrater reliability, the correlation of IoD values with treatment response and with histopathologic TRG was evaluated.
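A minimal sketch of the analysis above: normalize tumor iodine density to the aorta, then score responder discrimination with an empirical ROC curve and a Youden-optimal cutoff. All numeric values below are hypothetical illustrations, not the study's measurements.

```python
import numpy as np

def normalized_iod(tumor_iod_max, aortic_iod):
    """Normalize the maximum tumor ROI iodine density to the aortic uptake."""
    return tumor_iod_max / aortic_iod

def roc_auc_and_cutoff(scores, labels):
    """Empirical ROC AUC plus Youden-optimal cutoff (labels: 1 = responder)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    auc = (pos[:, None] > neg[None, :]).mean()
    best_cut, best_j = None, -1.0
    for c in np.sort(scores):                 # candidate cutoffs
        sens = (pos >= c).mean()
        spec = (neg < c).mean()
        if sens + spec - 1 > best_j:          # Youden index J
            best_j, best_cut = sens + spec - 1, c
    return auc, best_cut

# Hypothetical cohort: responders lose more iodine uptake under therapy,
# so their normalized delta-IoD (pre minus post) is larger.
rng = np.random.default_rng(1)
delta_iod = np.concatenate([rng.normal(0.25, 0.05, 13),   # responders
                            rng.normal(0.05, 0.05, 25)])  # nonresponders
labels = np.concatenate([np.ones(13), np.zeros(25)]).astype(int)
auc, cutoff = roc_auc_and_cutoff(delta_iod, labels)
```

The cutoff maximizing sensitivity + specificity - 1 plays the same role as the 0.266 threshold reported for post-treatment IoD in the Results.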
Results: The normalized ΔIoD (IoD at diagnosis - IoD after neoadjuvant treatment) and the normalized IoD after neoadjuvant treatment correlated significantly with the TRG. For the detection of responders and nonresponders, the receiver operating characteristic (ROC) curve for normalized ΔIoD yielded the highest area under the curve of 0.95 and achieved a sensitivity and specificity of 92.3% and 92.1%, respectively. Iodine density after neoadjuvant treatment achieved an area under the curve of 0.88 and a sensitivity and specificity of 86.8% and 84.6%, respectively (cutoff, 0.266). Iodine density at diagnosis and RECIST did not provide information to distinguish responders from nonresponders. Using the cutoff value for IoD after neoadjuvant treatment, a reliable classification of responders and nonresponders was achieved for both readers in a test set of 11 patients. Interrater reliability was excellent (intraclass correlation coefficient, >0.9). Lastly, with the cutoff value for normalized ΔIoD used as the definition of treatment response, responders showed significantly longer survival.
Conclusions: Changes in IoD after neoadjuvant treatment of GEJ cancer may be a potential surrogate for therapy response.
Objectives: Reducing gadolinium-based contrast agent doses to lower costs, the environmental impact of gadolinium-containing wastewater, and patient exposure remains an unresolved issue, and published methods have never been compared with one another. The purpose of this study was to compare the performance of 2 reimplemented state-of-the-art deep learning methods (settings A and B) and a proposed method for contrast signal extraction (setting C) in synthesizing artificial T1-weighted full-dose images from corresponding noncontrast and low-dose images.
Materials and methods: In this prospective study, 213 participants underwent magnetic resonance imaging of the brain between August and October 2021, including low-dose (0.02 mmol/kg) and full-dose images (0.1 mmol/kg). Fifty participants were randomly set aside as a test set before training (mean age ± SD, 52.6 ± 15.3 years; 30 men). Artificial and true full-dose images were compared in a reader-based study. Two readers noted all false-positive lesions and scored overall interchangeability with regard to the clinical conclusion. Using a 5-point Likert scale (0 being the worst), they scored the contrast enhancement of each lesion and its conformity to the respective reference in the true image.
Results: The average counts of false-positive lesions per participant were 0.33 ± 0.93, 0.07 ± 0.33, and 0.05 ± 0.22 for settings A-C, respectively. Setting C showed a significantly higher proportion of scans scored as fully or mostly interchangeable (70/100) than settings A (40/100, P < 0.001) and B (57/100, P < 0.001) and generated the smallest mean enhancement reduction of scored lesions (-0.50 ± 0.55) compared with the true images (setting A: -1.10 ± 0.98; setting B: -0.91 ± 0.67; both P < 0.001). The average lesion conformity scores were 1.75 ± 1.07, 2.19 ± 1.04, and 2.48 ± 0.91 for settings A-C, respectively, with significant differences among all settings (all P < 0.001).
Conclusions: The proposed method for contrast signal extraction showed significant improvements in synthesizing postcontrast images. However, at this low-dose level, a relevant proportion of images still showed inadequate interchangeability with the reference.
Objectives: The aim of this study was to evaluate the potential use of simulated radiation doses from a dual-split CT scan for dose optimization by comparing their lesion detectability with that of dose-matched single-energy CT acquisitions at different radiation dose levels using a mathematical model observer.
Materials and methods: An anthropomorphic abdominal phantom with liver lesions (5-10 mm, both hyperattenuating and hypoattenuating) was imaged using a third-generation dual-source CT in single-energy dual-source mode at 100 kVp and 3 radiation doses (5, 2.5, and 1.25 mGy). The tube current was split 67% to tube A and 33% to tube B. For each dose, simulated radiation dose levels of 100%, 67%, 55%, 45%, 39%, and 33% were generated through linear image blending. The phantom was also imaged using the traditional single-source single-energy mode at equivalent doses. Each setup was repeated 10 times. Image noise texture was evaluated by the average spatial frequency (fav) of the noise power spectrum. Liver lesion detection was measured as the area under the receiver operating characteristic curve (AUC), using a channelized Hotelling model observer with 10 dense Gaussian channels.
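The blending step, and one way intermediate dose levels can arise from it, can be sketched as follows. The noise-equivalent dose model here (independent tube images whose noise variance scales as 1/dose) is our assumption for illustration, not the study's stated derivation.

```python
import numpy as np

def blend(img_a, img_b, w):
    """Linear image blending of the two dual-source tube images."""
    return w * img_a + (1.0 - w) * img_b

def effective_dose_fraction(w, frac_a=0.67, frac_b=0.33):
    """Noise-equivalent dose fraction of the blend w*A + (1-w)*B, assuming
    independent tube images with noise variance inversely proportional to
    dose (a variance-matching assumption, not the study's stated method)."""
    return 1.0 / (w**2 / frac_a + (1.0 - w) ** 2 / frac_b)
```

Under this model, w = 1 recovers the tube A image (67% dose), w = 0 the tube B image (33%), w = 0.67 reproduces full-dose (100%) noise, and w ≈ 0.15 lands near the 45% level, illustrating how a spread of simulated dose levels follows from a single dual-split acquisition.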
Results: The fav decreased at lower radiation doses and differed between simulated and single-energy images (eg, 0.16 mm-1 vs 0.14 mm-1 for simulated and single-energy images at 1.25 mGy), indicating a slightly blotchier noise texture for dual-split CT. For hyperattenuating lesions, the mean AUC ranged from 0.92 to 0.99, 0.81 to 0.96, and 0.68 to 0.89 for single-energy imaging, and from 0.91 to 0.99, 0.78 to 0.91, and 0.70 to 0.85 for dual-split imaging at 5, 2.5, and 1.25 mGy, respectively. For hypoattenuating lesions, the AUC ranged from 0.90 to 0.98, 0.75 to 0.93, and 0.69 to 0.86 for single-energy imaging, and from 0.92 to 0.99, 0.76 to 0.87, and 0.67 to 0.81 for dual-split imaging at 5, 2.5, and 1.25 mGy, respectively. AUC values were similar between both modes at 5 mGy and slightly, albeit not significantly, lower for the dual-split mode at 2.5 and 1.25 mGy.
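The fav metric reported above can be computed as the noise-power-spectrum-weighted mean radial spatial frequency of a noise-only region; a minimal sketch, assuming a hypothetical pixel size:

```python
import numpy as np

def average_spatial_frequency(noise_roi, pixel_mm=0.7):
    """fav: NPS-weighted mean radial spatial frequency (mm^-1) of a
    zero-mean noise-only ROI; the pixel size is an illustrative assumption."""
    n = noise_roi.shape[0]
    # 2D noise power spectrum (unnormalized; normalization cancels in fav)
    nps = np.abs(np.fft.fftshift(np.fft.fft2(noise_roi - noise_roi.mean()))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_mm))
    fx, fy = np.meshgrid(f, f)
    radial_f = np.hypot(fx, fy)
    return float((radial_f * nps).sum() / nps.sum())

# Blotchier (low-frequency) noise yields a lower fav than white noise.
rng = np.random.default_rng(2)
white = rng.normal(size=(64, 64))
smooth = sum(np.roll(np.roll(white, i, 0), j, 1)
             for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0  # 3x3 box filter
fav_white = average_spatial_frequency(white)
fav_smooth = average_spatial_frequency(smooth)
```

Smoothing shifts noise power toward low frequencies and lowers fav, which is the sense in which the lower fav of the simulated images indicates blotchier texture.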
Conclusions: Lesion detectability was comparable between multiple simulated radiation doses from a dual-split CT scan and dose-matched single-energy CT. Noise texture was slightly blotchier in the simulated images. Simulated doses using dual-split CT can be used to assess the impact of radiation dose reduction on lesion detectability without the need for repeated patient scans.
Abstract: In children and adults, quantitative imaging examinations determine the effectiveness of treatment for liver disease. However, pediatric liver disease differs in presentation from liver disease in adults. Children also need to be followed for a longer period from onset and have less control of their bodies, showing more movement than adults during imaging examinations, which leads to a greater need for sedation. Thus, it is essential to appropriately tailor and accurately perform noninvasive imaging tests in these younger patients. This article is an overview of updated imaging techniques used to assess liver disease quantitatively in children. Ultrasound is the common initial imaging study for diffuse liver disease in pediatric patients. In addition to preexisting echo analysis, newly developed attenuation imaging techniques have been introduced to evaluate fatty liver. Ultrasound elastography is also now actively used to evaluate liver conditions, and the broad age spectrum of the pediatric population requires care even in the selection of probes. Magnetic resonance imaging (MRI) is another important tool for evaluating liver disease, despite requiring sedation or anesthesia in young children, because it allows quantitative analysis with sequences such as fat analysis and MR elastography. In addition to ultrasound and MRI, we review quantitative imaging methods specifically for fatty liver, Wilson disease, biliary atresia, hepatic fibrosis, Fontan-associated liver disease, autoimmune hepatitis, sinusoidal obstruction syndrome, and the transplanted liver. Lastly, concerns that need to be addressed specifically for children, such as growth and motion, are summarized.
Background: Computed tomography (CT) captures the quantity, density, and distribution of subcutaneous and visceral (SAT and VAT) adipose tissue compartments. These metrics may change with age and sex.
Objective: The study aims to provide age-, sex-, and vertebral level-specific reference values for SAT on chest CT and for SAT and VAT on abdomen CT.
Materials and methods: This secondary analysis of an observational study describes SAT and VAT measurements in participants of the Framingham Heart Study without a known cancer diagnosis who underwent at least 1 of 2 CT examinations between 2002 and 2011. We used a previously validated machine learning-assisted pipeline and rigorous quality assurance to segment SAT at the fifth, eighth, and tenth thoracic vertebrae (T5, T8, T10) and SAT and VAT at the third lumbar vertebra (L3). For each metric, we measured cross-sectional area (cm2) and mean attenuation (Hounsfield units [HU]) and calculated the index (area/height2; cm2/m2) and gauge (attenuation × index; HU × cm2/m2). We summarized body composition metrics by age and sex and modeled sex-, age-, and vertebral level-specific reference curves.
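The index and gauge definitions above amount to two one-line formulas; a minimal sketch with a hypothetical participant:

```python
def adipose_metrics(area_cm2, mean_hu, height_m):
    """Index = cross-sectional area normalized by height squared;
    gauge = mean attenuation times index, per the definitions above."""
    index = area_cm2 / height_m**2  # cm^2/m^2
    gauge = mean_hu * index         # HU x cm^2/m^2
    return index, gauge

# Hypothetical participant: 200 cm^2 SAT at -95 HU mean attenuation, height 1.70 m
index, gauge = adipose_metrics(200.0, -95.0, 1.70)
```

Because adipose tissue attenuation is negative on CT, gauge is negative for fat compartments; it combines quantity and density into a single summary value.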
Results: We included 14,898 single-level measurements from up to 4 vertebral levels of 3797 scans of 3730 Framingham Heart Study participants (1889 [51%] male; mean ± standard deviation age, 55.6 ± 10.6 years; range, 38-81 years). The mean VAT index increased with age from 65 cm2/m2 in males and 29 cm2/m2 in females in the <45-year-old age group to 99 cm2/m2 in males and 60 cm2/m2 in females in the >75-year-old age group. The increase of SAT with age was less pronounced, so the VAT/SAT ratio increased with age. A free R package and an online interactive visual web interface provide access to the reference values.
Conclusions: This study establishes age-, sex-, and vertebral level-specific reference values for CT-assessed SAT at vertebral levels T5, T8, T10, and L3 and VAT at vertebral level L3.
Abstract: Immunotherapy is likely the most remarkable advancement in lung cancer treatment during the past decade. Although immunotherapy provides substantial benefits, its therapeutic responses differ from those of conventional chemotherapy and targeted therapy, and some patients present unique immunotherapy response patterns that cannot be judged under the current measurement standards. Therefore, response monitoring of immunotherapy can be challenging, as in the differentiation between real response and pseudoresponse. This review outlines the various tumor response patterns to immunotherapy and discusses methods for quantifying computed tomography (CT) and 18F-fluorodeoxyglucose positron emission tomography (PET) findings in the field of lung cancer. Emerging technologies in magnetic resonance imaging (MRI) and non-FDG PET tracers are also explored. In assessing immunotherapy responses, imaging plays an essential role in capturing both anatomical radiological responses (CT/MRI) and molecular changes (PET imaging). Multiple aspects must be considered when assessing treatment responses using CT and PET. Finally, we introduce multimodal approaches that integrate imaging and nonimaging data, and we discuss future directions for the assessment and prediction of lung cancer responses to immunotherapy.