Pub Date: 2026-03-01 | Epub Date: 2026-03-10 | DOI: 10.1117/1.JMI.13.2.027501
Yurim Lee, Maxwell J Kiernan, Carol C Mitchell, Shahriar Salamat, Stephanie M Wilbrand, Robert J Dempsey, Tomy Varghese
Purpose: Characterizing carotid plaque specimens from two-dimensional (2D) "representative" histology sections is standard clinical practice. In comparison, three-dimensional (3D) histology can provide far more useful, volumetric information about carotid plaques. However, because it requires substantially more manual labor, 3D histology has seen limited use for carotid plaque characterization. Evaluating the representativeness of 2D histology and exploring clinical applications of 3D carotid plaque histology, particularly registration to and correlation with in vivo ultrasound, could be insightful.
Approach: Using 3D carotid plaque histology models, we evaluated the representativeness of 2D histology by comparing the predicted specimen composition based on 2D histology to the actual specimen composition based on 3D histology. We introduced a workflow that properly orients 3D carotid plaque histology based on transverse ultrasound and takes virtual histology slices at an angle to register histology to the longitudinal ultrasound. We correlated 3D histology composition to in vivo ultrasound parameters such as strain and grayscale features.
Results: The 2D histology successfully predicted specimen composition (to within 3%) for 11 specimens out of 34. The 2D representative slice predictions generally overestimated calcification for more calcified specimens (≳30% calcified). Using 3D histology, we registered virtual histology to in vivo longitudinal ultrasound B-mode and strain. For B-mode, the registrations had higher IoU with respect to the ultrasonographer's annotations (0.54 ± 0.05) compared with the registrations with conventional 2D histology (0.30 ± 0.08). 3D histology composition was loosely related to all strain indices and grayscale features used in the study. In one of the cases, we note that hemorrhage corresponded to opposing strains.
Conclusions: 3D histology can be helpful for carotid plaque characterization as it enables a better understanding of plaque composition and better histology to in vivo ultrasound imaging registration.
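The IoU values above quantify overlap between registered histology and the ultrasonographer's annotations. As a minimal sketch of intersection-over-union for binary masks (the masks below are illustrative, not study data):

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(a, b).sum() / union)

# Illustrative 4x4 masks (not study data)
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
ref  = np.array([[1, 1, 1, 0],
                 [1, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(iou(pred, ref))  # intersection 4 / union 6 ≈ 0.667
```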
Comparison of 2D and 3D carotid plaque analysis and longitudinal in vivo ultrasound registration using 3D histology. Journal of Medical Imaging 13(2): 027501.
Purpose: Accurate artery-vein (AV) differentiation in small-field macular optical coherence tomography angiography (OCTA) remains challenging due to a lack of standardized guidelines. We propose and validate criteria for 3 × 3 mm² (10 deg × 10 deg on Spectralis; ∼2.9 × 2.9 mm²) macular scans.
Approach: Small field-of-view (FOV) OCTA scans were analyzed using established AV criteria for large-field (12 × 12 mm²) OCTA, as applied by two masked readers and validated against color fundus photographs (CFPs) and near-infrared reflectance (NIR) images. Accuracy and reliability (Cohen's κ) were assessed. Pixel-level AV masks were annotated with a standardized threshold. Vessel diameters and intensities were compared within our dataset and in the publicly available OCTA-500 dataset to assess whether intrinsic vessel features support AV differentiation.
Results: A total of 465 vessels from 20 healthy eyes were evaluated across 3 pseudo-branching orders using the large-field OCTA criteria. Annotators achieved high accuracy (95.1%, 92.3%) and strong intra/inter-rater reliability (κ = 0.84), with similarly high AV classification accuracy within pseudo-third-order vessels (97.15%). No significant AV diameter differences were observed in either dataset (p = 0.261 and 0.442). The mean intensity was similar in our dataset (p = 0.277; |Δ| = 3.28, 1.45% relative difference) but higher for veins in OCTA-500 (p < 0.0001; |Δ| = 3.42, 1.63% relative difference).
Conclusions: Accurate and reproducible AV labeling is feasible in 3 × 3 mm² scans, with strong inter- and intra-rater agreement. Vessel diameter and intensity add limited value. NIR-based alignment of OCTA with CFP provides reliable ground truth, supporting consistent manual labeling and OCTA segmentation.
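The inter-rater reliability above is reported as Cohen's κ, which corrects observed agreement for the agreement expected by chance. A minimal sketch with illustrative artery/vein labels (not study data):

```python
import numpy as np

def cohens_kappa(r1, r2) -> float:
    """Cohen's kappa for two raters' categorical labels."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    p_o = float(np.mean(r1 == r2))  # observed agreement
    # chance agreement: product of each rater's marginal rate per category
    p_e = sum(float(np.mean(r1 == c)) * float(np.mean(r2 == c)) for c in cats)
    return (p_o - p_e) / (1.0 - p_e)

# Illustrative artery ("A") / vein ("V") labels from two readers
reader1 = ["A", "A", "V", "V", "A", "V", "A", "V"]
reader2 = ["A", "A", "V", "A", "A", "V", "A", "V"]
print(cohens_kappa(reader1, reader2))  # (0.875 - 0.5) / 0.5 = 0.75
```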
Accuracy and reliability of artery-vein differentiation in small-field macular OCT angiography. Haneen Alfauri, Tugce Ilayda Turer, Cyriac Manjaly, Aditya Santoki, Senyue Hao, Marin Woronets, Chao Zhou, Rithwick Rajagopal. Journal of Medical Imaging 13(2): 025501. Pub Date: 2026-03-01 | DOI: 10.1117/1.JMI.13.2.025501
Pub Date: 2026-03-01 | Epub Date: 2026-03-19 | DOI: 10.1117/1.JMI.13.2.024002
Yannuo Wen, Kathleen M Curran, Xinzhu Wang, Nuala A Healy, John J Healy
Purpose: Breast ultrasound is widely used for cancer screening, but data scarcity and annotation challenges hinder deep learning adoption. Synthetic image generation offers a promising solution to enhance training datasets while preserving patient privacy. However, problems such as inadequate quality of synthesized images and the need for large amounts of data to train the synthesis models remain significant.
Approach: We propose a three-stage latent diffusion model (LDM) workflow, enhanced by Vision Transformers and fine-tuned with low-rank adaptation, that synthesizes realistic malignant and benign breast ultrasound images directly from healthy samples while simultaneously generating accurate segmentation masks. Dividing the task into stages significantly reduces the complexity faced by any single synthesis model. Applied to the BUSI dataset (133 healthy, 487 benign, and 210 malignant images), the method generates synthetic cases of each tumor type.
Results: A ResNet101 classifier could not reliably distinguish synthetic from real images (AUC = 0.563), indicating high visual plausibility. Quantitative metrics confirmed strong fidelity: Fréchet inception distance = 15.2 and inception score = 1.79, indicating low distributional divergence in feature space and high similarity to real data. When used for training a U-Net segmentation model, the augmented dataset improved the F1-score from 0.870 to 0.896, demonstrating substantial gains in diagnostic accuracy.
Conclusions: These results show that the proposed three-stage LDM can generate high-quality, anatomically coherent breast cancer images from healthy controls, effectively alleviating data scarcity and enabling more robust training of medical AI systems without compromising clinical realism.
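The segmentation gain above is reported as an F1-score, which for binary masks coincides with the Dice coefficient. A minimal sketch with illustrative masks (not study data):

```python
import numpy as np

def f1_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Pixelwise F1 (Dice) score for binary segmentation masks."""
    p, t = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(p, t).sum()          # true-positive pixels
    denom = p.sum() + t.sum()
    return float(2.0 * tp / denom) if denom else 1.0

# Illustrative 1D masks (not study data)
pred  = np.array([1, 1, 1, 0])
truth = np.array([1, 1, 0, 0])
print(f1_score(pred, truth))  # 2*2 / (3+2) = 0.8
```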
Synthesizing breast cancer ultrasound images from healthy samples using latent diffusion models. Journal of Medical Imaging 13(2): 024002.
Pub Date: 2026-03-01 | Epub Date: 2026-03-18 | DOI: 10.1117/1.JMI.13.2.027001
Jakob Schäfer, Charlotte Herzog, Tina Gabriel, Julian Kober, Toennis Trittler, Edgar Dorausch, Omid Chaghaneh, Thomas Karlas, Cornelius Kühnöl, Antje Naas, Gerhard Fettweis, Franz Brinkmann, Nicole Kampfrath, Jochen Hampe, Carolin Schneider, Moritz Herzog
Purpose: Assessment of liver steatosis is primarily performed through visual evaluation during ultrasound examinations. A more objective approach relies on quantifying ultrasound attenuation, typically using devices such as the FibroScan® or elastography integrated into high-end ultrasound systems, both of which offer limited accessibility. By contrast, handheld ultrasound devices (HHUDs) are more affordable and widely available. Using raw ultrasound data to gain deeper insight into liver tissue characteristics could turn HHUDs into valuable diagnostic tools. We hypothesized that the frequency-specific attenuation of raw ultrasound data acquired with handheld devices correlates with the controlled attenuation parameter (CAP) obtained through vibration-controlled transient elastography via FibroScan.
Approach: In an exploratory, single-center study, raw data from 395 participants scheduled for CAP measurement were collected using HHUDs. Of these, 304 participants were included in the final analysis; 91 were excluded due to incomplete data. Using the raw data from the HHUDs, a method based on short-time fast Fourier transform was applied to calculate the frequency-specific attenuation. The results were then correlated with the CAP values.
Results: Overall, the attenuation of the radiofrequency data showed a strong linear correlation with CAP values (r = 0.672, p < 0.001), although the strength of correlation varied significantly across frequencies (r_min = 0.443 at 0.75 MHz, r_max = 0.721 at 3.75 MHz); the highest correlation equaled results from studies with high-end ultrasound devices.
Conclusion: HHUDs capable of acquiring raw data may serve as objective and accessible screening tools for liver steatosis, potentially improving treatment monitoring.
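The approach above estimates frequency-specific attenuation from RF data via a short-time Fourier transform. As a heavily simplified sketch of the idea (single synthetic A-line, hypothetical sampling rate, and a plain log-spectral slope fit over depth; the study's actual processing is more involved):

```python
import numpy as np

def attenuation_db_per_cm(rf, fs, f0, win=128, hop=64, c=1540.0):
    """Estimate attenuation at frequency f0 by fitting the decay of the
    log-magnitude short-time spectrum against depth (simplified sketch)."""
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))       # FFT bin closest to f0
    depths_cm, log_mag = [], []
    for i in range((len(rf) - win) // hop + 1):
        seg = rf[i * hop : i * hop + win] * np.hanning(win)
        mag = np.abs(np.fft.rfft(seg))[k]
        t = (i * hop + win / 2) / fs             # time at window center
        depths_cm.append(100.0 * c * t / 2.0)    # two-way travel -> depth in cm
        log_mag.append(20.0 * np.log10(mag + 1e-12))
    slope, _ = np.polyfit(depths_cm, log_mag, 1)
    return -slope                                # dB/cm (positive = attenuating)

# Synthetic RF line: 3.75 MHz tone decaying at ~1 dB/cm (illustrative)
fs, f0 = 20e6, 3.75e6
t = np.arange(4096) / fs
depth_cm = 100.0 * 1540.0 * t / 2.0
rf = np.sin(2 * np.pi * f0 * t) * 10 ** (-depth_cm / 20.0)
print(attenuation_db_per_cm(rf, fs, f0))  # close to 1.0 dB/cm
```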
Estimation of controlled attenuation parameter-based liver steatosis via raw ultrasound data from handheld devices. Journal of Medical Imaging 13(2): 027001.
Pub Date: 2026-03-01 | Epub Date: 2026-03-03 | DOI: 10.1117/1.JMI.13.2.024501
Naveen Paluru, Mehak Arora, Phaneendra K Yalavarthy
Purpose: To introduce a filter design element called rolling convolution filters for developing lightweight convolutional neural networks (CNNs) in medical image analysis, aiming to reduce model complexity and memory footprint without compromising performance.
Approach: Rolling convolution filters were generated by performing a channel-wise rolling operation on a single base filter, creating unique filters while restricting the learnable parameters. The method was applied to various two- and three-dimensional medical image analysis tasks, including reconstruction, segmentation, and classification across MRI, CT, and OCT modalities. The performance was compared with that of standard CNNs and other lightweight architectures.
Results: The proposed rolling convolution filters substantially reduced the number of parameters and model size compared with standard CNNs, with a negligible increase in performance error. For quantitative susceptibility mapping, the rolling filter approach achieved results comparable to those of state-of-the-art methods with 6× fewer parameters. In COVID-19 anomaly segmentation, rolling filters performed on par with existing lightweight models while having ∼68× fewer parameters. For OCT classification, rolling filters maintained accuracy while significantly reducing the model size (49×).
Conclusions: Rolling convolution filters offer an effective approach for designing lightweight CNNs for medical image analysis tasks, providing substantial reductions in model complexity and memory requirements while maintaining a performance comparable to that of larger models. This method can be easily incorporated into existing architectures and shows promise for deploying efficient deep learning models in resource-constrained medical imaging settings.
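The abstract describes generating a filter bank by channel-wise rolls of a single base filter. A minimal sketch of that idea (shapes and counts are illustrative, not the paper's architectures):

```python
import numpy as np

def rolling_filters(base: np.ndarray, n_filters: int) -> np.ndarray:
    """Build n_filters convolution filters by channel-wise rolls of one
    base filter of shape (C, kH, kW); only the base filter is learnable."""
    return np.stack([np.roll(base, shift=s, axis=0) for s in range(n_filters)])

# One learnable base filter with C=3 input channels and a 3x3 kernel
base = np.arange(3 * 3 * 3, dtype=np.float32).reshape(3, 3, 3)
bank = rolling_filters(base, 3)
print(bank.shape)  # (3, 3, 3, 3): three filters share one set of parameters

# Each filter is a distinct channel permutation of the same weights:
# filter 1, channel 0 equals the base filter's last channel.
assert np.array_equal(bank[1][0], base[-1])
```

Because every filter in the bank is a view-like permutation of one tensor, the learnable parameter count stays at that of a single filter regardless of how many filters the bank provides.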
Rolling convolution filters for lightweight neural networks in medical image analysis. Journal of Medical Imaging 13(2): 024501.
Pub Date: 2026-03-01 | Epub Date: 2026-03-04 | DOI: 10.1117/1.JMI.13.2.024001
Samuel W Remedios, Shuwen Wei, Shuo Han, Jinwei Zhang, Aaron Carass, Kurt G Schilling, Dzung L Pham, Jerry L Prince, Blake E Dewey
Purpose: In clinical imaging, magnetic resonance (MR) image volumes are often acquired as stacks of 2D slices with decreased scan times, improved signal-to-noise ratio, and image contrasts unique to 2D MR pulse sequences. Although this is sufficient for clinical evaluation, automated algorithms designed for 3D analysis perform poorly on multislice 2D MR volumes, especially those with thick slices and gaps between slices. Superresolution (SR) methods aim to address this problem, but previous methods do not address all of the following: slice profile shape estimation, slice gap, domain shift, and noninteger or arbitrary upsampling factors.
Approach: We propose ECLARE (Efficient Cross-planar Learning for Anisotropic Resolution Enhancement), a self-SR method that addresses each of these factors. ECLARE uses a slice profile estimated from the multislice 2D MR volume, trains a network to learn the mapping from low-resolution to high-resolution in-plane patches from the same volume, performs SR with antialiasing, and respects the image FOV during resampling. We compared ECLARE with cubic B-spline interpolation, SMORE, and other contemporary SR methods. We used realistic and representative simulations on human head MR volumes so that quantitative performance against ground truth can be computed. Specifically, T1-w datasets from healthy subjects and T2-w FLAIR datasets from people with MS were used for evaluations. We used the peak signal-to-noise ratio and structural similarity index measure as signal recovery metrics. We additionally used two independent brain parcellation algorithms, SLANT and SynthSeg, to compute the consistency Dice similarity coefficient and the R² coefficient of determination, respectively, as comparison metrics.
Results: For images with up to 5 mm of slice thickness and up to 1.5 mm of gap, ECLARE achieves greater mean PSNR and SSIM compared with other methods. In representative regions of interest, such as the ventricles, caudate, cerebral white matter, and cerebellar white matter, ECLARE performs comparably or better than other approaches. These trends are similar for both investigated datasets.
Conclusions: The use of slice profile estimation, FOV-aware resampling, and self-SR allowed ECLARE to robustly superresolve anisotropic images without the need for external training data. Future work will investigate the utility of ECLARE on other organs, species, modalities, and resolutions. Our code is open-source and available at https://www.github.com/sremedios/eclare.
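PSNR, one of the signal recovery metrics above, is a direct function of the mean squared error between reference and test volumes. A minimal sketch (synthetic random volumes, not study data):

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between reference and test volumes."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(data_range**2 / mse))

rng = np.random.default_rng(0)
ref = rng.random((8, 8, 8))                                  # synthetic "volume"
noisy = np.clip(ref + rng.normal(0, 0.01, ref.shape), 0, 1)  # sigma = 0.01 noise
print(psnr(ref, noisy))  # roughly 40 dB for this noise level
```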
ECLARE: efficient cross-planar learning for anisotropic resolution enhancement. Journal of Medical Imaging 13(2): 024001.
Pub Date: 2026-02-01. Epub Date: 2026-02-15. DOI: 10.1117/1.JMI.13.S1.S11202
Chloe Cho, Yihao Liu, Bohan Jiang, Andrew J McNeil, Benoit M Dawant, Bennett A Landman, Eric R Tkaczyk
Purpose: Clinical photographs play an integral role across medical fields. Since the mid-20th century, deidentification has consisted of black bars covering specific facial features, typically the eyes alone. Although increasingly questioned, this practice persists in clinical and academic settings.
Approach: A barrier to developing standardized deidentification guidelines is the unknown risk that artificial intelligence (AI) can reconstruct faces from partially obscured photos. We evaluate the ability of generative AI to reconstruct 10,000 facial images in the Synthetic Faces High Quality dataset across 14 regional masking strategies.
Results: Covering the eyes or any other single facial feature resulted in highly identifiable reconstructions, demonstrated by low face mesh distortion (0.14 to 0.18 relative to whole-face masking; absolute total face mesh distortion 8.34 to 10.19) and high structural similarity index to the original face (1.24 to 1.25 relative to whole-face masking; absolute SSIM 0.91 to 0.92). An open-source face verification model using Dlib was able to match 97.98% to 99.93% of these reconstructed images with the original image prior to single feature masking. Removing all major facial features (eyebrows, eyes, nose, and mouth) resulted in a threefold reduction in face verification rates compared with eyes alone, from 98.87% (95% CI [98.63%, 99.07%]) to 33.93% (95% CI [32.95%, 34.94%]).
Conclusions: We provide quantitative metrics of the reidentification risk that modern generative AI technology poses for partially obscured facial images.
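The verification rates above are reported with 95% confidence intervals over 10,000 images. The paper does not state which interval method was used; as one common choice, a Wilson score interval gives endpoints close to those reported. The function below is an illustrative sketch, not the authors' analysis code.

```python
import math

def wilson_interval(successes, n, z=1.96):
    # 95% Wilson score interval for a binomial proportion (z = 1.96).
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1.0 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Verification rate of 98.87% over 10,000 reconstructions:
lo, hi = wilson_interval(9887, 10000)
print(f"[{lo:.2%}, {hi:.2%}]")
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] and remains well behaved for proportions near 0 or 1, which matters at verification rates above 98%.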
{"title":"How much of a face is a face: exploring reidentification potential with generative AI.","authors":"Chloe Cho, Yihao Liu, Bohan Jiang, Andrew J McNeil, Benoit M Dawant, Bennett A Landman, Eric R Tkaczyk","doi":"10.1117/1.JMI.13.S1.S11202","DOIUrl":"https://doi.org/10.1117/1.JMI.13.S1.S11202","url":null,"abstract":"<p><strong>Purpose: </strong>Clinical photographs play an integral role across medical fields. Since the mid-20th century, deidentification has consisted of black bars covering specific facial features, typically the eyes alone. Although increasingly questioned, this practice persists in clinical and academic settings.</p><p><strong>Approach: </strong>A barrier to standardized deidentification guideline development is the unknown risk of artificial intelligence (AI) to reconstruct faces from partially obscured photos. We evaluate the ability of generative AI to reconstruct 10,000 facial images in the Synthetic Faces High Quality dataset across 14 regional masking strategies.</p><p><strong>Results: </strong>Covering the eyes or any other single facial feature resulted in highly identifiable reconstructions, demonstrated by low face mesh distortion (0.14 to 0.18 relative to whole-face masking; absolute total face mesh distortion 8.34 to 10.19) and high structural similarity index to the original face (1.24 to 1.25 relative to whole-face masking; absolute SSIM 0.91 to 0.92). An open-source face verification model using Dlib was able to match 97.98% to 99.93% of these reconstructed images with the original image prior to single feature masking. 
Removing all major facial features (eyebrows, eyes, nose, and mouth) resulted in a threefold reduction in face verification rates compared with eyes alone, from 98.87% (95% CI [98.63%, 99.07%]) to 33.93% (95% CI [32.95%, 34.94%]).</p><p><strong>Conclusions: </strong>We provide quantitative metrics of the reidentification risk that modern generative AI technology poses for partially obscured facial images.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 Suppl 1","pages":"S11202"},"PeriodicalIF":1.7,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12906867/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146208138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-01. Epub Date: 2026-02-14. DOI: 10.1117/1.JMI.13.S1.S11203
Dayvison Gomes de Oliveira, Franklin Anthony Ramos Coêlho, Thaís Gaudencio do Rêgo, Yuri de Almeida Malheiros Barbosa, Telmo de Menezes Silva Filho, Bruno Barufaldi
Purpose: We investigate the use of latent diffusion models (LDMs) for synthesizing and enhancing photon-counting chest computed tomography (CT) images. We evaluate the models' capabilities in two main tasks: image generation for dataset augmentation and super-resolution (SR) for improving image quality, aiming to support diagnostic accuracy and accessibility to high-resolution data.
Approach: The proposed framework combines a variational autoencoder-based latent encoder (AutoencoderKL) and a denoising diffusion model, trained under multiple conditioning tests. Eight experiments were conducted across generative and SR tasks, exploring the effects of different conditioning strategies, including segmentation masks and class labels (e.g., lung versus soft tissue), as well as varying loss functions.
Results: Unconditioned LDMs produced hallucinated anatomy, lacking clinical interpretability. Conditioning with segmentation masks and anatomical labels considerably improved structural fidelity. The best results for image generation achieved a multiscale structural similarity index measure (MS-SSIM) = 0.7135 and peak signal-to-noise ratio (PSNR) = 24.53, whereas SR tasks reached MS-SSIM = 0.85 and PSNR = 27.31, comparable to recent diffusion-based benchmarks.
Conclusions: LDMs show strong potential for both augmentation and SR of photon-counting chest CT images. When guided by segmentation masks and class labels, these models preserve anatomical structure and reduce hallucination risks. The results support their use in clinically relevant scenarios, providing controllable and high-fidelity image synthesis.
{"title":"Importance of conditioning in latent diffusion models for image generation and super-resolution.","authors":"Dayvison Gomes de Oliveira, Franklin Anthony Ramos Coêlho, Thaís Gaudencio do Rêgo, Yuri de Almeida Malheiros Barbosa, Telmo de Menezes Silva Filho, Bruno Barufaldi","doi":"10.1117/1.JMI.13.S1.S11203","DOIUrl":"https://doi.org/10.1117/1.JMI.13.S1.S11203","url":null,"abstract":"<p><strong>Purpose: </strong>We investigate the use of latent diffusion models (LDMs) for synthesizing and enhancing photon-counting chest computed tomography (CT) images. We evaluate the models' capabilities in two main tasks: image generation for dataset augmentation and super-resolution (SR) for improving image quality, aiming to support diagnostic accuracy and accessibility to high-resolution data.</p><p><strong>Approach: </strong>The proposed framework combines a variational autoencoder-based latent encoder (AutoencoderKL) and a denoising diffusion model, trained under multiple conditioning tests. Eight experiments were conducted across generative and SR tasks, exploring the effects of different conditioning strategies, including segmentation masks and class labels (e.g., lung versus soft tissue), as well as varying loss functions.</p><p><strong>Results: </strong>Unconditioned LDMs produced hallucinated anatomy, lacking clinical interpretability. Conditioning with segmentation masks and anatomical labels considerably improved structural fidelity. The best results for image generation achieved a multiscale structural similarity index measure (MS-SSIM) = 0.7135 and peak signal-to-noise ratio (PSNR) = 24.53, whereas SR tasks reached MS-SSIM = 0.85 and PSNR = 27.31, comparable to recent diffusion-based benchmarks.</p><p><strong>Conclusions: </strong>LDMs show strong potential for both augmentation and SR of photon-counting chest CT images. When guided by segmentation masks and class labels, these models preserve anatomical structure and reduce hallucination risks. 
The results support their use in clinically relevant scenarios, providing controllable and high-fidelity image synthesis.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 Suppl 1","pages":"S11203"},"PeriodicalIF":1.7,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12904813/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146203298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-01. Epub Date: 2026-02-15. DOI: 10.1117/1.JMI.13.S1.S11204
Xin Wang, Gengxin Shi, Peiqin Teng, Aswath Sivakumar, Tianyi Ye, Adam D Sylvester, J Webster Stayman, Wojciech B Zbijewski
Purpose: We aim to develop a conditional generative diffusion model capable of producing three-dimensional (3D) trabecular bone samples that can be tuned to achieve specific structural characteristics prescribed in terms of three geometric metrics of trabecular microarchitecture: bone volume fraction (BV/TV), trabecular thickness (Tb.Th), and trabecular spacing (Tb.Sp).
Approach: The generative model is based on 3D latent diffusion. The latent representation of trabecular patches is obtained by a dedicated variational autoencoder (VAE). To control the microstructure characteristics of the synthetic samples, the model is conditioned on BV/TV, Tb.Th, and Tb.Sp. In addition, a shifting-slab inference method is employed to generate extended volumes with locally tunable microstructure in a computationally efficient manner. The training data involved 3551 volumes of interest (VOIs) of 128 × 128 × 128 voxels extracted from micro-CT volumes (50 μm voxel size) of 20 femoral bone specimens, paired with trabecular metrics computed within each VOI; the split between training and validation data was 9:1. For testing, 2000 synthetic bone samples were generated using single-slab inference over a wide range of condition (target) microstructure metrics. Results were evaluated in terms of (i) consistency across multiple realizations of reverse diffusion for a fixed condition, measured by the coefficient of variation (CV) of trabecular measurements; (ii) agreement between the BV/TV, Tb.Th, and Tb.Sp values provided as a condition and those measured in the corresponding synthetic samples, assessed using the Pearson correlation coefficient (PCC); and (iii) overlap between the distributions of trabecular parameters of real and synthetic bone patches; this coverage analysis included both the conditioning parameters (BV/TV, Tb.Th, and Tb.Sp) and the unconditioned metrics of degree of anisotropy, ellipsoid factor, and connectivity. Further, extended volumes (128 × 128 × 256 voxels) were generated using shifting-slab inference with spatially invariant and spatially varying conditioning and evaluated in terms of local agreement between the prescribed and achieved trabecular parameters.
Results: Visually, the synthesized cancellous bone patches appear highly similar to the training micro-CT data. The conditioned parameters of the generated volumes agree well with their target values (PCC of 0.99, 0.97, and 0.95 for BV/TV, Tb.Th, and Tb.Sp, respectively). There is a trend toward generating trabeculae that are slightly thicker than prescribed, but this bias is typically on the order of one voxel (50 μm).
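Two of the evaluation quantities used above, BV/TV and the coefficient of variation across diffusion realizations, reduce to one-liners over a segmented volume. The sketch below is illustrative only (the synthetic patch and the 20% bone fraction are assumptions, not data from the paper).

```python
import numpy as np

def bone_volume_fraction(binary_vol):
    # BV/TV: bone voxels over total voxels in a segmented (binary) volume.
    v = np.asarray(binary_vol, dtype=bool)
    return v.sum() / v.size

def coefficient_of_variation(measurements):
    # CV = sample std / mean; used above to gauge how consistent a trabecular
    # metric is across repeated reverse-diffusion realizations.
    m = np.asarray(measurements, dtype=float)
    return m.std(ddof=1) / m.mean()

# Illustrative synthetic 128^3 patch with ~20% of voxels marked as bone:
rng = np.random.default_rng(0)
patch = rng.random((128, 128, 128)) < 0.20
bvtv = bone_volume_fraction(patch)

# Spread of BV/TV across, e.g., three hypothetical realizations:
cv = coefficient_of_variation([0.19, 0.20, 0.21])
```

The PCC agreement reported in the Results can be computed the same way with `np.corrcoef(targets, measured)[0, 1]` over the conditioning targets and the metrics measured in the generated samples.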
{"title":"Conditional generative diffusion model for 3D trabecular bone synthesis with tunable microstructure.","authors":"Xin Wang, Gengxin Shi, Peiqin Teng, Aswath Sivakumar, Tianyi Ye, Adam D Sylvester, J Webster Stayman, Wojciech B Zbijewski","doi":"10.1117/1.JMI.13.S1.S11204","DOIUrl":"https://doi.org/10.1117/1.JMI.13.S1.S11204","url":null,"abstract":"<p><strong>Purpose: </strong>We aim to develop a conditional generative diffusion model capable of producing three-dimensional (3D) trabecular bone samples that can be tuned to achieve specific structural characteristics prescribed in terms of three geometric metrics of trabecular microarchitecture: bone volume fraction (BV/TV), trabecular thickness (Tb.Th), and spacing (Tb.Sp).</p><p><strong>Approach: </strong>The generative model is based on 3D latent diffusion. The latent representation of trabecular patches is obtained by a dedicated variational autoencoder (VAE). To control the microstructure characteristics of the synthetic samples, the model is conditioned on BV/TV, Tb.Th, and Tb.Sp. In addition, a shifting slab inference method is employed to generate extended volumes with locally tunable microstructure in a computationally efficient manner. The training data involved 3551 <math><mrow><mn>128</mn> <mo>×</mo> <mn>128</mn> <mo>×</mo> <mn>128</mn></mrow> </math> volumes of interest (VOIs) extracted from micro-CT volumes ( <math><mrow><mn>50</mn> <mtext> </mtext> <mi>μ</mi> <mi>m</mi></mrow> </math> voxel size) of 20 femoral bone specimens, paired with trabecular metrics computed within each VOI; the split for training and validation data was 9:1. For testing, 2000 synthetic bone samples were generated using single slab inference over a wide range of condition (target) microstructure metrics. 
Results were evaluated in terms of (i) consistency across multiple realizations of reverse diffusion for a fixed condition, measured by the coefficient of variation (CV) of trabecular measurements; (ii) agreement between BV/TV, Tb.Th, and Tb.Sp values provided as a condition and those measured in the corresponding synthetic samples, assessed using Pearson correlation coefficient (PCC); and (iii) overlap between the distributions of trabecular parameters of real and synthetic bone patches; this coverage analysis included both the conditioning parameters of BV/TV, Tb.Th, and Tb.Sp, as well as unconditioned metrics of degree of anisotropy, ellipsoid factor, and connectivity. Further, extended volumes ( <math><mrow><mn>128</mn> <mo>×</mo> <mn>128</mn> <mo>×</mo> <mn>256</mn> <mrow><mtext> </mtext></mrow> <mrow><mtext>voxels</mtext></mrow> </mrow> </math> ) were generated using shifting-slab inference with spatially invariant and spatially varying conditioning and evaluated in terms of local agreement between the prescribed and achieved trabecular parameters.</p><p><strong>Results: </strong>Visually, the synthesized cancellous bone patches appear highly similar to the training micro-CT data. The conditioned parameters of the generated volumes agree well with their target values (PCC of 0.99, 0.97, and 0.95 for BV/TV, Tb.Th, and Tb.Sp, respectively). 
There is a trend toward generating trabeculae that are slightly thicker than prescribed, but this bias is typically on the order of one voxel ( <math><mrow><mn>50</mn> <mtext> </mtext> <mi>μ</mi> <mi>m</mi></mr","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 Suppl 1","pages":"S11204"},"PeriodicalIF":1.7,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12907505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146214560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-01. Epub Date: 2026-02-23. DOI: 10.1117/1.JMI.13.1.013502
Nicholas P Gruszauskas, Joseph Steiner, Krista Dillingham
Purpose: Advancements in radionuclide imaging and therapy techniques have created a groundswell of enthusiasm in the recently designated field of theranostics. This has increased the need for facilities that are able to participate in clinical trials for investigational theranostic agents. Theranostics clinical trials present several unique challenges that will tax the resources and staff of most medical centers. Our purpose is to describe the unique logistical and administrative challenges associated with theranostics clinical trials, propose strategies for addressing them, and make recommendations regarding trial conduct to the community at large.
Approach: The authors' experiences reviewing, implementing, and managing theranostics trials at their institution were used to identify common activities and challenges.
Results: Several key categories of requirements and challenges were identified. Multidisciplinary teams consisting of nuclear medicine, oncology, nursing, clinical research, and administrative staff are necessary to adequately perform all trial-related activities. Strategies are proposed to address these challenges and activities at the institutional and industry levels.
Conclusion: The unique challenges inherent to theranostics clinical trials require a focused investment of time, effort, and resources from all stakeholders. Institutions that wish to participate in these trials must develop the infrastructure necessary to fully support the breadth of activities they require. Implementation of the strategies and recommendations presented here will ensure the successful conduct of these trials and will improve efficiency across the community.
{"title":"Challenges to the management of oncologic theranostics clinical trials: recommendations for the conduct of theranostics trials at investigational sites.","authors":"Nicholas P Gruszauskas, Joseph Steiner, Krista Dillingham","doi":"10.1117/1.JMI.13.1.013502","DOIUrl":"10.1117/1.JMI.13.1.013502","url":null,"abstract":"<p><strong>Purpose: </strong>Advancements in radionuclide imaging and therapy techniques have created a groundswell of enthusiasm in the recently designated field of theranostics. This has increased the need for facilities that are able to participate in clinical trials for investigational theranostic agents. Theranostics clinical trials present several unique challenges that will tax the resources and staff of most medical centers. Our purpose is to describe the unique logistical and administrative challenges associated with theranostics clinical trials, propose strategies for addressing them, and make recommendations regarding trial conduct to the community at large.</p><p><strong>Approach: </strong>The authors' experiences reviewing, implementing, and managing theranostics trials at their institution were used to identify common activities and challenges.</p><p><strong>Results: </strong>Several key categories of requirements and challenges were identified. Multidisciplinary teams consisting of nuclear medicine, oncology, nursing, clinical research, and administrative staff are necessary to adequately perform all trial-related activities. Strategies are proposed to address these challenges and activities at the institutional and industry levels.</p><p><strong>Conclusion: </strong>The unique challenges inherent to theranostics clinical trials require a focused investment of time, effort, and resources from all stakeholders. Institutions that wish to participate in these trials must develop the infrastructure necessary to fully support the breadth of activities they require. 
Implementation of the strategies and recommendations presented here will ensure the successful conduct of these trials and will improve efficiency across the community.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"13 1","pages":"013502"},"PeriodicalIF":1.7,"publicationDate":"2026-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12928531/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147285697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}