Pub Date: 2024-12-01 | Epub Date: 2024-10-15 | DOI: 10.1117/1.JMI.11.S1.S12808
Hadley DeBrosse, Giavanna Jadick, Ling Jian Meng, Patrick La Rivière
Purpose: We provide a comparison of X-ray fluorescence emission tomography (XFET) and computed tomography (CT) for detecting low concentrations of gold nanoparticles (GNPs) in soft tissue and characterize the conditions under which XFET outperforms energy-integrating CT (EICT) and photon-counting CT (PCCT).
Approach: We compared dose-matched Monte Carlo XFET simulations and analytical fan-beam EICT and PCCT simulations. Each modality was used to image a numerical mouse phantom and contrast-depth phantom containing GNPs ranging from 0.05% to 4% by weight in soft tissue. Contrast-to-noise ratios (CNRs) of gold regions were compared among the three modalities, and XFET's detection limit was quantified based on the Rose criterion. A partial field-of-view (FOV) image was acquired for the phantom region containing 0.05% GNPs.
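The comparison metric is simple to reproduce. Below is a minimal sketch of how a CNR value and a Rose-criterion detectability check can be computed from a reconstructed image; the ROI/background masks and the threshold of 5 are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    """Contrast-to-noise ratio of a gold region against local background.

    CNR = |mean(ROI) - mean(background)| / std(background).
    """
    roi = image[roi_mask]
    bg = image[bg_mask]
    return abs(roi.mean() - bg.mean()) / bg.std()

def detectable(image, roi_mask, bg_mask, rose_threshold=5.0):
    """Rose criterion: a region is considered reliably detectable
    when its CNR reaches roughly 5."""
    return cnr(image, roi_mask, bg_mask) >= rose_threshold
```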
Results: For the mouse phantom, XFET produced superior CNR values (CNRs = 24.5, 21.6, and 3.4) compared with CT images obtained with both energy-integrating (CNR = 4.4, 4.6, and 1.5) and photon-counting (CNR = 6.5, 7.7, and 2.0) detection systems. More generally, XFET outperformed CT for superficial imaging depths (< 28.75 mm) for gold concentrations at and above 0.5%. XFET's surface detection limit was quantified as 0.44% for an average phantom dose of 16 mGy, compatible with in vivo imaging. XFET's ability to image partial FOVs was demonstrated, and 0.05% gold was easily detected with an estimated dose of ∼81.6 cGy to a localized region of interest.
Conclusions: We provide a proof of concept of XFET's benefit for imaging low concentrations of gold at superficial depths and demonstrate the feasibility of XFET for in vivo metal mapping in preclinical imaging tasks.
{"title":"Contrast-to-noise ratio comparison between X-ray fluorescence emission tomography and computed tomography.","authors":"Hadley DeBrosse, Giavanna Jadick, Ling Jian Meng, Patrick La Rivière","doi":"10.1117/1.JMI.11.S1.S12808","DOIUrl":"https://doi.org/10.1117/1.JMI.11.S1.S12808","url":null,"abstract":"<p><strong>Purpose: </strong>We provide a comparison of X-ray fluorescence emission tomography (XFET) and computed tomography (CT) for detecting low concentrations of gold nanoparticles (GNPs) in soft tissue and characterize the conditions under which XFET outperforms energy-integrating CT (EICT) and photon-counting CT (PCCT).</p><p><strong>Approach: </strong>We compared dose-matched Monte Carlo XFET simulations and analytical fan-beam EICT and PCCT simulations. Each modality was used to image a numerical mouse phantom and contrast-depth phantom containing GNPs ranging from 0.05% to 4% by weight in soft tissue. Contrast-to-noise ratios (CNRs) of gold regions were compared among the three modalities, and XFET's detection limit was quantified based on the Rose criterion. A partial field-of-view (FOV) image was acquired for the phantom region containing 0.05% GNPs.</p><p><strong>Results: </strong>For the mouse phantom, XFET produced superior CNR values ( <math><mrow><mi>CNRs</mi> <mo>=</mo> <mn>24.5</mn></mrow> </math> , 21.6, and 3.4) compared with CT images obtained with both energy-integrating ( <math><mrow><mi>CNR</mi> <mo>=</mo> <mn>4.4</mn></mrow> </math> , 4.6, and 1.5) and photon-counting ( <math><mrow><mi>CNR</mi> <mo>=</mo> <mn>6.5</mn></mrow> </math> , 7.7, and 2.0) detection systems. More generally, XFET outperformed CT for superficial imaging depths ( <math><mrow><mo><</mo> <mn>28.75</mn> <mtext> </mtext> <mi>mm</mi></mrow> </math> ) for gold concentrations at and above 0.5%. XFET's surface detection limit was quantified as 0.44% for an average phantom dose of 16 mGy compatible with <i>in vivo</i> imaging. XFET's ability to image partial FOVs was demonstrated, and 0.05% gold was easily detected with an estimated dose of <math><mrow><mo>∼</mo> <mn>81.6</mn> <mtext> </mtext> <mi>cGy</mi></mrow> </math> to a localized region of interest.</p><p><strong>Conclusions: </strong>We demonstrate a proof of XFET's benefit for imaging low concentrations of gold at superficial depths and the feasibility of XFET for <i>in vivo</i> metal mapping in preclinical imaging tasks.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 Suppl 1","pages":"S12808"},"PeriodicalIF":1.9,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11478016/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-12-27 | DOI: 10.1117/1.JMI.11.S1.S12810
J Carlos Rodriguez Luna, Mini Das
Purpose: Photon counting detectors offer promising advancements in computed tomography (CT) imaging by enabling the quantification and three-dimensional imaging of contrast agents and tissue types through simultaneous multi-energy projections from broad X-ray spectra. However, the accuracy of these decomposition methods hinges on precise composite spectral attenuation values that must be reconstructed from spectral micro-CT. Errors in such estimations can arise from beam hardening, object scatter, or detector sensor-related spectral distortions such as fluorescence. Even with accurate spectral correction, multi-material separation within a volume remains a challenge. Increasing the number of energy bins in material decomposition problems often incurs a significant noise penalty with minimal decomposition benefit.
Approach: We begin with an empirical spectral correction method executed in the tomographic domain that accounts for distortions in the estimated spectral attenuation of each voxel. This is followed by our proposed iterative clustering material decomposition (ICMD), in which voxel clustering reduces the number of basis materials to be resolved for each cluster. Using a larger number of energy bins for the clustering step yields excellent classification into a larger number of clusters whose centers agree with National Institute of Standards and Technology attenuation values. The decomposition step is then applied to each cluster separately, with each cluster containing fewer basis materials than the entire volume; this reduces the number of energy bins required in each per-cluster decomposition. The approach significantly increases the total number of materials that can be decomposed within the volume with high accuracy and excellent noise properties.
Results: Using a Medipix detector (1-mm-thick cadmium telluride sensor) with a 55-μm pitch, we demonstrate quantitatively accurate decomposition of several materials in a phantom study, where the sample includes mixtures of soft materials such as water and poly-methyl methacrylate along with contrast-enhancing materials. We show improved accuracy and lower noise when all five energy bins were used to classify voxels into multiple accurate fundamental clusters, followed by a decomposition step applied to each cluster using just two energy bins. We also show an example of biological sample imaging, separating three distinct tissue types in mice: muscle, fat, and bone. Our experimental results show that combining effective, practical spectral correction with high-dimensional data clustering enhances decomposition accuracy and reduces noise in micro-CT.
Conclusions: The proposed ICMD can quantitatively separate multiple materials, including mixtures, and can effectively separate multiple contrast agents.
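The two-stage structure of ICMD (cluster voxels on all energy bins, then decompose each cluster against a reduced material basis using only a few bins) can be sketched as follows. The k-means clustering and non-negative least-squares fit are stand-ins chosen for illustration; the paper's own clustering and decomposition details may differ.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.cluster import KMeans

def icmd(mu, bases_per_cluster, decomp_bins=(0, 1)):
    """Sketch of iterative clustering material decomposition.

    mu: (n_voxels, n_bins) spectrally corrected attenuation per voxel.
    bases_per_cluster: list of (n_bins, n_materials_k) basis matrices,
        one per cluster, each with fewer materials than the full volume.
    decomp_bins: the few energy bins used for the per-cluster fit.
    """
    n_clusters = len(bases_per_cluster)
    # Step 1: cluster voxels using ALL energy bins for reliable classification.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(mu)
    coeffs = [None] * len(mu)
    bins = list(decomp_bins)
    # Step 2: decompose each cluster against its own reduced basis,
    # which needs only a couple of energy bins.
    for k, basis in enumerate(bases_per_cluster):
        A = basis[bins, :]
        for i in np.where(labels == k)[0]:
            coeffs[i], _ = nnls(A, mu[i, bins])  # non-negative material fractions
    return labels, coeffs
```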
{"title":"Iterative clustering material decomposition aided by empirical spectral correction for photon counting detectors in micro-CT.","authors":"J Carlos Rodriguez Luna, Mini Das","doi":"10.1117/1.JMI.11.S1.S12810","DOIUrl":"10.1117/1.JMI.11.S1.S12810","url":null,"abstract":"<p><strong>Purpose: </strong>Photon counting detectors offer promising advancements in computed tomography (CT) imaging by enabling the quantification and three-dimensional imaging of contrast agents and tissue types through simultaneous multi-energy projections from broad X-ray spectra. However, the accuracy of these decomposition methods hinges on precise composite spectral attenuation values that one must reconstruct from spectral micro-CT. Errors in such estimations could be due to effects such as beam hardening, object scatter, or detector sensor-related spectral distortions such as fluorescence. Even if accurate spectral correction is done, multi-material separation within a volume remains a challenge. Increasing the number of energy bins in material decomposition problems often comes with a significant noise penalty but with minimal decomposition benefits.</p><p><strong>Approach: </strong>We begin with an empirical spectral correction method executed in the tomographic domain that accounts for distortions in estimated spectral attenuation for each voxel. This is followed by our proposed iterative clustering material decomposition (ICMD) where clustering of voxels is used to reduce the number of basis materials to be resolved for each cluster. Using a larger number of energy bins for the clustering step shows distinct advantages in excellent classification to a larger number of clusters with accurate cluster centers when compared with the National Institute of Standards and Technology attenuation values. The decomposition step is applied to each cluster separately where each cluster has fewer basis materials compared with the entire volume. This is shown to reduce the need for the number of energy bins required in each decomposition step for the clusters. This approach significantly increases the total number of materials that can be decomposed within the volume with high accuracy and with excellent noise properties.</p><p><strong>Results: </strong>Utilizing a (cadmium telluride 1-mm-thick sensor) Medipix detector with a <math><mrow><mn>55</mn> <mtext>-</mtext> <mi>μ</mi> <mi>m</mi></mrow> </math> pitch, we demonstrate the quantitatively accurate decomposition of several materials in a phantom study, where the sample includes mixtures of soft materials such as water and poly-methyl methacrylate along with contrast-enhancing materials. We show improved accuracy and lower noise when all five energy bins were used to yield effective classification of voxels into multiple accurate fundamental clusters which was followed by the decomposition step applied to each cluster using just two energy bins. We also show an example of biological sample imaging and separating three distinct types of tissue in mice: muscle, fat, and bone. 
Our experimental results show that the combination of effective and practical spectral correction and high-dimensional data clustering enhances decomposition accuracy and reduces noise in micro-CT.</p><p><strong>Conclusions","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 Suppl 1","pages":"S12810"},"PeriodicalIF":1.9,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11676343/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142903840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | DOI: 10.1117/1.JMI.11.6.064003
Amara Tariq, Gokul Ramasamy, Bhavik Patel, Imon Banerjee
Purpose: Self-supervised pre-training can reduce the amount of labeled training data needed by pre-learning fundamental visual characteristics of the medical imaging data. We investigate several self-supervised training strategies for chest computed tomography exams and their effects on downstream applications.
Approach: We benchmark five well-known self-supervision strategies (masked image region prediction, next slice prediction, rotation prediction, flip prediction, and denoising) on 15 M chest computed tomography (CT) slices collected from four sites of the Mayo Clinic enterprise, United States. These models were evaluated for two downstream tasks on public datasets: pulmonary embolism (PE) detection (classification) and lung nodule segmentation. Image embeddings generated by these models were also evaluated for prediction of patient age, race, and gender to study inherent biases in models' understanding of chest CT exams.
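As an illustration of one of these pretext tasks, the sketch below implements masked image region prediction in PyTorch: a random square of each CT slice is zeroed out, and an encoder-decoder is trained to reconstruct the original. The tiny architecture and mask size are assumptions for illustration, not the benchmarked models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedRegionPretext(nn.Module):
    """Masked image region prediction: hide a random square in each
    CT slice and train an encoder-decoder to reconstruct the original."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2))

    def forward(self, x, mask_size=64):
        b, _, h, w = x.shape
        masked = x.clone()
        ys = torch.randint(0, h - mask_size, (b,))
        xs = torch.randint(0, w - mask_size, (b,))
        for i in range(b):  # hide one region per slice
            masked[i, :, ys[i]:ys[i] + mask_size, xs[i]:xs[i] + mask_size] = 0
        recon = self.decoder(self.encoder(masked))
        return F.mse_loss(recon, x)  # reconstruction loss drives pre-training

# model = MaskedRegionPretext(); loss = model(batch)  # batch: (B, 1, H, W)
```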
Results: The use of pre-training weights, especially masked region prediction-based weights, improved performance and reduced the computational effort needed for downstream tasks compared with task-specific state-of-the-art (SOTA) models. Performance improvement for PE detection was observed for training dataset sizes as large as ∼380 K, with a maximum gain of 5% over SOTA. The segmentation model initialized with pre-training weights learned twice as fast as the randomly initialized model. While gender and age predictors built using self-supervised training weights showed no performance improvement over randomly initialized predictors, the race predictor experienced a 10% performance boost when using self-supervised training weights.
Conclusion: We released self-supervised models and weights under an open-source academic license. These models can then be fine-tuned with limited task-specific annotated data for a variety of downstream imaging tasks, thus accelerating research in biomedical imaging informatics.
{"title":"Self-supervised learning for chest computed tomography: training strategies and effect on downstream applications.","authors":"Amara Tariq, Gokul Ramasamy, Bhavik Patel, Imon Banerjee","doi":"10.1117/1.JMI.11.6.064003","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064003","url":null,"abstract":"<p><strong>Purpose: </strong>Self-supervised pre-training can reduce the amount of labeled training data needed by pre-learning fundamental visual characteristics of the medical imaging data. We investigate several self-supervised training strategies for chest computed tomography exams and their effects on downstream applications.</p><p><strong>Approach: </strong>We benchmark five well-known self-supervision strategies (masked image region prediction, next slice prediction, rotation prediction, flip prediction, and denoising) on 15 M chest computed tomography (CT) slices collected from four sites of the Mayo Clinic enterprise, United States. These models were evaluated for two downstream tasks on public datasets: pulmonary embolism (PE) detection (classification) and lung nodule segmentation. Image embeddings generated by these models were also evaluated for prediction of patient age, race, and gender to study inherent biases in models' understanding of chest CT exams.</p><p><strong>Results: </strong>The use of pre-training weights especially masked region prediction-based weights, improved performance, and reduced computational effort needed for downstream tasks compared with task-specific state-of-the-art (SOTA) models. Performance improvement for PE detection was observed for training dataset sizes as large as <math><mrow><mo>∼</mo> <mn>380</mn> <mtext> </mtext> <mi>K</mi></mrow> </math> with a maximum gain of 5% over SOTA. The segmentation model initialized with pre-training weights learned twice as fast as the randomly initialized model. While gender and age predictors built using self-supervised training weights showed no performance improvement over randomly initialized predictors, the race predictor experienced a 10% performance boost when using self-supervised training weights.</p><p><strong>Conclusion: </strong>We released self-supervised models and weights under an open-source academic license. These models can then be fine-tuned with limited task-specific annotated data for a variety of downstream imaging tasks, thus accelerating research in biomedical imaging informatics.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064003"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11550486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-11-05 | DOI: 10.1117/1.JMI.11.6.067501
Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Nancy R Newlin, Adam M Saunders, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman
Purpose: Cells are building blocks for human physiology; consequently, understanding the way cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions in both health and disease. Hematoxylin and eosin (H&E) is the standard stain used in histological analysis of tissues in both clinical and research settings. Although H&E is ubiquitous and reveals tissue microanatomy, the classification and mapping of cell subtypes often require the use of specialized stains. The recent CoNIC Challenge focused on artificial intelligence classification of six types of cells on colon H&E but was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), and connective subtypes (fibroblasts). We propose to use inter-modality learning to label previously un-labelable cell types on H&E.
Approach: We took advantage of the cell classification information inherent in multiplexed immunofluorescence (MxIF) histology to create cell-level annotations for 14 subclasses. Then, we performed style transfer on the MxIF to synthesize realistic virtual H&E. We assessed the efficacy of a supervised learning scheme using the virtual H&E and 14 subclass labels. We evaluated our model on virtual H&E and real H&E.
Results: On virtual H&E, we were able to classify helper T cells and epithelial progenitors with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.1 (prevalence 0.07 ± 0.02), respectively, when using ground truth centroid information. On real H&E, we needed to compute bounded metrics instead of direct metrics because our fine-grained virtual H&E predicted classes had to be matched to the closest available parent classes in the coarser labels from the real H&E dataset. For the real H&E, we could classify bounded metrics for the helper T cells and epithelial progenitors with upper bound positive predictive values of 0.43 ± 0.03 (parent class prevalence 0.21) and 0.94 ± 0.02 (parent class prevalence 0.49) when using ground truth centroid information.
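The reported metrics follow directly from centroid-matched labels. A minimal sketch, assuming one predicted and one ground-truth class label per detected nucleus:

```python
import numpy as np

def ppv_and_prevalence(y_true, y_pred, cls):
    """Positive predictive value and prevalence for one cell class,
    given centroid-matched nucleus labels (one entry per cell)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    predicted = y_pred == cls
    # PPV: fraction of cells predicted as `cls` that truly are `cls`.
    ppv = (y_true[predicted] == cls).mean() if predicted.any() else np.nan
    # Prevalence: fraction of all cells that truly are `cls`.
    prevalence = (y_true == cls).mean()
    return ppv, prevalence
```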
Conclusions: This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E.
{"title":"Data-driven nucleus subclassification on colon hematoxylin and eosin using style-transferred digital pathology.","authors":"Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Nancy R Newlin, Adam M Saunders, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman","doi":"10.1117/1.JMI.11.6.067501","DOIUrl":"10.1117/1.JMI.11.6.067501","url":null,"abstract":"<p><strong>Purpose: </strong>Cells are building blocks for human physiology; consequently, understanding the way cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions in both health and disease. Hematoxylin and eosin (H&E) is the standard stain used in histological analysis of tissues in both clinical and research settings. Although H&E is ubiquitous and reveals tissue microanatomy, the classification and mapping of cell subtypes often require the use of specialized stains. The recent CoNIC Challenge focused on artificial intelligence classification of six types of cells on colon H&E but was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), and connective subtypes (fibroblasts). We propose to use inter-modality learning to label previously un-labelable cell types on H&E.</p><p><strong>Approach: </strong>We took advantage of the cell classification information inherent in multiplexed immunofluorescence (MxIF) histology to create cell-level annotations for 14 subclasses. Then, we performed style transfer on the MxIF to synthesize realistic virtual H&E. We assessed the efficacy of a supervised learning scheme using the virtual H&E and 14 subclass labels. We evaluated our model on virtual H&E and real H&E.</p><p><strong>Results: </strong>On virtual H&E, we were able to classify helper T cells and epithelial progenitors with positive predictive values of <math><mrow><mn>0.34</mn> <mo>±</mo> <mn>0.15</mn></mrow> </math> (prevalence <math><mrow><mn>0.03</mn> <mo>±</mo> <mn>0.01</mn></mrow> </math> ) and <math><mrow><mn>0.47</mn> <mo>±</mo> <mn>0.1</mn></mrow> </math> (prevalence <math><mrow><mn>0.07</mn> <mo>±</mo> <mn>0.02</mn></mrow> </math> ), respectively, when using ground truth centroid information. On real H&E, we needed to compute bounded metrics instead of direct metrics because our fine-grained virtual H&E predicted classes had to be matched to the closest available parent classes in the coarser labels from the real H&E dataset. 
For the real H&E, we could classify bounded metrics for the helper T cells and epithelial progenitors with upper bound positive predictive values of <math><mrow><mn>0.43</mn> <mo>±</mo> <mn>0.03</mn></mrow> </math> (parent class prevalence 0.21) and <math><mrow><mn>0.94</mn> <mo>±</mo> <mn>0.02</mn></mrow> </math> (parent class prevalence 0.49) when using ground truth centroid information.</p><p><strong>Conclusions: </strong>This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"067501"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11537205/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142591962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-11-06 | DOI: 10.1117/1.JMI.11.6.064001
Yihao Liu, Junyu Chen, Lianrui Zuo, Aaron Carass, Jerry L Prince
Purpose: Deformable image registration establishes non-linear spatial correspondences between fixed and moving images. Deep learning-based deformable registration methods have been widely studied in recent years due to their speed advantage over traditional algorithms as well as their better accuracy. Most existing deep learning-based methods require neural networks to encode location information in their feature maps and predict displacement or deformation fields through convolutional or fully connected layers from these high-dimensional feature maps. We present vector field attention (VFA), a novel framework that enhances the efficiency of the existing network design by enabling direct retrieval of location correspondences.
Approach: VFA uses neural networks to extract multi-resolution feature maps from the fixed and moving images and then retrieves pixel-level correspondences based on feature similarity. The retrieval is achieved with a novel attention module without the need for learnable parameters. VFA is trained end-to-end in either a supervised or unsupervised manner.
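The core idea, attention with a fixed (non-learnable) value matrix of candidate displacements, can be sketched as follows. This local-window variant is a simplified illustration of similarity-based correspondence retrieval, not the authors' exact multi-resolution design.

```python
import torch
import torch.nn.functional as F

def vector_field_attention(feat_fixed, feat_moving, radius=3):
    """Parameter-free attention: for each fixed-image pixel, match its
    feature vector against moving-image features in a local window and
    return the softmax-weighted average of the candidate displacements.

    feat_fixed, feat_moving: (B, C, H, W) feature maps.
    Returns a (B, 2, H, W) displacement field (dy, dx in pixels).
    """
    B, C, H, W = feat_fixed.shape
    k = 2 * radius + 1
    # Moving-image features at every candidate displacement in the window.
    patches = F.unfold(feat_moving, k, padding=radius)  # (B, C*k*k, H*W)
    patches = patches.view(B, C, k * k, H * W)
    q = feat_fixed.view(B, C, 1, H * W)
    attn = F.softmax((q * patches).sum(dim=1) / C ** 0.5, dim=1)  # (B, k*k, H*W)
    # Fixed (non-learnable) value matrix: the displacement of each window slot.
    dy, dx = torch.meshgrid(
        torch.arange(-radius, radius + 1),
        torch.arange(-radius, radius + 1), indexing="ij")
    disp = torch.stack([dy.flatten(), dx.flatten()]).to(attn)  # (2, k*k)
    flow = torch.einsum("dk,bkn->bdn", disp, attn)  # expected displacement
    return flow.view(B, 2, H, W)
```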
Results: We evaluated VFA for intra- and inter-modality registration and unsupervised and semi-supervised registration using public datasets as well as the Learn2Reg challenge. VFA demonstrated comparable or superior registration accuracy compared with several state-of-the-art methods.
Conclusions: VFA offers a novel approach to deformable image registration by directly retrieving spatial correspondences from feature maps, leading to improved performance in registration tasks. It holds potential for broader applications.
{"title":"Vector field attention for deformable image registration.","authors":"Yihao Liu, Junyu Chen, Lianrui Zuo, Aaron Carass, Jerry L Prince","doi":"10.1117/1.JMI.11.6.064001","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064001","url":null,"abstract":"<p><strong>Purpose: </strong>Deformable image registration establishes non-linear spatial correspondences between fixed and moving images. Deep learning-based deformable registration methods have been widely studied in recent years due to their speed advantage over traditional algorithms as well as their better accuracy. Most existing deep learning-based methods require neural networks to encode location information in their feature maps and predict displacement or deformation fields through convolutional or fully connected layers from these high-dimensional feature maps. We present vector field attention (VFA), a novel framework that enhances the efficiency of the existing network design by enabling direct retrieval of location correspondences.</p><p><strong>Approach: </strong>VFA uses neural networks to extract multi-resolution feature maps from the fixed and moving images and then retrieves pixel-level correspondences based on feature similarity. The retrieval is achieved with a novel attention module without the need for learnable parameters. VFA is trained end-to-end in either a supervised or unsupervised manner.</p><p><strong>Results: </strong>We evaluated VFA for intra- and inter-modality registration and unsupervised and semi-supervised registration using public datasets as well as the Learn2Reg challenge. VFA demonstrated comparable or superior registration accuracy compared with several state-of-the-art methods.</p><p><strong>Conclusions: </strong>VFA offers a novel approach to deformable image registration by directly retrieving spatial correspondences from feature maps, leading to improved performance in registration tasks. It holds potential for broader applications.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064001"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540117/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142606811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-12-05 | DOI: 10.1117/1.JMI.11.6.062607
Taylor Kantor, Prashant Mahajan, Sarah Murthi, Candice Stegink, Barbara Brawn, Amitabh Varshney, Rishindra M Reddy
Purpose: eXtended Reality (XR) technology, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), is a growing field in healthcare. Each modality offers unique benefits and drawbacks for medical education, simulation, and clinical care. We review current studies to understand how XR technology uses medical imaging to enhance surgical diagnostics, planning, and performance. We also highlight current limitations and future directions.
Approach: We reviewed the literature on immersive XR technologies for surgical planning and intraoperative augmentation, excluding studies on telemedicine and 2D video-based training. We cited publications highlighting XR's advantages and limitations in these categories.
Results: A review of 556 papers on XR for medical imaging in surgery yielded 155 relevant papers, which were reviewed with the aid of ChatGPT. XR technology may improve procedural times, reduce errors, and enhance surgical workflows. It aids in preoperative planning, surgical navigation, and real-time data integration, improving surgeon ergonomics and enabling remote collaboration. However, adoption faces challenges such as high costs, infrastructure needs, and regulatory hurdles. Despite these, XR shows significant potential in advancing surgical care.
Conclusions: Immersive technologies in healthcare enhance visualization and understanding of medical conditions, promising better patient outcomes and innovative treatments but face adoption challenges such as cost, technological constraints, and regulatory hurdles. Addressing these requires strategic collaborations and improvements in image quality, hardware, integration, and training.
{"title":"Role of eXtended Reality use in medical imaging interpretation for pre-surgical planning and intraoperative augmentation.","authors":"Taylor Kantor, Prashant Mahajan, Sarah Murthi, Candice Stegink, Barbara Brawn, Amitabh Varshney, Rishindra M Reddy","doi":"10.1117/1.JMI.11.6.062607","DOIUrl":"10.1117/1.JMI.11.6.062607","url":null,"abstract":"<p><strong>Purpose: </strong>eXtended Reality (XR) technology, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), is a growing field in healthcare. Each modality offers unique benefits and drawbacks for medical education, simulation, and clinical care. We review current studies to understand how XR technology uses medical imaging to enhance surgical diagnostics, planning, and performance. We also highlight current limitations and future directions.</p><p><strong>Approach: </strong>We reviewed the literature on immersive XR technologies for surgical planning and intraoperative augmentation, excluding studies on telemedicine and 2D video-based training. We cited publications highlighting XR's advantages and limitations in these categories.</p><p><strong>Results: </strong>A review of 556 papers on XR for medical imaging in surgery yielded 155 relevant papers reviewed utilizing the aid of chatGPT. XR technology may improve procedural times, reduce errors, and enhance surgical workflows. It aids in preoperative planning, surgical navigation, and real-time data integration, improving surgeon ergonomics and enabling remote collaboration. However, adoption faces challenges such as high costs, infrastructure needs, and regulatory hurdles. Despite these, XR shows significant potential in advancing surgical care.</p><p><strong>Conclusions: </strong>Immersive technologies in healthcare enhance visualization and understanding of medical conditions, promising better patient outcomes and innovative treatments but face adoption challenges such as cost, technological constraints, and regulatory hurdles. Addressing these requires strategic collaborations and improvements in image quality, hardware, integration, and training.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"062607"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11618384/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142802703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-12-03 | DOI: 10.1117/1.JMI.11.6.065502
John W Garrett, Kelly Capel, Laura Eisenmenger, Azam Ahmed, David Niemann, Yinsheng Li, Ke Li, Dalton Griner, Sebastian Schafer, Charles Strother, Guang-Hong Chen, Beverly Aagaard-Kienitz
Purpose: Integrating physiological imaging into the angiography suite aims to reduce the critical time between stroke onset and treatment, potentially improving clinical outcomes. We compared C-Arm cone beam CT perfusion (CBCTP) with multi-detector CT perfusion (MDCTP) in patients with acute ischemic stroke (AIS).
Approach: Thirty-nine patients with anterior circulation AIS underwent both MDCTP and CBCTP. Imaging results were compared using an in-house algorithm for CBCTP map generation and RAPID for post-processing. Blinded neuroradiologists assessed images for quality, diagnostic utility, and treatment decision support, with non-inferiority analysis (two one-sided tests for equivalence) and inter-reviewer consistency (Cohen's kappa).
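For readers unfamiliar with the statistics used here, a minimal sketch of a paired two one-sided test (TOST) for equivalence and the inter-reader agreement computation follows; the equivalence margin and data layout are assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

def tost_paired(scores_a, scores_b, margin):
    """Two one-sided tests (TOST) on paired reader scores.

    Equivalence within +/- margin is declared when the larger of the
    two one-sided p-values falls below alpha (e.g., 0.05)."""
    d = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    se = d.std(ddof=1) / np.sqrt(len(d))
    df = len(d) - 1
    p_lower = 1 - stats.t.cdf((d.mean() + margin) / se, df)  # H0: diff <= -margin
    p_upper = stats.t.cdf((d.mean() - margin) / se, df)      # H0: diff >= +margin
    return max(p_lower, p_upper)

# Inter-reader consistency on categorical image ratings:
# kappa = cohen_kappa_score(reader1_labels, reader2_labels)
```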
Results: The mean time from MDCTP to angiography suite arrival was 50 ± 34 min, and that from arrival to the first CBCTP image was 21 ± 8 min. Stroke diagnosis accuracies were 96% [93%, 97%] with MDCTP and 91% [90%, 93%] with CBCTP. Cohen's kappa between observers was 0.86 for MDCTP and 0.90 for CBCTP, showing excellent inter-reader consistency. CBCTP's scores for diagnostic utility, mismatch pattern detection, and treatment decisions were noninferior to MDCTP scores (alpha = 0.05) within 20% of the range. MDCTP was slightly superior for image quality and artifact score (1.8 versus 2.3, p < 0.01).
Conclusions: In this small study, CBCTP was noninferior to MDCTP, potentially saving nearly an hour per patient if they went directly to the angiography suite upon hospital arrival.
{"title":"Comparison of sequential multi-detector CT and cone-beam CT perfusion maps in 39 subjects with anterior circulation acute ischemic stroke due to a large vessel occlusion.","authors":"John W Garrett, Kelly Capel, Laura Eisenmenger, Azam Ahmed, David Niemann, Yinsheng Li, Ke Li, Dalton Griner, Sebastian Schafer, Charles Strother, Guang-Hong Chen, Beverly Aagaard-Kienitz","doi":"10.1117/1.JMI.11.6.065502","DOIUrl":"10.1117/1.JMI.11.6.065502","url":null,"abstract":"<p><strong>Purpose: </strong>The critical time between stroke onset and treatment was targeted for reduction by integrating physiological imaging into the angiography suite, potentially improving clinical outcomes. The evaluation was conducted to compare C-Arm cone beam CT perfusion (CBCTP) with multi-detector CT perfusion (MDCTP) in patients with acute ischemic stroke (AIS).</p><p><strong>Approach: </strong>Thirty-nine patients with anterior circulation AIS underwent both MDCTP and CBCTP. Imaging results were compared using an in-house algorithm for CBCTP map generation and RAPID for post-processing. Blinded neuroradiologists assessed images for quality, diagnostic utility, and treatment decision support, with non-inferiority analysis (two one-sided tests for equivalence) and inter-reviewer consistency (Cohen's kappa).</p><p><strong>Results: </strong>The mean time from MDCTP to angiography suite arrival was <math><mrow><mn>50</mn> <mo>±</mo> <mn>34</mn> <mtext> </mtext> <mi>min</mi></mrow> </math> , and that from arrival to the first CBCTP image was <math><mrow><mn>21</mn> <mo>±</mo> <mn>8</mn> <mtext> </mtext> <mi>min</mi></mrow> </math> . Stroke diagnosis accuracies were 96% [93%, 97%] with MDCTP and 91% [90%, 93%] with CBCTP. Cohen's kappa between observers was 0.86 for MDCTP and 0.90 for CBCTP, showing excellent inter-reader consistency. CBCTP's scores for diagnostic utility, mismatch pattern detection, and treatment decisions were noninferior to MDCTP scores (alpha = 0.05) within 20% of the range. MDCTP was slightly superior for image quality and artifact score (1.8 versus 2.3, <math><mrow><mi>p</mi> <mo><</mo> <mn>0.01</mn></mrow> </math> ).</p><p><strong>Conclusions: </strong>In this small paper, CBCTP was noninferior to MDCTP, potentially saving nearly an hour per patient if they went directly to the angiography suite upon hospital arrival.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"065502"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11614149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142781068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-12-11 | DOI: 10.1117/1.JMI.11.6.064501
Mena Shenouda, Abbas Shaikh, Ilana Deutsch, Owen Mitchell, Hedy L Kindler, Samuel G Armato
Purpose: The BRCA1-associated protein 1 (BAP1) gene is of great interest because somatic BAP1 mutations are the most common alteration associated with pleural mesothelioma (PM). Further, germline mutation of the BAP1 gene has been linked to the development of PM. This study aimed to explore the potential of radiomics on computed tomography scans to identify somatic BAP1 gene mutations and assess the feasibility of radiomics in future research in identifying germline mutations.
Approach: A cohort of 149 patients with PM and known somatic BAP1 mutation status was collected, and a previously published deep learning model was used to first automatically segment the tumor, followed by radiologist modifications. Image preprocessing was performed, and texture features were extracted from the segmented tumor regions. The top features were selected and used to train 18 separate machine learning models using leave-one-out cross-validation (LOOCV). The performance of the models in distinguishing between BAP1-mutated (BAP1+) and BAP1 wild-type (BAP1-) tumors was evaluated using the receiver operating characteristic area under the curve (ROC AUC).
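The evaluation protocol (leave-one-out cross-validation with a decision tree, scored by pooled ROC AUC) can be sketched with scikit-learn as follows; the feature matrix is assumed to already hold the selected texture features.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

def loocv_auc(X, y):
    """Leave-one-out cross-validation: hold out each tumor once, train
    on the rest, and pool the held-out scores into a single ROC AUC.

    X: (n_patients, n_features) selected texture features.
    y: binary labels (1 = BAP1-mutated, 0 = wild type).
    """
    scores = np.zeros(len(y), dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = DecisionTreeClassifier(random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
    return roc_auc_score(y, scores)
```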
Results: A decision tree classifier achieved the highest overall AUC value of 0.69 (95% confidence interval: 0.60 to 0.77). The features selected most frequently through the LOOCV were all second-order (gray-level co-occurrence or gray-level size zone matrices) and were extracted from images with an applied transformation.
Conclusions: This proof-of-concept work demonstrated the potential of radiomics to differentiate among BAP1+/- in patients with PM. Future work will extend these methods to the assessment of germline BAP1 mutation status through image analysis for improved patient prognostication.
{"title":"Radiomics for differentiation of somatic <i>BAP1</i> mutation on CT scans of patients with pleural mesothelioma.","authors":"Mena Shenouda, Abbas Shaikh, Ilana Deutsch, Owen Mitchell, Hedy L Kindler, Samuel G Armato","doi":"10.1117/1.JMI.11.6.064501","DOIUrl":"10.1117/1.JMI.11.6.064501","url":null,"abstract":"<p><strong>Purpose: </strong>The BRCA1-associated protein 1 (<i>BAP1</i>) gene is of great interest because somatic (<i>BAP1</i>) mutations are the most common alteration associated with pleural mesothelioma (PM). Further, germline mutation of the <i>BAP1</i> gene has been linked to the development of PM. This study aimed to explore the potential of radiomics on computed tomography scans to identify somatic <i>BAP1</i> gene mutations and assess the feasibility of radiomics in future research in identifying germline mutations.</p><p><strong>Approach: </strong>A cohort of 149 patients with PM and known somatic <i>BAP1</i> mutation status was collected, and a previously published deep learning model was used to first automatically segment the tumor, followed by radiologist modifications. Image preprocessing was performed, and texture features were extracted from the segmented tumor regions. The top features were selected and used to train 18 separate machine learning models using leave-one-out cross-validation (LOOCV). The performance of the models in distinguishing between <i>BAP1</i>-mutated (<i>BAP1+</i>) and <i>BAP1</i> wild-type (<i>BAP1-</i>) tumors was evaluated using the receiver operating characteristic area under the curve (ROC AUC).</p><p><strong>Results: </strong>A decision tree classifier achieved the highest overall AUC value of 0.69 (95% confidence interval: 0.60 and 0.77). The features selected most frequently through the LOOCV were all second-order (gray-level co-occurrence or gray-level size zone matrices) and were extracted from images with an applied transformation.</p><p><strong>Conclusions: </strong>This proof-of-concept work demonstrated the potential of radiomics to differentiate among <i>BAP1+/-</i> in patients with PM. Future work will extend these methods to the assessment of germline <i>BAP1</i> mutation status through image analysis for improved patient prognostication.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064501"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633667/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-12-13 | DOI: 10.1117/1.JMI.11.6.069801
Xander Jacquemyn, Kobe Bamps, Ruben Moermans, Christophe Dubois, Filip Rega, Peter Verbrugghe, Barbara Weyn, Steven Dymarkowski, Werner Budts, Alexander Van De Bruaene
[This corrects the article DOI: 10.1117/1.JMI.11.6.062606.].
{"title":"Erratum: Publisher's Note: Augmented and virtual reality imaging for collaborative planning of structural cardiovascular interventions: a proof-of-concept and validation study.","authors":"Xander Jacquemyn, Kobe Bamps, Ruben Moermans, Christophe Dubois, Filip Rega, Peter Verbrugghe, Barbara Weyn, Steven Dymarkowski, Werner Budts, Alexander Van De Bruaene","doi":"10.1117/1.JMI.11.6.069801","DOIUrl":"10.1117/1.JMI.11.6.069801","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.1117/1.JMI.11.6.062606.].</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"069801"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638976/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142830514","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2024-12-10 | DOI: 10.1117/1.JMI.11.6.067502
Madeleine S Torcasso, Junting Ai, Gabriel Casella, Thao Cao, Anthony Chang, Ariel Halper-Stromberg, Bana Jabri, Marcus R Clark, Maryellen L Giger
Purpose: The rapid development of highly multiplexed microscopy has enabled the study of cells embedded within their native tissue. The rich spatial data provided by these techniques have yielded exciting insights into the spatial features of human disease. However, computational methods for analyzing these high-content images are still emerging; there is a need for more robust and generalizable tools for evaluating the cellular constituents and stroma captured by high-plex imaging. To address this need, we have adapted spectral angle mapping, an algorithm developed for hyperspectral image analysis, to compress the channel dimension of high-plex immunofluorescence (IF) images.
Approach: Here, we present pseudo-spectral angle mapping (pSAM), a robust and flexible method for determining the most likely class of each pixel in a high-plex image. The class maps calculated through pSAM yield pixel classifications which can be combined with instance segmentation algorithms to classify individual cells.
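At its core, pSAM assigns each pixel the class whose reference pseudo-spectrum subtends the smallest angle with the pixel's channel vector. A minimal sketch, assuming per-class reference vectors are available:

```python
import numpy as np

def psam_classify(image, class_spectra):
    """Pseudo-spectral angle mapping: assign each pixel to the class
    whose reference pseudo-spectrum makes the smallest angle with the
    pixel's channel vector.

    image: (H, W, C) multiplexed IF image (C stain channels).
    class_spectra: (K, C) one reference vector per class.
    Returns (H, W) class indices and (H, W, K) angle maps.
    """
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    refs = np.asarray(class_spectra, float)
    # cos(theta) = <p, r> / (|p| |r|), for every pixel-class pair.
    dots = pixels @ refs.T
    norms = (np.linalg.norm(pixels, axis=1, keepdims=True)
             * np.linalg.norm(refs, axis=1))
    angles = np.arccos(np.clip(dots / np.maximum(norms, 1e-12), -1.0, 1.0))
    labels = angles.argmin(axis=1)  # most likely class per pixel
    H, W = image.shape[:2]
    return labels.reshape(H, W), angles.reshape(H, W, -1)
```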
Results: In a dataset of colon biopsies imaged with a 13-plex staining panel, 16 pSAM class maps were computed to generate pixel classifications. Instance segmentations of cells with Cellpose2.0 (F1-score of 0.83 ± 0.13) were combined with these class maps to provide cell class predictions for 13 cell classes. In addition, in a separate unseen dataset of kidney biopsies imaged with a 44-plex staining panel, pSAM plus Cellpose2.0 (F1-score of 0.86 ± 0.11) detected a diverse set of 38 classes of structural and immune cells.
Conclusions: In summary, pSAM is a powerful and generalizable tool for evaluating high-plex IF image data and classifying cells in these high-dimensional images.
{"title":"Pseudo-spectral angle mapping for pixel and cell classification in highly multiplexed immunofluorescence images.","authors":"Madeleine S Torcasso, Junting Ai, Gabriel Casella, Thao Cao, Anthony Chang, Ariel Halper-Stromberg, Bana Jabri, Marcus R Clark, Maryellen L Giger","doi":"10.1117/1.JMI.11.6.067502","DOIUrl":"10.1117/1.JMI.11.6.067502","url":null,"abstract":"<p><strong>Purpose: </strong>The rapid development of highly multiplexed microscopy has enabled the study of cells embedded within their native tissue. The rich spatial data provided by these techniques have yielded exciting insights into the spatial features of human disease. However, computational methods for analyzing these high-content images are still emerging; there is a need for more robust and generalizable tools for evaluating the cellular constituents and stroma captured by high-plex imaging. To address this need, we have adapted spectral angle mapping-an algorithm developed for hyperspectral image analysis-to compress the channel dimension of high-plex immunofluorescence (IF) images.</p><p><strong>Approach: </strong>Here, we present pseudo-spectral angle mapping (pSAM), a robust and flexible method for determining the most likely class of each pixel in a high-plex image. The class maps calculated through pSAM yield pixel classifications which can be combined with instance segmentation algorithms to classify individual cells.</p><p><strong>Results: </strong>In a dataset of colon biopsies imaged with a 13-plex staining panel, 16 pSAM class maps were computed to generate pixel classifications. Instance segmentations of cells with Cellpose2.0 ( <math><mrow><mi>F</mi> <mn>1</mn></mrow> </math> -score of <math><mrow><mn>0.83</mn> <mo>±</mo> <mn>0.13</mn></mrow> </math> ) were combined with these class maps to provide cell class predictions for 13 cell classes. In addition, in a separate unseen dataset of kidney biopsies imaged with a 44-plex staining panel, pSAM plus Cellpose2.0 ( <math><mrow><mi>F</mi> <mn>1</mn></mrow> </math> -score of <math><mrow><mn>0.86</mn> <mo>±</mo> <mn>0.11</mn></mrow> </math> ) detected a diverse set of 38 classes of structural and immune cells.</p><p><strong>Conclusions: </strong>In summary, pSAM is a powerful and generalizable tool for evaluating high-plex IF image data and classifying cells in these high-dimensional images.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"067502"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629784/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}