
Latest publications in Journal of Medical Imaging

Contrast-to-noise ratio comparison between X-ray fluorescence emission tomography and computed tomography.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-12-01 Epub Date: 2024-10-15 DOI: 10.1117/1.JMI.11.S1.S12808
Hadley DeBrosse, Giavanna Jadick, Ling Jian Meng, Patrick La Rivière

Purpose: We provide a comparison of X-ray fluorescence emission tomography (XFET) and computed tomography (CT) for detecting low concentrations of gold nanoparticles (GNPs) in soft tissue and characterize the conditions under which XFET outperforms energy-integrating CT (EICT) and photon-counting CT (PCCT).

Approach: We compared dose-matched Monte Carlo XFET simulations and analytical fan-beam EICT and PCCT simulations. Each modality was used to image a numerical mouse phantom and contrast-depth phantom containing GNPs ranging from 0.05% to 4% by weight in soft tissue. Contrast-to-noise ratios (CNRs) of gold regions were compared among the three modalities, and XFET's detection limit was quantified based on the Rose criterion. A partial field-of-view (FOV) image was acquired for the phantom region containing 0.05% GNPs.
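The two figures of merit above can be made concrete with a short sketch. The function names and the exact Rose threshold are our illustrative choices (the Rose criterion is conventionally quoted as CNR of roughly 3 to 5), not values taken from the paper:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: contrast between ROI means,
    normalized by the background noise standard deviation."""
    signal_roi = np.asarray(signal_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

def is_detectable(cnr_value, rose_threshold=5.0):
    """Rose criterion: an object is considered reliably detectable
    when its CNR exceeds a threshold, conventionally ~5."""
    return cnr_value >= rose_threshold
```

With dose-matched images from each modality, comparing `cnr()` over the same gold region gives the CNR comparison the abstract describes.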

Results: For the mouse phantom, XFET produced superior CNR values (CNRs = 24.5, 21.6, and 3.4) compared with CT images obtained with both energy-integrating (CNR = 4.4, 4.6, and 1.5) and photon-counting (CNR = 6.5, 7.7, and 2.0) detection systems. More generally, XFET outperformed CT for superficial imaging depths (<28.75 mm) for gold concentrations at and above 0.5%. XFET's surface detection limit was quantified as 0.44% for an average phantom dose of 16 mGy, compatible with in vivo imaging. XFET's ability to image partial FOVs was demonstrated, and 0.05% gold was easily detected with an estimated dose of ∼81.6 cGy to a localized region of interest.

Conclusions: We demonstrate proof of XFET's benefit for imaging low concentrations of gold at superficial depths and show the feasibility of XFET for in vivo metal mapping in preclinical imaging tasks.

Iterative clustering material decomposition aided by empirical spectral correction for photon counting detectors in micro-CT.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-12-01 Epub Date: 2024-12-27 DOI: 10.1117/1.JMI.11.S1.S12810
J Carlos Rodriguez Luna, Mini Das
Purpose: Photon counting detectors offer promising advancements in computed tomography (CT) imaging by enabling the quantification and three-dimensional imaging of contrast agents and tissue types through simultaneous multi-energy projections from broad X-ray spectra. However, the accuracy of these decomposition methods hinges on precise composite spectral attenuation values that one must reconstruct from spectral micro-CT. Errors in such estimations could be due to effects such as beam hardening, object scatter, or detector sensor-related spectral distortions such as fluorescence. Even if accurate spectral correction is done, multi-material separation within a volume remains a challenge. Increasing the number of energy bins in material decomposition problems often comes with a significant noise penalty but with minimal decomposition benefits.

Approach: We begin with an empirical spectral correction method executed in the tomographic domain that accounts for distortions in estimated spectral attenuation for each voxel. This is followed by our proposed iterative clustering material decomposition (ICMD), in which clustering of voxels is used to reduce the number of basis materials to be resolved for each cluster. Using a larger number of energy bins for the clustering step shows distinct advantages, yielding excellent classification into a larger number of clusters with accurate cluster centers when compared with National Institute of Standards and Technology attenuation values. The decomposition step is applied to each cluster separately, where each cluster has fewer basis materials compared with the entire volume. This is shown to reduce the number of energy bins required in each per-cluster decomposition step. This approach significantly increases the total number of materials that can be decomposed within the volume with high accuracy and with excellent noise properties.

Results: Utilizing a Medipix detector (1-mm-thick cadmium telluride sensor) with a 55-μm pitch, we demonstrate the quantitatively accurate decomposition of several materials in a phantom study, where the sample includes mixtures of soft materials such as water and poly-methyl methacrylate along with contrast-enhancing materials. We show improved accuracy and lower noise when all five energy bins were used to yield effective classification of voxels into multiple accurate fundamental clusters, followed by the decomposition step applied to each cluster using just two energy bins. We also show an example of biological sample imaging, separating three distinct tissue types in mice: muscle, fat, and bone. Our experimental results show that the combination of effective and practical spectral correction and high-dimensional data clustering enhances decomposition accuracy and reduces noise in micro-CT.

Conclusions: The proposed ICMD can quantitatively separate multiple materials, including mixtures, and can effectively separate multiple contrast agents.
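The cluster-then-decompose idea can be sketched in a minimal form. This is our illustration only, with plain k-means and least squares standing in for the authors' clustering and decomposition steps:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means with farthest-point initialization, standing in
    for the step that groups voxels by their multi-energy (many-bin)
    attenuation vectors."""
    X = np.asarray(X, dtype=float)
    centers = [X[0]]
    for _ in range(1, k):
        d = ((X[:, None, :] - np.array(centers)[None]) ** 2).sum(-1).min(axis=1)
        centers.append(X[np.argmax(d)])  # next center: farthest point so far
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def decompose_cluster(mu_voxels, mu_basis):
    """Per-cluster decomposition: least-squares fit of each voxel's
    attenuation (over a few energy bins) to that cluster's reduced
    basis-material set, returning per-voxel material fractions."""
    coeffs, *_ = np.linalg.lstsq(np.asarray(mu_basis, float).T,
                                 np.asarray(mu_voxels, float).T, rcond=None)
    return coeffs.T
```

Because each cluster carries only a few basis materials, the per-cluster least-squares fit stays well posed with just two energy bins, which mirrors the bin-count reduction described above.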
Self-supervised learning for chest computed tomography: training strategies and effect on downstream applications.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-11-01 Epub Date: 2024-11-09 DOI: 10.1117/1.JMI.11.6.064003
Amara Tariq, Gokul Ramasamy, Bhavik Patel, Imon Banerjee

Purpose: Self-supervised pre-training can reduce the amount of labeled training data needed by pre-learning fundamental visual characteristics of the medical imaging data. We investigate several self-supervised training strategies for chest computed tomography exams and their effects on downstream applications.

Approach: We benchmark five well-known self-supervision strategies (masked image region prediction, next slice prediction, rotation prediction, flip prediction, and denoising) on 15 million chest computed tomography (CT) slices collected from four sites of the Mayo Clinic enterprise, United States. These models were evaluated for two downstream tasks on public datasets: pulmonary embolism (PE) detection (classification) and lung nodule segmentation. Image embeddings generated by these models were also evaluated for prediction of patient age, race, and gender to study inherent biases in models' understanding of chest CT exams.
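As one example of these pretext tasks, the data-preparation side of masked image region prediction can be sketched as below. This is an illustrative construction under our own assumptions (patch size, count, and function name), not the paper's pipeline:

```python
import numpy as np

def mask_random_regions(image, patch=8, n_patches=4, seed=0):
    """Build one (input, target, mask) triple for masked-region
    prediction: zero out random square patches of a 2D slice; the
    pretext task is to predict the original pixels under the mask."""
    rng = np.random.default_rng(seed)
    masked = image.copy().astype(float)
    mask = np.zeros(image.shape, dtype=bool)
    h, w = image.shape
    for _ in range(n_patches):
        y = rng.integers(0, h - patch + 1)  # top-left corner, in bounds
        x = rng.integers(0, w - patch + 1)
        mask[y:y + patch, x:x + patch] = True
    masked[mask] = 0.0
    return masked, image, mask  # network input, regression target, loss mask
```

A network pre-trained to reconstruct `target[mask]` from `masked` learns local anatomy without labels; its weights can then initialize the downstream classifiers and segmenters.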

Results: The use of pre-training weights, especially masked region prediction-based weights, improved performance and reduced the computational effort needed for downstream tasks compared with task-specific state-of-the-art (SOTA) models. Performance improvement for PE detection was observed for training dataset sizes as large as ∼380 K, with a maximum gain of 5% over SOTA. The segmentation model initialized with pre-training weights learned twice as fast as the randomly initialized model. While gender and age predictors built using self-supervised training weights showed no performance improvement over randomly initialized predictors, the race predictor experienced a 10% performance boost when using self-supervised training weights.

Conclusion: We released self-supervised models and weights under an open-source academic license. These models can then be fine-tuned with limited task-specific annotated data for a variety of downstream imaging tasks, thus accelerating research in biomedical imaging informatics.

Data-driven nucleus subclassification on colon hematoxylin and eosin using style-transferred digital pathology.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-11-01 Epub Date: 2024-11-05 DOI: 10.1117/1.JMI.11.6.067501
Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Nancy R Newlin, Adam M Saunders, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman

Purpose: Cells are building blocks for human physiology; consequently, understanding the way cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions in both health and disease. Hematoxylin and eosin (H&E) is the standard stain used in histological analysis of tissues in both clinical and research settings. Although H&E is ubiquitous and reveals tissue microanatomy, the classification and mapping of cell subtypes often require the use of specialized stains. The recent CoNIC Challenge focused on artificial intelligence classification of six types of cells on colon H&E but was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), and connective subtypes (fibroblasts). We propose to use inter-modality learning to label previously un-labelable cell types on H&E.

Approach: We took advantage of the cell classification information inherent in multiplexed immunofluorescence (MxIF) histology to create cell-level annotations for 14 subclasses. Then, we performed style transfer on the MxIF to synthesize realistic virtual H&E. We assessed the efficacy of a supervised learning scheme using the virtual H&E and 14 subclass labels. We evaluated our model on virtual H&E and real H&E.

Results: On virtual H&E, we were able to classify helper T cells and epithelial progenitors with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.1 (prevalence 0.07 ± 0.02), respectively, when using ground truth centroid information. On real H&E, we needed to compute bounded metrics instead of direct metrics because our fine-grained virtual H&E predicted classes had to be matched to the closest available parent classes in the coarser labels from the real H&E dataset. For the real H&E, we could classify bounded metrics for the helper T cells and epithelial progenitors with upper bound positive predictive values of 0.43 ± 0.03 (parent class prevalence 0.21) and 0.94 ± 0.02 (parent class prevalence 0.49) when using ground truth centroid information.
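Our reading of the bounded-metric idea, counting a fine-grained prediction as possibly correct whenever its parent class matches the coarse ground-truth label, can be sketched as follows (the function, mapping, and class names are hypothetical):

```python
def bounded_ppv(pred_subclasses, true_parents, parent_of):
    """Upper-bound positive predictive value when predictions are
    fine-grained subclasses but ground truth carries only coarser
    parent labels: a prediction counts as (possibly) correct when
    its parent class matches the true parent class."""
    hits = sum(parent_of[p] == t for p, t in zip(pred_subclasses, true_parents))
    return hits / len(pred_subclasses)
```

Because several subclasses collapse into one parent, this matching can only overcount true positives, which is why the resulting PPV is an upper bound.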

Conclusions: This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E.

Vector field attention for deformable image registration.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-11-01 Epub Date: 2024-11-06 DOI: 10.1117/1.JMI.11.6.064001
Yihao Liu, Junyu Chen, Lianrui Zuo, Aaron Carass, Jerry L Prince

Purpose: Deformable image registration establishes non-linear spatial correspondences between fixed and moving images. Deep learning-based deformable registration methods have been widely studied in recent years due to their speed advantage over traditional algorithms as well as their better accuracy. Most existing deep learning-based methods require neural networks to encode location information in their feature maps and predict displacement or deformation fields through convolutional or fully connected layers from these high-dimensional feature maps. We present vector field attention (VFA), a novel framework that enhances the efficiency of the existing network design by enabling direct retrieval of location correspondences.

Approach: VFA uses neural networks to extract multi-resolution feature maps from the fixed and moving images and then retrieves pixel-level correspondences based on feature similarity. The retrieval is achieved with a novel attention module without the need for learnable parameters. VFA is trained end-to-end in either a supervised or unsupervised manner.
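The parameter-free retrieval step, a displacement recovered as a similarity-weighted average over candidate offsets, can be sketched in a simplified one-dimensional form. This is our illustration of the idea, not the VFA implementation:

```python
import numpy as np

def retrieve_displacement(fixed_feat, moving_feats, candidates):
    """Attention without learnable parameters (1D simplification):
    dot-product similarities between the fixed-image feature and
    moving-image features sampled at K candidate offsets are
    softmax-normalized and used to average the offsets themselves.

    fixed_feat:   (C,)   feature at one fixed-image location
    moving_feats: (K, C) features at the K candidate offsets
    candidates:   (K,)   the candidate displacement values
    """
    scores = moving_feats @ fixed_feat  # similarity per candidate
    scores -= scores.max()              # numerical stability for softmax
    weights = np.exp(scores) / np.exp(scores).sum()
    return float(weights @ candidates)  # expected displacement
```

When one candidate's moving feature closely matches the fixed feature, the softmax concentrates on it and the retrieved displacement approaches that candidate's offset; no weights are learned in this module.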

Results: We evaluated VFA for intra- and inter-modality registration and unsupervised and semi-supervised registration using public datasets as well as the Learn2Reg challenge. VFA demonstrated comparable or superior registration accuracy compared with several state-of-the-art methods.

Conclusions: VFA offers a novel approach to deformable image registration by directly retrieving spatial correspondences from feature maps, leading to improved performance in registration tasks. It holds potential for broader applications.

{"title":"Vector field attention for deformable image registration.","authors":"Yihao Liu, Junyu Chen, Lianrui Zuo, Aaron Carass, Jerry L Prince","doi":"10.1117/1.JMI.11.6.064001","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064001","url":null,"abstract":"<p><strong>Purpose: </strong>Deformable image registration establishes non-linear spatial correspondences between fixed and moving images. Deep learning-based deformable registration methods have been widely studied in recent years due to their speed advantage over traditional algorithms as well as their better accuracy. Most existing deep learning-based methods require neural networks to encode location information in their feature maps and predict displacement or deformation fields through convolutional or fully connected layers from these high-dimensional feature maps. We present vector field attention (VFA), a novel framework that enhances the efficiency of the existing network design by enabling direct retrieval of location correspondences.</p><p><strong>Approach: </strong>VFA uses neural networks to extract multi-resolution feature maps from the fixed and moving images and then retrieves pixel-level correspondences based on feature similarity. The retrieval is achieved with a novel attention module without the need for learnable parameters. VFA is trained end-to-end in either a supervised or unsupervised manner.</p><p><strong>Results: </strong>We evaluated VFA for intra- and inter-modality registration and unsupervised and semi-supervised registration using public datasets as well as the Learn2Reg challenge. VFA demonstrated comparable or superior registration accuracy compared with several state-of-the-art methods.</p><p><strong>Conclusions: </strong>VFA offers a novel approach to deformable image registration by directly retrieving spatial correspondences from feature maps, leading to improved performance in registration tasks. 
It holds potential for broader applications.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064001"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540117/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142606811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Role of eXtended Reality use in medical imaging interpretation for pre-surgical planning and intraoperative augmentation. 扩展现实技术在术前计划和术中增强的医学影像解释中的作用。
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-11-01 Epub Date: 2024-12-05 DOI: 10.1117/1.JMI.11.6.062607
Taylor Kantor, Prashant Mahajan, Sarah Murthi, Candice Stegink, Barbara Brawn, Amitabh Varshney, Rishindra M Reddy

Purpose: eXtended Reality (XR) technology, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), is a growing field in healthcare. Each modality offers unique benefits and drawbacks for medical education, simulation, and clinical care. We review current studies to understand how XR technology uses medical imaging to enhance surgical diagnostics, planning, and performance. We also highlight current limitations and future directions.

Approach: We reviewed the literature on immersive XR technologies for surgical planning and intraoperative augmentation, excluding studies on telemedicine and 2D video-based training. We cited publications highlighting XR's advantages and limitations in these categories.

Results: A review of 556 papers on XR for medical imaging in surgery yielded 155 relevant papers, which were reviewed with the aid of ChatGPT. XR technology may improve procedural times, reduce errors, and enhance surgical workflows. It aids in preoperative planning, surgical navigation, and real-time data integration, improving surgeon ergonomics and enabling remote collaboration. However, adoption faces challenges such as high costs, infrastructure needs, and regulatory hurdles. Despite these, XR shows significant potential in advancing surgical care.

Conclusions: Immersive technologies in healthcare enhance visualization and understanding of medical conditions, promising better patient outcomes and innovative treatments but face adoption challenges such as cost, technological constraints, and regulatory hurdles. Addressing these requires strategic collaborations and improvements in image quality, hardware, integration, and training.

{"title":"Role of eXtended Reality use in medical imaging interpretation for pre-surgical planning and intraoperative augmentation.","authors":"Taylor Kantor, Prashant Mahajan, Sarah Murthi, Candice Stegink, Barbara Brawn, Amitabh Varshney, Rishindra M Reddy","doi":"10.1117/1.JMI.11.6.062607","DOIUrl":"10.1117/1.JMI.11.6.062607","url":null,"abstract":"<p><strong>Purpose: </strong>eXtended Reality (XR) technology, including virtual reality (VR), augmented reality (AR), and mixed reality (MR), is a growing field in healthcare. Each modality offers unique benefits and drawbacks for medical education, simulation, and clinical care. We review current studies to understand how XR technology uses medical imaging to enhance surgical diagnostics, planning, and performance. We also highlight current limitations and future directions.</p><p><strong>Approach: </strong>We reviewed the literature on immersive XR technologies for surgical planning and intraoperative augmentation, excluding studies on telemedicine and 2D video-based training. We cited publications highlighting XR's advantages and limitations in these categories.</p><p><strong>Results: </strong>A review of 556 papers on XR for medical imaging in surgery yielded 155 relevant papers reviewed utilizing the aid of chatGPT. XR technology may improve procedural times, reduce errors, and enhance surgical workflows. It aids in preoperative planning, surgical navigation, and real-time data integration, improving surgeon ergonomics and enabling remote collaboration. However, adoption faces challenges such as high costs, infrastructure needs, and regulatory hurdles. Despite these, XR shows significant potential in advancing surgical care.</p><p><strong>Conclusions: </strong>Immersive technologies in healthcare enhance visualization and understanding of medical conditions, promising better patient outcomes and innovative treatments but face adoption challenges such as cost, technological constraints, and regulatory hurdles. 
Addressing these requires strategic collaborations and improvements in image quality, hardware, integration, and training.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"062607"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11618384/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142802703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of sequential multi-detector CT and cone-beam CT perfusion maps in 39 subjects with anterior circulation acute ischemic stroke due to a large vessel occlusion. 39例大血管闭塞所致前循环急性缺血性脑卒中患者序列CT与锥束CT灌注图的比较
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-11-01 Epub Date: 2024-12-03 DOI: 10.1117/1.JMI.11.6.065502
John W Garrett, Kelly Capel, Laura Eisenmenger, Azam Ahmed, David Niemann, Yinsheng Li, Ke Li, Dalton Griner, Sebastian Schafer, Charles Strother, Guang-Hong Chen, Beverly Aagaard-Kienitz

Purpose: Integrating physiological imaging into the angiography suite targets the critical time between stroke onset and treatment for reduction, potentially improving clinical outcomes. This evaluation compared C-Arm cone beam CT perfusion (CBCTP) with multi-detector CT perfusion (MDCTP) in patients with acute ischemic stroke (AIS).

Approach: Thirty-nine patients with anterior circulation AIS underwent both MDCTP and CBCTP. Imaging results were compared using an in-house algorithm for CBCTP map generation and RAPID for post-processing. Blinded neuroradiologists assessed images for quality, diagnostic utility, and treatment decision support, with non-inferiority analysis (two one-sided tests for equivalence) and inter-reviewer consistency (Cohen's kappa).

Results: The mean time from MDCTP to angiography suite arrival was 50 ± 34 min, and that from arrival to the first CBCTP image was 21 ± 8 min. Stroke diagnosis accuracies were 96% [93%, 97%] with MDCTP and 91% [90%, 93%] with CBCTP. Cohen's kappa between observers was 0.86 for MDCTP and 0.90 for CBCTP, showing excellent inter-reader consistency. CBCTP's scores for diagnostic utility, mismatch pattern detection, and treatment decisions were noninferior to MDCTP scores (alpha = 0.05) within 20% of the range. MDCTP was slightly superior for image quality and artifact score (1.8 versus 2.3, p < 0.01).
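The inter-reader consistency reported here is Cohen's kappa, which discounts observed agreement by the agreement expected from each rater's marginal label frequencies. A minimal sketch follows; the function name is illustrative, not the study's analysis code.

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical scores on the same cases."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    # Observed agreement: fraction of cases where the raters gave the same label.
    p_o = np.mean(r1 == r2)
    # Chance agreement: product of each rater's marginal frequency, summed over labels.
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; values around 0.86 to 0.90, as reported above, indicate excellent consistency.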

Conclusions: In this small study, CBCTP was noninferior to MDCTP, potentially saving nearly an hour per patient for those taken directly to the angiography suite upon hospital arrival.

{"title":"Comparison of sequential multi-detector CT and cone-beam CT perfusion maps in 39 subjects with anterior circulation acute ischemic stroke due to a large vessel occlusion.","authors":"John W Garrett, Kelly Capel, Laura Eisenmenger, Azam Ahmed, David Niemann, Yinsheng Li, Ke Li, Dalton Griner, Sebastian Schafer, Charles Strother, Guang-Hong Chen, Beverly Aagaard-Kienitz","doi":"10.1117/1.JMI.11.6.065502","DOIUrl":"10.1117/1.JMI.11.6.065502","url":null,"abstract":"<p><strong>Purpose: </strong>The critical time between stroke onset and treatment was targeted for reduction by integrating physiological imaging into the angiography suite, potentially improving clinical outcomes. The evaluation was conducted to compare C-Arm cone beam CT perfusion (CBCTP) with multi-detector CT perfusion (MDCTP) in patients with acute ischemic stroke (AIS).</p><p><strong>Approach: </strong>Thirty-nine patients with anterior circulation AIS underwent both MDCTP and CBCTP. Imaging results were compared using an in-house algorithm for CBCTP map generation and RAPID for post-processing. Blinded neuroradiologists assessed images for quality, diagnostic utility, and treatment decision support, with non-inferiority analysis (two one-sided tests for equivalence) and inter-reviewer consistency (Cohen's kappa).</p><p><strong>Results: </strong>The mean time from MDCTP to angiography suite arrival was <math><mrow><mn>50</mn> <mo>±</mo> <mn>34</mn> <mtext>  </mtext> <mi>min</mi></mrow> </math> , and that from arrival to the first CBCTP image was <math><mrow><mn>21</mn> <mo>±</mo> <mn>8</mn> <mtext>  </mtext> <mi>min</mi></mrow> </math> . Stroke diagnosis accuracies were 96% [93%, 97%] with MDCTP and 91% [90%, 93%] with CBCTP. Cohen's kappa between observers was 0.86 for MDCTP and 0.90 for CBCTP, showing excellent inter-reader consistency. 
CBCTP's scores for diagnostic utility, mismatch pattern detection, and treatment decisions were noninferior to MDCTP scores (alpha = 0.05) within 20% of the range. MDCTP was slightly superior for image quality and artifact score (1.8 versus 2.3, <math><mrow><mi>p</mi> <mo><</mo> <mn>0.01</mn></mrow> </math> ).</p><p><strong>Conclusions: </strong>In this small paper, CBCTP was noninferior to MDCTP, potentially saving nearly an hour per patient if they went directly to the angiography suite upon hospital arrival.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"065502"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11614149/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142781068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Radiomics for differentiation of somatic BAP1 mutation on CT scans of patients with pleural mesothelioma. 放射组学用于区分胸膜间皮瘤患者 CT 扫描中的体细胞 BAP1 突变。
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-11-01 Epub Date: 2024-12-11 DOI: 10.1117/1.JMI.11.6.064501
Mena Shenouda, Abbas Shaikh, Ilana Deutsch, Owen Mitchell, Hedy L Kindler, Samuel G Armato

Purpose: The BRCA1-associated protein 1 (BAP1) gene is of great interest because somatic BAP1 mutations are the most common alteration associated with pleural mesothelioma (PM). Further, germline mutation of the BAP1 gene has been linked to the development of PM. This study aimed to explore the potential of radiomics on computed tomography scans to identify somatic BAP1 gene mutations and to assess the feasibility of using radiomics in future research to identify germline mutations.

Approach: A cohort of 149 patients with PM and known somatic BAP1 mutation status was collected, and a previously published deep learning model was used to first automatically segment the tumor, followed by radiologist modifications. Image preprocessing was performed, and texture features were extracted from the segmented tumor regions. The top features were selected and used to train 18 separate machine learning models using leave-one-out cross-validation (LOOCV). The performance of the models in distinguishing between BAP1-mutated (BAP1+) and BAP1 wild-type (BAP1-) tumors was evaluated using the receiver operating characteristic area under the curve (ROC AUC).
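The leave-one-out evaluation described above trains on all cases but one, scores the held-out case, and pools the held-out scores into a single ROC AUC. The sketch below uses scikit-learn with a decision tree, one of the 18 model families mentioned; the feature matrix, tree depth, and function name are illustrative assumptions, not the published pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut
from sklearn.tree import DecisionTreeClassifier

def loocv_auc(X, y, max_depth=3, seed=0):
    """LOOCV: one held-out probability per case, then a single pooled ROC AUC."""
    scores = np.empty(len(y), dtype=float)
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = DecisionTreeClassifier(max_depth=max_depth, random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        # Probability of the positive class (e.g., BAP1-mutated) for the held-out case.
        scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]
    return roc_auc_score(y, scores)
```

Pooling held-out scores before computing the AUC, rather than averaging per-fold AUCs, is the natural choice here because each fold contains only a single test case.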

Results: A decision tree classifier achieved the highest overall AUC value of 0.69 (95% confidence interval: 0.60 to 0.77). The features selected most frequently through the LOOCV were all second-order (gray-level co-occurrence or gray-level size zone matrices) and were extracted from images with an applied transformation.

Conclusions: This proof-of-concept work demonstrated the potential of radiomics to differentiate between BAP1+ and BAP1- tumors in patients with PM. Future work will extend these methods to the assessment of germline BAP1 mutation status through image analysis for improved patient prognostication.

{"title":"Radiomics for differentiation of somatic <i>BAP1</i> mutation on CT scans of patients with pleural mesothelioma.","authors":"Mena Shenouda, Abbas Shaikh, Ilana Deutsch, Owen Mitchell, Hedy L Kindler, Samuel G Armato","doi":"10.1117/1.JMI.11.6.064501","DOIUrl":"10.1117/1.JMI.11.6.064501","url":null,"abstract":"<p><strong>Purpose: </strong>The BRCA1-associated protein 1 (<i>BAP1</i>) gene is of great interest because somatic (<i>BAP1</i>) mutations are the most common alteration associated with pleural mesothelioma (PM). Further, germline mutation of the <i>BAP1</i> gene has been linked to the development of PM. This study aimed to explore the potential of radiomics on computed tomography scans to identify somatic <i>BAP1</i> gene mutations and assess the feasibility of radiomics in future research in identifying germline mutations.</p><p><strong>Approach: </strong>A cohort of 149 patients with PM and known somatic <i>BAP1</i> mutation status was collected, and a previously published deep learning model was used to first automatically segment the tumor, followed by radiologist modifications. Image preprocessing was performed, and texture features were extracted from the segmented tumor regions. The top features were selected and used to train 18 separate machine learning models using leave-one-out cross-validation (LOOCV). The performance of the models in distinguishing between <i>BAP1</i>-mutated (<i>BAP1+</i>) and <i>BAP1</i> wild-type (<i>BAP1-</i>) tumors was evaluated using the receiver operating characteristic area under the curve (ROC AUC).</p><p><strong>Results: </strong>A decision tree classifier achieved the highest overall AUC value of 0.69 (95% confidence interval: 0.60 and 0.77). 
The features selected most frequently through the LOOCV were all second-order (gray-level co-occurrence or gray-level size zone matrices) and were extracted from images with an applied transformation.</p><p><strong>Conclusions: </strong>This proof-of-concept work demonstrated the potential of radiomics to differentiate among <i>BAP1+/-</i> in patients with PM. Future work will extend these methods to the assessment of germline <i>BAP1</i> mutation status through image analysis for improved patient prognostication.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064501"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633667/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819735","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Erratum: Publisher's Note: Augmented and virtual reality imaging for collaborative planning of structural cardiovascular interventions: a proof-of-concept and validation study. 勘误:出版者注:增强和虚拟现实成像用于心血管结构干预的协作规划:概念验证和验证研究。
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-11-01 Epub Date: 2024-12-13 DOI: 10.1117/1.JMI.11.6.069801
Xander Jacquemyn, Kobe Bamps, Ruben Moermans, Christophe Dubois, Filip Rega, Peter Verbrugghe, Barbara Weyn, Steven Dymarkowski, Werner Budts, Alexander Van De Bruaene

[This corrects the article DOI: 10.1117/1.JMI.11.6.062606.].

Citations: 0
Pseudo-spectral angle mapping for pixel and cell classification in highly multiplexed immunofluorescence images. 高复用免疫荧光图像中像素和细胞分类的伪光谱角映射。
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2024-11-01 Epub Date: 2024-12-10 DOI: 10.1117/1.JMI.11.6.067502
Madeleine S Torcasso, Junting Ai, Gabriel Casella, Thao Cao, Anthony Chang, Ariel Halper-Stromberg, Bana Jabri, Marcus R Clark, Maryellen L Giger

Purpose: The rapid development of highly multiplexed microscopy has enabled the study of cells embedded within their native tissue. The rich spatial data provided by these techniques have yielded exciting insights into the spatial features of human disease. However, computational methods for analyzing these high-content images are still emerging; there is a need for more robust and generalizable tools for evaluating the cellular constituents and stroma captured by high-plex imaging. To address this need, we have adapted spectral angle mapping-an algorithm developed for hyperspectral image analysis-to compress the channel dimension of high-plex immunofluorescence (IF) images.

Approach: Here, we present pseudo-spectral angle mapping (pSAM), a robust and flexible method for determining the most likely class of each pixel in a high-plex image. The class maps calculated through pSAM yield pixel classifications which can be combined with instance segmentation algorithms to classify individual cells.
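The core of spectral angle mapping assigns each pixel to the class whose reference "pseudo-spectrum" makes the smallest angle with that pixel's vector of marker intensities. A minimal sketch, assuming each class is summarized by a reference vector across channels; names and shapes are illustrative, not the authors' code.

```python
import numpy as np

def psam_classify(pixels, refs):
    """Most likely class per pixel by smallest spectral angle.

    pixels: (N, C) per-pixel intensities across C channels (markers)
    refs:   (K, C) reference pseudo-spectrum for each of K classes
    Returns (N,) index of the best-matching class for each pixel.
    """
    # Normalize so the dot product gives the cosine of the spectral angle.
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    cos_angle = np.clip(p @ r.T, -1.0, 1.0)    # (N, K)
    angles = np.arccos(cos_angle)
    return angles.argmin(axis=1)
```

Because the angle depends only on the direction of the intensity vector, the classification is insensitive to overall brightness, which is part of what makes the approach robust across staining panels.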

Results: In a dataset of colon biopsies imaged with a 13-plex staining panel, 16 pSAM class maps were computed to generate pixel classifications. Instance segmentations of cells with Cellpose2.0 (F1-score of 0.83 ± 0.13) were combined with these class maps to provide cell class predictions for 13 cell classes. In addition, in a separate unseen dataset of kidney biopsies imaged with a 44-plex staining panel, pSAM plus Cellpose2.0 (F1-score of 0.86 ± 0.11) detected a diverse set of 38 classes of structural and immune cells.

Conclusions: In summary, pSAM is a powerful and generalizable tool for evaluating high-plex IF image data and classifying cells in these high-dimensional images.

{"title":"Pseudo-spectral angle mapping for pixel and cell classification in highly multiplexed immunofluorescence images.","authors":"Madeleine S Torcasso, Junting Ai, Gabriel Casella, Thao Cao, Anthony Chang, Ariel Halper-Stromberg, Bana Jabri, Marcus R Clark, Maryellen L Giger","doi":"10.1117/1.JMI.11.6.067502","DOIUrl":"10.1117/1.JMI.11.6.067502","url":null,"abstract":"<p><strong>Purpose: </strong>The rapid development of highly multiplexed microscopy has enabled the study of cells embedded within their native tissue. The rich spatial data provided by these techniques have yielded exciting insights into the spatial features of human disease. However, computational methods for analyzing these high-content images are still emerging; there is a need for more robust and generalizable tools for evaluating the cellular constituents and stroma captured by high-plex imaging. To address this need, we have adapted spectral angle mapping-an algorithm developed for hyperspectral image analysis-to compress the channel dimension of high-plex immunofluorescence (IF) images.</p><p><strong>Approach: </strong>Here, we present pseudo-spectral angle mapping (pSAM), a robust and flexible method for determining the most likely class of each pixel in a high-plex image. The class maps calculated through pSAM yield pixel classifications which can be combined with instance segmentation algorithms to classify individual cells.</p><p><strong>Results: </strong>In a dataset of colon biopsies imaged with a 13-plex staining panel, 16 pSAM class maps were computed to generate pixel classifications. Instance segmentations of cells with Cellpose2.0 ( <math><mrow><mi>F</mi> <mn>1</mn></mrow> </math> -score of <math><mrow><mn>0.83</mn> <mo>±</mo> <mn>0.13</mn></mrow> </math> ) were combined with these class maps to provide cell class predictions for 13 cell classes. 
In addition, in a separate unseen dataset of kidney biopsies imaged with a 44-plex staining panel, pSAM plus Cellpose2.0 ( <math><mrow><mi>F</mi> <mn>1</mn></mrow> </math> -score of <math><mrow><mn>0.86</mn> <mo>±</mo> <mn>0.11</mn></mrow> </math> ) detected a diverse set of 38 classes of structural and immune cells.</p><p><strong>Conclusions: </strong>In summary, pSAM is a powerful and generalizable tool for evaluating high-plex IF image data and classifying cells in these high-dimensional images.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"067502"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629784/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0