
Latest articles from the Journal of Medical Imaging

Self-supervised learning for chest computed tomography: training strategies and effect on downstream applications.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-11-01 Epub Date: 2024-11-09 DOI: 10.1117/1.JMI.11.6.064003
Amara Tariq, Gokul Ramasamy, Bhavik Patel, Imon Banerjee

Purpose: Self-supervised pre-training can reduce the amount of labeled training data needed by pre-learning fundamental visual characteristics of the medical imaging data. We investigate several self-supervised training strategies for chest computed tomography exams and their effects on downstream applications.

Approach: We benchmark five well-known self-supervision strategies (masked image region prediction, next slice prediction, rotation prediction, flip prediction, and denoising) on 15 M chest computed tomography (CT) slices collected from four sites of the Mayo Clinic enterprise, United States. These models were evaluated for two downstream tasks on public datasets: pulmonary embolism (PE) detection (classification) and lung nodule segmentation. Image embeddings generated by these models were also evaluated for prediction of patient age, race, and gender to study inherent biases in models' understanding of chest CT exams.
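
For readers who want a concrete picture of the strongest pretext task above, here is a minimal sketch of masked image region prediction in PyTorch. The patch size, masking scheme (one square region per slice), and the tiny encoder-decoder are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MaskedRegionPretrainer(nn.Module):
    """Pretext task: hide a random square region of each CT slice and train
    an encoder-decoder to reconstruct the hidden content from context."""

    def __init__(self, mask_size: int = 32):
        super().__init__()
        self.mask_size = mask_size
        # Illustrative backbone; the paper's architecture is not shown here.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 2, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        m = self.mask_size
        ys = torch.randint(0, h - m, (b,)).tolist()
        xs = torch.randint(0, w - m, (b,)).tolist()
        masked = x.clone()
        for i in range(b):
            masked[i, :, ys[i]:ys[i] + m, xs[i]:xs[i] + m] = 0.0
        recon = self.decoder(self.encoder(masked))
        # Score only the hidden region, so the network must infer anatomy
        # from the surrounding context.
        loss = x.new_zeros(())
        for i in range(b):
            region = (slice(None), slice(ys[i], ys[i] + m),
                      slice(xs[i], xs[i] + m))
            loss = loss + ((recon[i][region] - x[i][region]) ** 2).mean()
        return loss / b

# Usage: one pretraining step on a toy batch of 256x256 slices.
model = MaskedRegionPretrainer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = model(torch.randn(4, 1, 256, 256))
loss.backward()
opt.step()
```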

Results: The use of pre-training weights, especially masked region prediction-based weights, improved performance and reduced the computational effort needed for downstream tasks compared with task-specific state-of-the-art (SOTA) models. Performance improvement for PE detection was observed for training dataset sizes as large as ∼380 K, with a maximum gain of 5% over SOTA. The segmentation model initialized with pre-training weights learned twice as fast as the randomly initialized model. While gender and age predictors built using self-supervised training weights showed no performance improvement over randomly initialized predictors, the race predictor experienced a 10% performance boost when using self-supervised training weights.

Conclusion: We released self-supervised models and weights under an open-source academic license. These models can then be fine-tuned with limited task-specific annotated data for a variety of downstream imaging tasks, thus accelerating research in biomedical imaging informatics.

{"title":"Self-supervised learning for chest computed tomography: training strategies and effect on downstream applications.","authors":"Amara Tariq, Gokul Ramasamy, Bhavik Patel, Imon Banerjee","doi":"10.1117/1.JMI.11.6.064003","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064003","url":null,"abstract":"<p><strong>Purpose: </strong>Self-supervised pre-training can reduce the amount of labeled training data needed by pre-learning fundamental visual characteristics of the medical imaging data. We investigate several self-supervised training strategies for chest computed tomography exams and their effects on downstream applications.</p><p><strong>Approach: </strong>We benchmark five well-known self-supervision strategies (masked image region prediction, next slice prediction, rotation prediction, flip prediction, and denoising) on 15 M chest computed tomography (CT) slices collected from four sites of the Mayo Clinic enterprise, United States. These models were evaluated for two downstream tasks on public datasets: pulmonary embolism (PE) detection (classification) and lung nodule segmentation. Image embeddings generated by these models were also evaluated for prediction of patient age, race, and gender to study inherent biases in models' understanding of chest CT exams.</p><p><strong>Results: </strong>The use of pre-training weights especially masked region prediction-based weights, improved performance, and reduced computational effort needed for downstream tasks compared with task-specific state-of-the-art (SOTA) models. Performance improvement for PE detection was observed for training dataset sizes as large as <math><mrow><mo>∼</mo> <mn>380</mn> <mtext>  </mtext> <mi>K</mi></mrow> </math> with a maximum gain of 5% over SOTA. The segmentation model initialized with pre-training weights learned twice as fast as the randomly initialized model. While gender and age predictors built using self-supervised training weights showed no performance improvement over randomly initialized predictors, the race predictor experienced a 10% performance boost when using self-supervised training weights.</p><p><strong>Conclusion: </strong>We released self-supervised models and weights under an open-source academic license. These models can then be fine-tuned with limited task-specific annotated data for a variety of downstream imaging tasks, thus accelerating research in biomedical imaging informatics.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064003"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11550486/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Data-driven nucleus subclassification on colon hematoxylin and eosin using style-transferred digital pathology.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-11-01 Epub Date: 2024-11-05 DOI: 10.1117/1.JMI.11.6.067501
Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Nancy R Newlin, Adam M Saunders, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman

Purpose: Cells are building blocks for human physiology; consequently, understanding the way cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions in both health and disease. Hematoxylin and eosin (H&E) is the standard stain used in histological analysis of tissues in both clinical and research settings. Although H&E is ubiquitous and reveals tissue microanatomy, the classification and mapping of cell subtypes often require the use of specialized stains. The recent CoNIC Challenge focused on artificial intelligence classification of six types of cells on colon H&E but was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), and connective subtypes (fibroblasts). We propose to use inter-modality learning to label previously un-labelable cell types on H&E.

Approach: We took advantage of the cell classification information inherent in multiplexed immunofluorescence (MxIF) histology to create cell-level annotations for 14 subclasses. Then, we performed style transfer on the MxIF to synthesize realistic virtual H&E. We assessed the efficacy of a supervised learning scheme using the virtual H&E and 14 subclass labels. We evaluated our model on virtual H&E and real H&E.

Results: On virtual H&E, we were able to classify helper T cells and epithelial progenitors with positive predictive values of 0.34 ± 0.15 (prevalence 0.03 ± 0.01) and 0.47 ± 0.1 (prevalence 0.07 ± 0.02), respectively, when using ground truth centroid information. On real H&E, we needed to compute bounded metrics instead of direct metrics because our fine-grained virtual H&E predicted classes had to be matched to the closest available parent classes in the coarser labels from the real H&E dataset. For the real H&E, we could classify bounded metrics for the helper T cells and epithelial progenitors with upper bound positive predictive values of 0.43 ± 0.03 (parent class prevalence 0.21) and 0.94 ± 0.02 (parent class prevalence 0.49) when using ground truth centroid information.
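
The bounded-metric evaluation described above amounts to scoring each fine-grained prediction against the coarser parent label available on real H&E. A hedged sketch of that upper-bound computation follows; the class hierarchy and function names are hypothetical, not taken from the authors' code.

```python
# Illustrative fine-to-parent class hierarchy; the real dataset's label
# taxonomy may differ.
PARENT_OF = {
    "helper_T": "lymphocyte",
    "cytotoxic_T": "lymphocyte",
    "B_cell": "lymphocyte",
    "epithelial_progenitor": "epithelial",
    "goblet": "epithelial",
    "enteroendocrine": "epithelial",
    "fibroblast": "connective",
}

def upper_bound_ppv(pred_fine, true_parent, fine_class):
    """Upper-bound positive predictive value for one fine-grained class:
    a prediction counts as correct whenever the true coarse label matches
    the predicted class's parent, since finer-grained truth is unavailable."""
    tp = fp = 0
    for p, t in zip(pred_fine, true_parent):
        if p != fine_class:
            continue
        if PARENT_OF[p] == t:
            tp += 1  # possibly correct: not ruled out by the coarse label
        else:
            fp += 1  # definitely wrong: parent classes disagree
    return tp / (tp + fp) if (tp + fp) else float("nan")

# Usage with toy predictions on real-H&E nuclei.
preds = ["helper_T", "helper_T", "goblet", "helper_T"]
truth = ["lymphocyte", "connective", "epithelial", "lymphocyte"]
print(upper_bound_ppv(preds, truth, "helper_T"))  # 2/3
```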

Conclusions: This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E.

{"title":"Data-driven nucleus subclassification on colon hematoxylin and eosin using style-transferred digital pathology.","authors":"Lucas W Remedios, Shunxing Bao, Samuel W Remedios, Ho Hin Lee, Leon Y Cai, Thomas Li, Ruining Deng, Nancy R Newlin, Adam M Saunders, Can Cui, Jia Li, Qi Liu, Ken S Lau, Joseph T Roland, Mary K Washington, Lori A Coburn, Keith T Wilson, Yuankai Huo, Bennett A Landman","doi":"10.1117/1.JMI.11.6.067501","DOIUrl":"10.1117/1.JMI.11.6.067501","url":null,"abstract":"<p><strong>Purpose: </strong>Cells are building blocks for human physiology; consequently, understanding the way cells communicate, co-locate, and interrelate is essential to furthering our understanding of how the body functions in both health and disease. Hematoxylin and eosin (H&E) is the standard stain used in histological analysis of tissues in both clinical and research settings. Although H&E is ubiquitous and reveals tissue microanatomy, the classification and mapping of cell subtypes often require the use of specialized stains. The recent CoNIC Challenge focused on artificial intelligence classification of six types of cells on colon H&E but was unable to classify epithelial subtypes (progenitor, enteroendocrine, goblet), lymphocyte subtypes (B, helper T, cytotoxic T), and connective subtypes (fibroblasts). We propose to use inter-modality learning to label previously un-labelable cell types on H&E.</p><p><strong>Approach: </strong>We took advantage of the cell classification information inherent in multiplexed immunofluorescence (MxIF) histology to create cell-level annotations for 14 subclasses. Then, we performed style transfer on the MxIF to synthesize realistic virtual H&E. We assessed the efficacy of a supervised learning scheme using the virtual H&E and 14 subclass labels. We evaluated our model on virtual H&E and real H&E.</p><p><strong>Results: </strong>On virtual H&E, we were able to classify helper T cells and epithelial progenitors with positive predictive values of <math><mrow><mn>0.34</mn> <mo>±</mo> <mn>0.15</mn></mrow> </math> (prevalence <math><mrow><mn>0.03</mn> <mo>±</mo> <mn>0.01</mn></mrow> </math> ) and <math><mrow><mn>0.47</mn> <mo>±</mo> <mn>0.1</mn></mrow> </math> (prevalence <math><mrow><mn>0.07</mn> <mo>±</mo> <mn>0.02</mn></mrow> </math> ), respectively, when using ground truth centroid information. On real H&E, we needed to compute bounded metrics instead of direct metrics because our fine-grained virtual H&E predicted classes had to be matched to the closest available parent classes in the coarser labels from the real H&E dataset. 
For the real H&E, we could classify bounded metrics for the helper T cells and epithelial progenitors with upper bound positive predictive values of <math><mrow><mn>0.43</mn> <mo>±</mo> <mn>0.03</mn></mrow> </math> (parent class prevalence 0.21) and <math><mrow><mn>0.94</mn> <mo>±</mo> <mn>0.02</mn></mrow> </math> (parent class prevalence 0.49) when using ground truth centroid information.</p><p><strong>Conclusions: </strong>This is the first work to provide cell type classification for helper T and epithelial progenitor nuclei on H&E.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"067501"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11537205/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142591962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Vector field attention for deformable image registration.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-11-01 Epub Date: 2024-11-06 DOI: 10.1117/1.JMI.11.6.064001
Yihao Liu, Junyu Chen, Lianrui Zuo, Aaron Carass, Jerry L Prince

Purpose: Deformable image registration establishes non-linear spatial correspondences between fixed and moving images. Deep learning-based deformable registration methods have been widely studied in recent years due to their speed advantage over traditional algorithms as well as their better accuracy. Most existing deep learning-based methods require neural networks to encode location information in their feature maps and predict displacement or deformation fields through convolutional or fully connected layers from these high-dimensional feature maps. We present vector field attention (VFA), a novel framework that enhances the efficiency of the existing network design by enabling direct retrieval of location correspondences.

Approach: VFA uses neural networks to extract multi-resolution feature maps from the fixed and moving images and then retrieves pixel-level correspondences based on feature similarity. The retrieval is achieved with a novel attention module without the need for learnable parameters. VFA is trained end-to-end in either a supervised or unsupervised manner.
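
One way to picture the parameter-free retrieval step is as attention over a bank of candidate displacements: the similarity between a fixed-image feature vector and the moving-image features in a local search window weights the candidate offsets. The sketch below is a schematic reading of that idea under assumed feature shapes and window size, not the released VFA implementation.

```python
import torch
import torch.nn.functional as F

def vector_field_attention(feat_fixed, feat_moving, radius=2):
    """Parameter-free retrieval of a dense displacement field.

    feat_fixed, feat_moving: (B, C, H, W) feature maps of the two images.
    For every fixed-image location, compare its feature vector with the
    moving-image features inside a (2r+1)^2 search window; softmax the
    similarities and take the expectation over the candidate offsets.
    """
    b, c, h, w = feat_fixed.shape
    k = 2 * radius + 1
    # Unfold gathers, at each location, the window of moving features.
    moving_win = F.unfold(feat_moving, k, padding=radius)      # (B, C*k*k, H*W)
    moving_win = moving_win.view(b, c, k * k, h * w)
    fixed_vec = feat_fixed.view(b, c, 1, h * w)
    sim = (fixed_vec * moving_win).sum(dim=1)                  # (B, k*k, H*W)
    attn = sim.softmax(dim=1)
    # Candidate offsets covering the window, (-r..r) x (-r..r).
    dy, dx = torch.meshgrid(
        torch.arange(-radius, radius + 1.0),
        torch.arange(-radius, radius + 1.0),
        indexing="ij",
    )
    offsets = torch.stack([dy.flatten(), dx.flatten()], dim=0)  # (2, k*k)
    flow = torch.einsum("oc,bch->boh", offsets, attn)           # (B, 2, H*W)
    return flow.view(b, 2, h, w)

# Usage: toy features from fixed and moving images at one resolution.
flow = vector_field_attention(torch.randn(1, 16, 64, 64),
                              torch.randn(1, 16, 64, 64))
print(flow.shape)  # torch.Size([1, 2, 64, 64])
```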

Results: We evaluated VFA for intra- and inter-modality registration and unsupervised and semi-supervised registration using public datasets as well as the Learn2Reg challenge. VFA demonstrated comparable or superior registration accuracy compared with several state-of-the-art methods.

Conclusions: VFA offers a novel approach to deformable image registration by directly retrieving spatial correspondences from feature maps, leading to improved performance in registration tasks. It holds potential for broader applications.

{"title":"Vector field attention for deformable image registration.","authors":"Yihao Liu, Junyu Chen, Lianrui Zuo, Aaron Carass, Jerry L Prince","doi":"10.1117/1.JMI.11.6.064001","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064001","url":null,"abstract":"<p><strong>Purpose: </strong>Deformable image registration establishes non-linear spatial correspondences between fixed and moving images. Deep learning-based deformable registration methods have been widely studied in recent years due to their speed advantage over traditional algorithms as well as their better accuracy. Most existing deep learning-based methods require neural networks to encode location information in their feature maps and predict displacement or deformation fields through convolutional or fully connected layers from these high-dimensional feature maps. We present vector field attention (VFA), a novel framework that enhances the efficiency of the existing network design by enabling direct retrieval of location correspondences.</p><p><strong>Approach: </strong>VFA uses neural networks to extract multi-resolution feature maps from the fixed and moving images and then retrieves pixel-level correspondences based on feature similarity. The retrieval is achieved with a novel attention module without the need for learnable parameters. VFA is trained end-to-end in either a supervised or unsupervised manner.</p><p><strong>Results: </strong>We evaluated VFA for intra- and inter-modality registration and unsupervised and semi-supervised registration using public datasets as well as the Learn2Reg challenge. VFA demonstrated comparable or superior registration accuracy compared with several state-of-the-art methods.</p><p><strong>Conclusions: </strong>VFA offers a novel approach to deformable image registration by directly retrieving spatial correspondences from feature maps, leading to improved performance in registration tasks. It holds potential for broader applications.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064001"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540117/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142606811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Augmented reality for point-of-care ultrasound-guided vascular access in pediatric patients using Microsoft HoloLens 2: a preliminary evaluation.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-11-01 Epub Date: 2024-09-13 DOI: 10.1117/1.JMI.11.6.062604
Gesiren Zhang, Trong N Nguyen, Hadi Fooladi-Talari, Tyler Salvador, Kia Thomas, Daragh Crowley, R Scott Dingeman, Raj Shekhar

Significance: Conventional ultrasound-guided vascular access procedures are challenging due to the need for anatomical understanding, precise needle manipulation, and hand-eye coordination. Recently, augmented reality (AR)-based guidance has emerged as an aid to improve procedural efficiency and potential outcomes. However, its application in pediatric vascular access has not been comprehensively evaluated.

Aim: We developed an AR ultrasound application, HoloUS, using the Microsoft HoloLens 2 to display live ultrasound images directly in the proceduralist's field of view. We presented our evaluation of the effect of using the Microsoft HoloLens 2 for point-of-care ultrasound (POCUS)-guided vascular access in 30 pediatric patients.

Approach: A custom software module was developed on a tablet to capture the moving ultrasound image from any ultrasound machine's screen. The captured image was compressed and sent to the HoloLens 2 via a hotspot without needing Internet access. On the HoloLens 2, we developed a custom software module to receive, decompress, and display the live ultrasound image. Hand gesture and voice command features were implemented for the user to reposition, resize, and change the gain and the contrast of the image. We evaluated 30 (15 successful control and 12 successful interventional) cases completed in a single-center, prospective, randomized study.
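
The tablet-side pipeline (capture the ultrasound screen, compress each frame, push it over the hotspot) can be approximated with off-the-shelf tools. The sketch below uses OpenCV JPEG encoding over a plain TCP socket; the capture source, host, port, and framing protocol are assumptions, as the actual HoloUS transport is not specified in the abstract.

```python
import socket
import struct

import cv2

def stream_frames(host: str, port: int, n_frames: int = 100) -> None:
    """Capture frames, JPEG-compress them, and send length-prefixed packets
    over TCP: a rough stand-in for the tablet-to-HoloLens hotspot link."""
    sock = socket.create_connection((host, port))
    cap = cv2.VideoCapture(0)  # assumed device grabbing the ultrasound screen
    try:
        for _ in range(n_frames):
            ok, frame = cap.read()
            if not ok:
                break
            # Quality 70 trades fidelity for latency on the wireless link.
            ok, buf = cv2.imencode(".jpg", frame,
                                   [cv2.IMWRITE_JPEG_QUALITY, 70])
            if not ok:
                continue
            payload = buf.tobytes()
            # A 4-byte big-endian length header lets the receiver split the
            # byte stream back into individual JPEG frames.
            sock.sendall(struct.pack(">I", len(payload)) + payload)
    finally:
        cap.release()
        sock.close()
```

A receiver on the headset side would read the 4-byte length prefix, then that many bytes, to recover and decode each JPEG frame.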

Results: The mean overall rendering latency and the rendering frame rate of the HoloUS application were 139.30 ms (σ = 32.02 ms) and 30 frames per second, respectively. The average procedure completion time was 17.3% shorter using AR guidance. The numbers of puncture attempts and needle redirections were similar between the two groups, and the number of head adjustments was minimal in the interventional group.

Conclusion: We presented our evaluation of the results from the first study using the Microsoft HoloLens 2 that investigates AR-based POCUS-guided vascular access in pediatric patients. Our evaluation confirmed clinical feasibility and potential improvement in procedural efficiency.

{"title":"Augmented reality for point-of-care ultrasound-guided vascular access in pediatric patients using Microsoft HoloLens 2: a preliminary evaluation.","authors":"Gesiren Zhang, Trong N Nguyen, Hadi Fooladi-Talari, Tyler Salvador, Kia Thomas, Daragh Crowley, R Scott Dingeman, Raj Shekhar","doi":"10.1117/1.JMI.11.6.062604","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.062604","url":null,"abstract":"<p><strong>Significance: </strong>Conventional ultrasound-guided vascular access procedures are challenging due to the need for anatomical understanding, precise needle manipulation, and hand-eye coordination. Recently, augmented reality (AR)-based guidance has emerged as an aid to improve procedural efficiency and potential outcomes. However, its application in pediatric vascular access has not been comprehensively evaluated.</p><p><strong>Aim: </strong>We developed an AR ultrasound application, HoloUS, using the Microsoft HoloLens 2 to display live ultrasound images directly in the proceduralist's field of view. We presented our evaluation of the effect of using the Microsoft HoloLens 2 for point-of-care ultrasound (POCUS)-guided vascular access in 30 pediatric patients.</p><p><strong>Approach: </strong>A custom software module was developed on a tablet capable of capturing the moving ultrasound image from any ultrasound machine's screen. The captured image was compressed and sent to the HoloLens 2 via a hotspot without needing Internet access. On the HoloLens 2, we developed a custom software module to receive, decompress, and display the live ultrasound image. Hand gesture and voice command features were implemented for the user to reposition, resize, and change the gain and the contrast of the image. We evaluated 30 (15 successful control and 12 successful interventional) cases completed in a single-center, prospective, randomized study.</p><p><strong>Results: </strong>The mean overall rendering latency and the rendering frame rate of the HoloUS application were 139.30 ms <math><mrow><mo>(</mo> <mi>σ</mi> <mo>=</mo> <mn>32.02</mn> <mtext>  </mtext> <mi>ms</mi> <mo>)</mo></mrow> </math> and 30 frames per second, respectively. The average procedure completion time was 17.3% shorter using AR guidance. The numbers of puncture attempts and needle redirections were similar between the two groups, and the number of head adjustments was minimal in the interventional group.</p><p><strong>Conclusion: </strong>We presented our evaluation of the results from the first study using the Microsoft HoloLens 2 that investigates AR-based POCUS-guided vascular access in pediatric patients. Our evaluation confirmed clinical feasibility and potential improvement in procedural efficiency.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"062604"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11393663/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Super-resolution multi-contrast unbiased eye atlases with deep probabilistic refinement.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-11-01 Epub Date: 2024-11-14 DOI: 10.1117/1.JMI.11.6.064004
Ho Hin Lee, Adam M Saunders, Michael E Kim, Samuel W Remedios, Lucas W Remedios, Yucheng Tang, Qi Yang, Xin Yu, Shunxing Bao, Chloe Cho, Louise A Mawn, Tonia S Rex, Kevin L Schey, Blake E Dewey, Jeffrey M Spraggins, Jerry L Prince, Yuankai Huo, Bennett A Landman

Purpose: Eye morphology varies significantly across the population, especially for the orbit and optic nerve. These variations limit the feasibility and robustness of generalizing population-wise features of eye organs to an unbiased spatial reference.

Approach: To tackle these limitations, we propose a process for creating high-resolution unbiased eye atlases. First, to restore spatial details from scans with a low through-plane resolution compared with a high in-plane resolution, we apply a deep learning-based super-resolution algorithm. Then, we generate an initial unbiased reference with an iterative metric-based registration using a small portion of subject scans. We register the remaining scans to this template and refine the template using an unsupervised deep probabilistic approach that generates a more expansive deformation field to enhance the organ boundary alignment. We demonstrate this framework using magnetic resonance images across four different tissue contrasts, generating four atlases in separate spatial alignments.
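
The iterative metric-based template stage follows the usual unbiased-atlas recipe: register every scan to the current template, average the warped scans, and repeat. The NumPy sketch below shows that loop in schematic form; `register` is a placeholder for whatever rigid/affine/deformable tool is actually used and is an assumption, not the authors' pipeline.

```python
import numpy as np

def register(moving: np.ndarray, fixed: np.ndarray) -> np.ndarray:
    """Placeholder for a real registration call (e.g., ANTs or a learned
    model) that would return the moving image warped to the fixed image.
    Here it simply returns the moving image unchanged."""
    return moving

def build_unbiased_template(scans, n_iters: int = 5) -> np.ndarray:
    """Iterative unbiased template: start from the voxelwise mean, then
    alternate (register all scans to the template) / (re-average)."""
    template = np.mean(scans, axis=0)
    for _ in range(n_iters):
        warped = [register(s, template) for s in scans]
        template = np.mean(warped, axis=0)
    return template

# Usage with toy volumes standing in for a subset of subject scans.
scans = [np.random.rand(32, 32, 32) for _ in range(8)]
atlas = build_unbiased_template(scans)
print(atlas.shape)  # (32, 32, 32)
```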

Results: When refining the template with sufficient subjects, we find a statistically significant improvement (Wilcoxon signed-rank test) in the average Dice score across four labeled regions compared with a standard registration framework consisting of rigid, affine, and deformable transformations. These results highlight the effective alignment of eye organs and boundaries using our proposed process.

Conclusions: By combining super-resolution preprocessing and deep probabilistic models, we address the challenge of generating an eye atlas to serve as a standardized reference across a largely variable population.

{"title":"Super-resolution multi-contrast unbiased eye atlases with deep probabilistic refinement.","authors":"Ho Hin Lee, Adam M Saunders, Michael E Kim, Samuel W Remedios, Lucas W Remedios, Yucheng Tang, Qi Yang, Xin Yu, Shunxing Bao, Chloe Cho, Louise A Mawn, Tonia S Rex, Kevin L Schey, Blake E Dewey, Jeffrey M Spraggins, Jerry L Prince, Yuankai Huo, Bennett A Landman","doi":"10.1117/1.JMI.11.6.064004","DOIUrl":"10.1117/1.JMI.11.6.064004","url":null,"abstract":"<p><strong>Purpose: </strong>Eye morphology varies significantly across the population, especially for the orbit and optic nerve. These variations limit the feasibility and robustness of generalizing population-wise features of eye organs to an unbiased spatial reference.</p><p><strong>Approach: </strong>To tackle these limitations, we propose a process for creating high-resolution unbiased eye atlases. First, to restore spatial details from scans with a low through-plane resolution compared with a high in-plane resolution, we apply a deep learning-based super-resolution algorithm. Then, we generate an initial unbiased reference with an iterative metric-based registration using a small portion of subject scans. We register the remaining scans to this template and refine the template using an unsupervised deep probabilistic approach that generates a more expansive deformation field to enhance the organ boundary alignment. We demonstrate this framework using magnetic resonance images across four different tissue contrasts, generating four atlases in separate spatial alignments.</p><p><strong>Results: </strong>When refining the template with sufficient subjects, we find a significant improvement using the Wilcoxon signed-rank test in the average Dice score across four labeled regions compared with a standard registration framework consisting of rigid, affine, and deformable transformations. These results highlight the effective alignment of eye organs and boundaries using our proposed process.</p><p><strong>Conclusions: </strong>By combining super-resolution preprocessing and deep probabilistic models, we address the challenge of generating an eye atlas to serve as a standardized reference across a largely variable population.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064004"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11561295/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142649317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Augmented and virtual reality imaging for collaborative planning of structural cardiovascular interventions: a proof-of-concept and validation study.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-11-01 Epub Date: 2024-10-08 DOI: 10.1117/1.JMI.11.6.062606
Jacquemyn Xander, Bamps Kobe, Moermans Ruben, Dubois Christophe, Rega Filip, Verbrugghe Peter, Weyn Barbara, Dymarkowski Steven, Budts Werner, Van De Bruaene Alexander

Purpose: Virtual reality (VR) and augmented reality (AR) have led to significant advancements in cardiac preoperative planning. A noticeable gap exists in the availability of a comprehensive multi-user, multi-device mixed reality application that can be used in a multidisciplinary team meeting.

Approach: A multi-user, multi-device mixed reality application was developed, supporting AR and VR implementations. Technical validation involved a standardized testing protocol and comparison of AR and VR measurements regarding absolute error and time. Preclinical validation engaged experts in interventional cardiology, evaluating the clinical applicability prior to clinical validation. Clinical validation included patient-specific measurements for five patients in VR compared with standard computed tomography (CT) for preoperative planning. Questionnaires were used at all stages for subjective evaluation.

Results: Technical validation, including 106 size measurements, demonstrated an absolute median error of 0.69 mm (0.25 to 1.18 mm) compared with ground truth. The time to complete the entire task was 892 ± 407 s on average, with VR measurements being faster than AR (804 ± 483 versus 957 ± 257 s, P = 0.045). On clinical validation of five preoperative patients, there was no statistically significant difference between paired CT and VR measurements (0.58 [95% CI, −1.58 to 2.74], P = 0.586). Questionnaires showcased unanimous agreement on the user-friendly nature, effectiveness, and clinical value.
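
For reference, the paired CT-versus-VR comparison reported above can be reproduced with standard paired tests. The SciPy sketch below runs on toy measurements; the numbers are placeholders, not study data, and the specific test the authors used is not stated here.

```python
import numpy as np
from scipy import stats

# Toy paired measurements in mm; placeholders, not the study's data.
ct = np.array([24.1, 30.5, 18.2, 27.9, 22.4])
vr = np.array([24.6, 31.2, 18.0, 28.8, 23.1])

diff = vr - ct
print("median absolute error (mm):", np.median(np.abs(diff)))

# Paired t-test on CT vs. VR; stats.wilcoxon(vr, ct) is the
# non-parametric alternative for small samples.
t_stat, p_value = stats.ttest_rel(vr, ct)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```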

Conclusions: The mixed reality application, validated through technical, preclinical, and clinical assessments, demonstrates precision and user-friendliness. Further research of our application is needed to validate the generalizability and impact on patient outcomes.

{"title":"Augmented and virtual reality imaging for collaborative planning of structural cardiovascular interventions: a proof-of-concept and validation study.","authors":"Jacquemyn Xander, Bamps Kobe, Moermans Ruben, Dubois Christophe, Rega Filip, Verbrugghe Peter, Weyn Barbara, Dymarkowski Steven, Budts Werner, Van De Bruaene Alexander","doi":"10.1117/1.JMI.11.6.062606","DOIUrl":"10.1117/1.JMI.11.6.062606","url":null,"abstract":"<p><strong>Purpose: </strong>Virtual reality (VR) and augmented reality (AR) have led to significant advancements in cardiac preoperative planning, shaping the world in profound ways. A noticeable gap exists in the availability of a comprehensive multi-user, multi-device mixed reality application that can be used in a multidisciplinary team meeting.</p><p><strong>Approach: </strong>A multi-user, multi-device mixed reality application was developed, supporting AR and VR implementations. Technical validation involved a standardized testing protocol and comparison of AR and VR measurements regarding absolute error and time. Preclinical validation engaged experts in interventional cardiology, evaluating the clinical applicability prior to clinical validation. Clinical validation included patient-specific measurements for five patients in VR compared with standard computed tomography (CT) for preoperative planning. Questionnaires were used at all stages for subjective evaluation.</p><p><strong>Results: </strong>Technical validation, including 106 size measurements, demonstrated an absolute median error of 0.69 mm (0.25 to 1.18 mm) compared with ground truth. The time to complete the entire task was <math><mrow><mn>892</mn> <mo>±</mo> <mn>407</mn> <mtext>  </mtext> <mi>s</mi></mrow> </math> on average, with VR measurements being faster than AR ( <math><mrow><mn>804</mn> <mo>±</mo> <mn>483</mn></mrow> </math> versus <math><mrow><mn>957</mn> <mo>±</mo> <mn>257</mn> <mtext>  </mtext> <mi>s</mi></mrow> </math> , <math><mrow><mi>P</mi> <mo>=</mo> <mn>0.045</mn></mrow> </math> ). On clinical validation of five preoperative patients, there was no statistically significant difference between paired CT and VR measurements (0.58 [95% CI, <math><mrow><mo>-</mo> <mn>1.58</mn></mrow> </math> to 2.74], <math><mrow><mi>P</mi> <mo>=</mo> <mn>0.586</mn></mrow> </math> ). Questionnaires showcased unanimous agreement on the user-friendly nature, effectiveness, and clinical value.</p><p><strong>Conclusions: </strong>The mixed reality application, validated through technical, preclinical, and clinical assessments, demonstrates precision and user-friendliness. Further research of our application is needed to validate the generalizability and impact on patient outcomes.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"062606"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11460359/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Centerline-guided reinforcement learning model for pancreatic duct identifications.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-11-01 Epub Date: 2024-11-08 DOI: 10.1117/1.JMI.11.6.064002
Sepideh Amiri, Reza Karimzadeh, Tomaž Vrtovec, Erik Gudmann Steuble Brandt, Henrik S Thomsen, Michael Brun Andersen, Christoph Felix Müller, Anders Bertil Rodell, Bulat Ibragimov

Purpose: Pancreatic ductal adenocarcinoma is forecast to become the second leading cause of cancer mortality as the number of patients with cancer in the main duct of the pancreas grows, and measurement of the pancreatic duct diameter from medical images has been identified as relevant for its early diagnosis.

Approach: We propose an automated pancreatic duct centerline tracing method from computed tomography (CT) images that is based on deep reinforcement learning, which employs an artificial agent to interact with the environment and calculates rewards by combining the distances from the target and the centerline. A deep neural network is implemented to forecast step-wise values for each potential action. With the help of this mechanism, the agent can probe along the pancreatic duct centerline using the best possible navigational path. To enhance the tracing accuracy, we employ landmark-based registration, which enables the generation of a probability map of the pancreatic duct. Subsequently, we utilize a gradient-based method on the registered data to extract a probability map specifically indicating the centerline of the pancreatic duct.
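
The reward signal couples two distances: progress toward the duct's target endpoint and proximity to the centerline. A minimal sketch of one such reward computation is below; the weights and geometry are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def step_reward(pos, new_pos, target, centerline,
                w_target=1.0, w_center=0.5):
    """Reward for moving from pos to new_pos: positive when the agent gets
    closer to the duct's target endpoint, penalized by how far new_pos
    strays from the nearest centerline point."""
    progress = np.linalg.norm(target - pos) - np.linalg.norm(target - new_pos)
    off_center = np.min(np.linalg.norm(centerline - new_pos, axis=1))
    return w_target * progress - w_center * off_center

# Usage: a straight toy centerline along the x-axis.
centerline = np.linspace([0.0, 0.0, 0.0], [10.0, 0.0, 0.0], 50)  # (50, 3)
pos = np.array([2.0, 1.0, 0.0])
new_pos = np.array([3.0, 0.5, 0.0])
target = centerline[-1]
print(step_reward(pos, new_pos, target, centerline))  # > 0: a good move
```

In a full deep Q-learning setup, a network would predict this step-wise value for each candidate action, and the agent would follow the highest-valued move along the duct.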

Results: Three datasets with a total of 115 CT images were used to evaluate the proposed method. Using image hold-out from the first two datasets, the method performance was 2.0, 4.0, and 2.1 mm measured in terms of the mean detection error, Hausdorff distance (HD), and root mean squared error (RMSE), respectively. Using the first two datasets for training and the third one for testing, the method accuracy was 2.2, 4.9, and 2.6 mm measured in terms of the mean detection error, HD, and RMSE, respectively.

Conclusions: We present an algorithm for automated pancreatic duct centerline tracing using deep reinforcement learning. We observe that validation on an external dataset confirms the potential for practical utilization of the presented method.

{"title":"Centerline-guided reinforcement learning model for pancreatic duct identifications.","authors":"Sepideh Amiri, Reza Karimzadeh, Tomaž Vrtovec, Erik Gudmann Steuble Brandt, Henrik S Thomsen, Michael Brun Andersen, Christoph Felix Müller, Anders Bertil Rodell, Bulat Ibragimov","doi":"10.1117/1.JMI.11.6.064002","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.064002","url":null,"abstract":"<p><strong>Purpose: </strong>Pancreatic ductal adenocarcinoma is forecast to become the second most significant cause of cancer mortality as the number of patients with cancer in the main duct of the pancreas grows, and measurement of the pancreatic duct diameter from medical images has been identified as relevant for its early diagnosis.</p><p><strong>Approach: </strong>We propose an automated pancreatic duct centerline tracing method from computed tomography (CT) images that is based on deep reinforcement learning, which employs an artificial agent to interact with the environment and calculates rewards by combining the distances from the target and the centerline. A deep neural network is implemented to forecast step-wise values for each potential action. With the help of this mechanism, the agent can probe along the pancreatic duct centerline using the best possible navigational path. To enhance the tracing accuracy, we employ landmark-based registration, which enables the generation of a probability map of the pancreatic duct. Subsequently, we utilize a gradient-based method on the registered data to extract a probability map specifically indicating the centerline of the pancreatic duct.</p><p><strong>Results: </strong>Three datasets with a total of 115 CT images were used to evaluate the proposed method. Using image hold-out from the first two datasets, the method performance was 2.0, 4.0, and 2.1 mm measured in terms of the mean detection error, Hausdorff distance (HD), and root mean squared error (RMSE), respectively. Using the first two datasets for training and the third one for testing, the method accuracy was 2.2, 4.9, and 2.6 mm measured in terms of the mean detection error, HD, and RMSE, respectively.</p><p><strong>Conclusions: </strong>We present an algorithm for automated pancreatic duct centerline tracing using deep reinforcement learning. We observe that validation on an external dataset confirms the potential for practical utilization of the presented method.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"064002"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543826/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluation of monocular and binocular contrast perception on virtual reality head-mounted displays.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-11-01 Epub Date: 2024-09-14 DOI: 10.1117/1.JMI.11.6.062605
Khushi Bhansali, Miguel A Lago, Ryan Beams, Chumin Zhao

Purpose: Visualization of medical images on a virtual reality (VR) head-mounted display (HMD) requires binocular fusion of a stereoscopic pair of graphical views. However, current image quality assessment on VR HMDs for medical applications has been primarily limited to time-consuming monocular optical bench measurement on a single eyepiece.

Approach: As an alternative to optical bench measurement to quantify the image quality on VR HMDs, we developed a WebXR test platform to perform contrast perceptual experiments that can be used for binocular image quality assessment. We obtained monocular and binocular contrast sensitivity responses (CSRs) from participants on a Meta Quest 2 VR HMD using varied interpupillary distance (IPD) configurations.
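
Contrast sensitivity responses of this kind are typically estimated with an adaptive staircase in which contrast falls after correct responses and rises after errors, converging near the detection threshold. The sketch below is a generic two-down/one-up routine, not the WebXR platform's code; the step factor and reversal count are assumptions.

```python
import random

def staircase_threshold(respond, start=0.5, step=1.25, n_reversals=8):
    """Two-down/one-up adaptive staircase.

    respond(contrast) -> True if the observer reports seeing the target.
    Returns (threshold, sensitivity): the mean contrast over the final
    reversals (an estimate of the ~70.7%-correct point) and its reciprocal.
    """
    contrast, correct_run, last_dir, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(contrast):
            correct_run += 1
            if correct_run < 2:
                continue
            correct_run, direction = 0, -1   # two correct -> lower contrast
            contrast /= step
        else:
            correct_run, direction = 0, +1   # one wrong -> raise contrast
            contrast *= step
        if last_dir and direction != last_dir:
            reversals.append(contrast)       # staircase changed direction
        last_dir = direction
    tail = reversals[n_reversals // 2:]
    threshold = sum(tail) / len(tail)
    return threshold, 1.0 / threshold

# Usage: simulated observer whose detectability rises with contrast.
obs = lambda c: random.random() < 0.05 + 0.9 * min(1.0, c / 0.05)
threshold, sensitivity = staircase_threshold(obs)
print(f"threshold {threshold:.3f}, sensitivity {sensitivity:.1f}")
```

Running the staircase at several spatial frequencies and gaze positions yields the kind of CSR curves compared monocularly and binocularly above.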

Results: The perceptual result shows that contrast perception on VR HMDs is primarily affected by optical aberration of the VR HMD. As a result, monocular CSR degrades at spatial frequencies greater than 4 cycles per degree when gazing at the periphery of the display field of view, especially for mismatched IPD settings, consistent with optical bench measurements. On the contrary, binocular contrast perception is dominated by the monocular view with superior image quality as measured by contrast.

Conclusions: We developed a test platform to investigate monocular and binocular contrast perception by performing perceptual experiments. The test method can be used to evaluate monocular and/or binocular image quality on VR HMDs for potential medical applications without extensive optical bench measurements.

{"title":"Evaluation of monocular and binocular contrast perception on virtual reality head-mounted displays.","authors":"Khushi Bhansali, Miguel A Lago, Ryan Beams, Chumin Zhao","doi":"10.1117/1.JMI.11.6.062605","DOIUrl":"https://doi.org/10.1117/1.JMI.11.6.062605","url":null,"abstract":"<p><strong>Purpose: </strong>Visualization of medical images on a virtual reality (VR) head-mounted display (HMD) requires binocular fusion of a stereoscopic pair of graphical views. However, current image quality assessment on VR HMDs for medical applications has been primarily limited to time-consuming monocular optical bench measurement on a single eyepiece.</p><p><strong>Approach: </strong>As an alternative to optical bench measurement to quantify the image quality on VR HMDs, we developed a WebXR test platform to perform contrast perceptual experiments that can be used for binocular image quality assessment. We obtained monocular and binocular contrast sensitivity responses (CSRs) from participants on a Meta Quest 2 VR HMD using varied interpupillary distance (IPD) configurations.</p><p><strong>Results: </strong>The perceptual result shows that contrast perception on VR HMDs is primarily affected by optical aberration of the VR HMD. As a result, monocular CSR degrades at a high spatial frequency greater than 4 cycles per degree when gazing at the periphery of the display field of view, especially for mismatched IPD settings consistent with optical bench measurements. On the contrary, binocular contrast perception is dominated by the monocular view with superior image quality measured by the contrast.</p><p><strong>Conclusions: </strong>We developed a test platform to investigate monocular and binocular contrast perception by performing perceptual experiments. The test method can be used to evaluate monocular and/or binocular image quality on VR HMDs for potential medical applications without extensive optical bench measurements.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 6","pages":"062605"},"PeriodicalIF":1.9,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11401613/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142298701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Characterization of arteriosclerosis based on computer-aided measurements of intra-arterial thickness.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-09-01 Epub Date: 2024-10-10 DOI: 10.1117/1.JMI.11.5.057501
Jin Zhou, Xiang Li, Dawit Demeke, Timothy A Dinh, Yingbao Yang, Andrew R Janowczyk, Jarcy Zee, Lawrence Holzman, Laura Mariani, Krishnendu Chakrabarty, Laura Barisoni, Jeffrey B Hodgin, Kyle J Lafata

Purpose: Our purpose is to develop a computer vision approach to quantify intra-arterial thickness on digital pathology images of kidney biopsies as a computational biomarker of arteriosclerosis.

Approach: The severity of the arteriosclerosis was scored (0 to 3) in 753 arteries from 33 trichrome-stained whole slide images (WSIs) of kidney biopsies, and the outer contours of the media, intima, and lumen were manually delineated by a renal pathologist. We then developed a multi-class deep learning (DL) framework for segmenting the different intra-arterial compartments (training dataset: 648 arteries from 24 WSIs; testing dataset: 105 arteries from 9 WSIs). Subsequently, we employed radial sampling and made measurements of media and intima thickness as a function of spatially encoded polar coordinates throughout the artery. Pathomic features were extracted from the measurements to collectively describe the arterial wall characteristics. The technique was first validated through numerical analysis of simulated arteries, with systematic deformations applied to study their effect on arterial thickness measurements. We then compared these computationally derived measurements with the pathologists' grading of arteriosclerosis.
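
The radial-sampling step can be made concrete as follows: cast rays from the lumen centroid and record where each ray leaves the lumen, intima, and media masks, giving per-angle wall thicknesses. The sketch below assumes nested binary masks for the three compartments; the mask names and toy geometry are illustrative, not the authors' pipeline.

```python
import numpy as np

def radial_thickness(lumen, intima, media, n_rays=360):
    """Per-angle intima and media thickness from nested binary masks.

    lumen, intima, media: 2D boolean masks, each containing the previous
    one (media contains intima contains lumen). For each polar angle, walk
    outward from the lumen centroid and record the outermost radius found
    inside each mask; differences of those radii give wall thicknesses.
    """
    ys, xs = np.nonzero(lumen)
    cy, cx = ys.mean(), xs.mean()                  # lumen centroid
    h, w = lumen.shape
    max_r = int(np.hypot(h, w))
    intima_th, media_th = [], []
    for theta in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        r_lum = r_int = r_med = 0.0
        for r in range(1, max_r):
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < h and 0 <= x < w):
                break
            if lumen[y, x]:
                r_lum = r
            if intima[y, x]:
                r_int = r
            if media[y, x]:
                r_med = r
        intima_th.append(r_int - r_lum)            # intimal wall on this ray
        media_th.append(r_med - r_int)             # medial wall on this ray
    return np.array(intima_th), np.array(media_th)

# Usage: concentric disks as a toy artery (radii 10/20/30 pixels); the
# intima-media ratio is one pathomic feature derivable from these profiles.
yy, xx = np.mgrid[:80, :80]
r2 = (yy - 40) ** 2 + (xx - 40) ** 2
lumen, intima, media = r2 < 10**2, r2 < 20**2, r2 < 30**2
i_th, m_th = radial_thickness(lumen, intima, media)
print(i_th.mean(), m_th.mean(), i_th.mean() / m_th.mean())
```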

Results: Numerical validation shows that our measurement technique adeptly captured the decreasing smoothness in the intima and media thickness as the deformation increases in the simulated arteries. Intra-arterial DL segmentations of media, intima, and lumen achieved Dice scores of 0.84, 0.78, and 0.86, respectively. Several significant associations were identified between arteriosclerosis grade and pathomic features using our technique (e.g., intima-media ratio average [τ = 0.52, p < 0.0001]) through Kendall's tau analysis.

Conclusions: We developed a computer vision approach to computationally characterize intra-arterial morphology on digital pathology images and demonstrate its feasibility as a potential computational biomarker of arteriosclerosis.

{"title":"Characterization of arteriosclerosis based on computer-aided measurements of intra-arterial thickness.","authors":"Jin Zhou, Xiang Li, Dawit Demeke, Timothy A Dinh, Yingbao Yang, Andrew R Janowczyk, Jarcy Zee, Lawrence Holzman, Laura Mariani, Krishnendu Chakrabarty, Laura Barisoni, Jeffrey B Hodgin, Kyle J Lafata","doi":"10.1117/1.JMI.11.5.057501","DOIUrl":"https://doi.org/10.1117/1.JMI.11.5.057501","url":null,"abstract":"<p><strong>Purpose: </strong>Our purpose is to develop a computer vision approach to quantify intra-arterial thickness on digital pathology images of kidney biopsies as a computational biomarker of arteriosclerosis.</p><p><strong>Approach: </strong>The severity of the arteriosclerosis was scored (0 to 3) in 753 arteries from 33 trichrome-stained whole slide images (WSIs) of kidney biopsies, and the outer contours of the media, intima, and lumen were manually delineated by a renal pathologist. We then developed a multi-class deep learning (DL) framework for segmenting the different intra-arterial compartments (training dataset: 648 arteries from 24 WSIs; testing dataset: 105 arteries from 9 WSIs). Subsequently, we employed radial sampling and made measurements of media and intima thickness as a function of spatially encoded polar coordinates throughout the artery. Pathomic features were extracted from the measurements to collectively describe the arterial wall characteristics. The technique was first validated through numerical analysis of simulated arteries, with systematic deformations applied to study their effect on arterial thickness measurements. We then compared these computationally derived measurements with the pathologists' grading of arteriosclerosis.</p><p><strong>Results: </strong>Numerical validation shows that our measurement technique adeptly captured the decreasing smoothness in the intima and media thickness as the deformation increases in the simulated arteries. Intra-arterial DL segmentations of media, intima, and lumen achieved Dice scores of 0.84, 0.78, and 0.86, respectively. Several significant associations were identified between arteriosclerosis grade and pathomic features using our technique (e.g., intima-media ratio average [ <math><mrow><mi>τ</mi> <mo>=</mo> <mn>0.52</mn></mrow> </math> , <math><mrow><mi>p</mi> <mo><</mo> <mn>0.0001</mn></mrow> </math> ]) through Kendall's tau analysis.</p><p><strong>Conclusions: </strong>We developed a computer vision approach to computationally characterize intra-arterial morphology on digital pathology images and demonstrate its feasibility as a potential computational biomarker of arteriosclerosis.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"057501"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466048/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142477764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Investigating the use of signal detection information in supervised learning-based image denoising with consideration of task-shift.
IF 1.9 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date: 2024-09-01 Epub Date: 2024-09-05 DOI: 10.1117/1.JMI.11.5.055501
Kaiyan Li, Hua Li, Mark A Anastasio

Purpose: Recently, learning-based denoising methods that incorporate task-relevant information into the training procedure have been developed to enhance the utility of the denoised images. However, this line of research is relatively new and underdeveloped, and some fundamental issues remain unexplored. Our purpose is to yield insights into general issues related to these task-informed methods. This includes understanding the impact of denoising on objective measures of image quality (IQ) when the specified task at inference time is different from that employed for model training, a phenomenon we refer to as "task-shift."

Approach: A virtual imaging test bed comprising a stylized computational model of a chest X-ray computed tomography imaging system was employed to enable a controlled and tractable study design. A canonical, fully supervised, convolutional neural network-based denoising method was purposely adopted to understand the underlying issues that may be relevant to a variety of applications and more advanced denoising or image reconstruction methods. Signal detection and signal detection-localization tasks under signal-known-statistically with background-known-statistically conditions were considered, and several distinct types of numerical observers were employed to compute estimates of the task performance. Studies were designed to reveal how a task-informed transfer-learning approach can influence the tradeoff between conventional and task-based measures of image quality within the context of the considered tasks. In addition, the impact of task-shift on these image quality measures was assessed.
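
The heart of a task-informed training procedure is a loss that mixes a conventional reconstruction term with a task term produced by a frozen numerical observer. The PyTorch sketch below shows one such composition; the observer architecture, weighting, and binary detection task are stand-ins for the paper's specific setup.

```python
import torch
import torch.nn as nn

class TaskInformedLoss(nn.Module):
    """loss = MSE(denoised, clean) + lam * task_loss(observer(denoised), y).

    The observer is a pre-trained, frozen signal-detection network; its
    gradient steers the denoiser toward preserving task-relevant features
    instead of optimizing pixelwise fidelity alone.
    """

    def __init__(self, observer: nn.Module, lam: float = 0.1):
        super().__init__()
        self.observer = observer.eval()
        for p in self.observer.parameters():
            p.requires_grad_(False)            # the observer stays fixed
        self.lam = lam
        self.mse = nn.MSELoss()
        self.bce = nn.BCEWithLogitsLoss()      # signal present / absent

    def forward(self, denoised, clean, signal_label):
        recon = self.mse(denoised, clean)
        task = self.bce(self.observer(denoised), signal_label)
        return recon + self.lam * task

# Usage with toy tensors and a simple linear observer on 64x64 images.
observer = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1))
criterion = TaskInformedLoss(observer, lam=0.1)
denoised = torch.randn(8, 1, 64, 64, requires_grad=True)
clean = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()
loss = criterion(denoised, clean, labels)
loss.backward()
```

Task-shift then corresponds to evaluating the trained denoiser with an observer for a different task (e.g., detection-localization) than the one used in the loss.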

Results: The results indicated that certain tradeoffs can be achieved such that the resulting AUC value was significantly improved and the degradation of physical IQ measures was statistically insignificant. It was also observed that introducing task-shift degrades the task performance as expected. The degradation was significant when a relatively simple task was considered for network training and observer performance on a more complex one was assessed at inference time.

Conclusions: The presented results indicate that the task-informed training method can improve the observer performance while providing control over the tradeoff between traditional and task-based measures of image quality. The behavior of a task-informed model fine-tuning procedure was demonstrated, and the impact of task-shift on task-based image quality measures was investigated.

目的最近,人们开发了基于学习的去噪方法,将任务相关信息纳入训练程序,以提高去噪图像的实用性。然而,这一研究方向相对较新,发展还不充分,一些基本问题仍未得到探索。我们的目的是深入了解与这些任务信息方法相关的一般问题。这包括了解当推理时的指定任务不同于模型训练时的指定任务时,去噪对客观图像质量(IQ)测量的影响,我们将这种现象称为 "任务偏移":虚拟成像试验台由胸部 X 射线计算机断层扫描成像系统的风格化计算模型组成,以实现可控、可操作的研究设计。特意采用了一种基于卷积神经网络的典型、完全监督去噪方法,以了解可能与各种应用和更先进的去噪或图像重建方法相关的基本问题。研究考虑了信号已知统计和背景已知统计条件下的信号检测和信号检测定位任务,并采用了几种不同类型的数字观测器来计算任务性能的估计值。研究旨在揭示在所考虑的任务背景下,基于任务的迁移学习方法如何影响图像质量的传统测量方法和基于任务的测量方法之间的权衡。此外,还评估了任务转移对这些图像质量衡量标准的影响:结果表明,可以实现某些权衡,从而显著提高 AUC 值,而物理智商指标的下降在统计上并不明显。此外,还观察到引入任务转移会降低任务性能。当考虑用相对简单的任务进行网络训练,并在推理时评估观察者在更复杂的任务上的表现时,任务性能的下降非常明显:以上结果表明,基于任务的训练方法可以提高观察者的表现,同时还能控制传统图像质量测量方法和基于任务的图像质量测量方法之间的权衡。演示了任务信息模型微调程序的行为,并研究了任务转移对基于任务的图像质量测量的影响。
{"title":"Investigating the use of signal detection information in supervised learning-based image denoising with consideration of task-shift.","authors":"Kaiyan Li, Hua Li, Mark A Anastasio","doi":"10.1117/1.JMI.11.5.055501","DOIUrl":"10.1117/1.JMI.11.5.055501","url":null,"abstract":"<p><strong>Purpose: </strong>Recently, learning-based denoising methods that incorporate task-relevant information into the training procedure have been developed to enhance the utility of the denoised images. However, this line of research is relatively new and underdeveloped, and some fundamental issues remain unexplored. Our purpose is to yield insights into general issues related to these task-informed methods. This includes understanding the impact of denoising on objective measures of image quality (IQ) when the specified task at inference time is different from that employed for model training, a phenomenon we refer to as \"task-shift.\"</p><p><strong>Approach: </strong>A virtual imaging test bed comprising a stylized computational model of a chest X-ray computed tomography imaging system was employed to enable a controlled and tractable study design. A canonical, fully supervised, convolutional neural network-based denoising method was purposely adopted to understand the underlying issues that may be relevant to a variety of applications and more advanced denoising or image reconstruction methods. Signal detection and signal detection-localization tasks under signal-known-statistically with background-known-statistically conditions were considered, and several distinct types of numerical observers were employed to compute estimates of the task performance. Studies were designed to reveal how a task-informed transfer-learning approach can influence the tradeoff between conventional and task-based measures of image quality within the context of the considered tasks. In addition, the impact of task-shift on these image quality measures was assessed.</p><p><strong>Results: </strong>The results indicated that certain tradeoffs can be achieved such that the resulting AUC value was significantly improved and the degradation of physical IQ measures was statistically insignificant. It was also observed that introducing task-shift degrades the task performance as expected. The degradation was significant when a relatively simple task was considered for network training and observer performance on a more complex one was assessed at inference time.</p><p><strong>Conclusions: </strong>The presented results indicate that the task-informed training method can improve the observer performance while providing control over the tradeoff between traditional and task-based measures of image quality. The behavior of a task-informed model fine-tuning procedure was demonstrated, and the impact of task-shift on task-based image quality measures was investigated.</p>","PeriodicalId":47707,"journal":{"name":"Journal of Medical Imaging","volume":"11 5","pages":"055501"},"PeriodicalIF":1.9,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11376226/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0