
Journal of Biomedical Optics: Latest Publications

Optical coherence tomography otoscope for imaging of tympanic membrane and middle ear pathology.
IF 3.0 | CAS Tier 3 (Medicine) | JCR Q2 (Biochemical Research Methods) | Pub Date: 2024-08-01 | Epub Date: 2024-08-20 | DOI: 10.1117/1.JBO.29.8.086005
Wihan Kim, Ryan Long, Zihan Yang, John S Oghalai, Brian E Applegate

Significance: Pathologies within the tympanic membrane (TM) and middle ear (ME) can lead to hearing loss. Imaging tools available in the hearing clinic for diagnosis and management are limited to visual inspection using the classic otoscope. The otoscopic view is limited to the surface of the TM, especially in diseased ears where the TM is opaque. An integrated optical coherence tomography (OCT) otoscope can provide images of the interior of the TM and ME space as well as an otoscope image. This enables clinicians to correlate the standard otoscopic view with OCT and use the new information to improve diagnostic accuracy and management.

Aim: We aim to develop an OCT otoscope that can easily be used in the hearing clinic and demonstrate the system in the hearing clinic, identifying relevant image features of various pathologies not apparent in the standard otoscopic view.

Approach: We developed a portable OCT otoscope device with an improved field of view and form factor; it can be operated solely by the clinician, using an integrated foot pedal to control image acquisition. The device was used to image patients at a hearing clinic.

Results: The field of view of the imaging system was improved to a 7.4 mm diameter, with lateral and axial resolutions of 38 μm and 33.4 μm, respectively. We developed algorithms to resample the images in Cartesian coordinates after collection in spherical polar coordinates and correct the image aberration. We imaged over 100 patients in the hearing clinic at USC Keck Hospital. Here, we identify some of the pathological features evident in the OCT images and highlight cases in which the OCT image provided clinically relevant information that was not available from traditional otoscopic imaging.
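The resampling step described in the Results can be sketched as follows. This is a minimal nearest-neighbor illustration under an assumed scan geometry (a beam pivoting about a single point, with angular fast/slow scan axes); it is not the authors' implementation, and all parameter names are hypothetical.

```python
import numpy as np

def resample_spherical_to_cartesian(volume, r0, dr, dtheta, dphi, grid_shape, voxel):
    """Nearest-neighbor resampling of an OCT volume acquired in spherical
    polar coordinates (r, theta, phi) onto a Cartesian (x, y, z) grid.

    volume       : array of shape (n_theta, n_phi, n_r), A-scans along last axis
    r0, dr       : range to the first sample and radial sample spacing (mm)
    dtheta, dphi : angular steps of the scan (radians)
    grid_shape   : (nx, ny, nz) of the output grid
    voxel        : isotropic Cartesian voxel size (mm)
    """
    n_theta, n_phi, n_r = volume.shape
    nx, ny, nz = grid_shape
    # Cartesian sample positions, centered laterally, z pointing into the tissue
    x = (np.arange(nx) - nx / 2) * voxel
    y = (np.arange(ny) - ny / 2) * voxel
    z = np.arange(nz) * voxel + r0
    X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
    # Invert the spherical mapping: r = |p|, theta/phi = beam deflection angles
    R = np.sqrt(X**2 + Y**2 + Z**2)
    Theta = np.arctan2(X, Z)   # fast-axis scan angle
    Phi = np.arctan2(Y, Z)     # slow-axis scan angle
    # Convert physical coordinates back to nearest array indices
    it = np.rint(Theta / dtheta + n_theta / 2).astype(int)
    ip = np.rint(Phi / dphi + n_phi / 2).astype(int)
    ir = np.rint((R - r0) / dr).astype(int)
    # Mask out grid voxels that fall outside the acquired cone
    valid = ((0 <= it) & (it < n_theta) & (0 <= ip) & (ip < n_phi)
             & (0 <= ir) & (ir < n_r))
    out = np.zeros(grid_shape, dtype=volume.dtype)
    out[valid] = volume[it[valid], ip[valid], ir[valid]]
    return out
```

A production pipeline would use trilinear interpolation rather than nearest-neighbor lookup, but the coordinate inversion is the same.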

Conclusions: The developed OCT otoscope can readily fit into the hearing clinic workflow and provide new relevant information for diagnosing and managing TM and ME disease.

Citations: 0
Validation of multispectral imaging-based tissue oxygen saturation detecting system for wound healing recognition on open wounds.
IF 3.0 | CAS Tier 3 (Medicine) | JCR Q2 (Biochemical Research Methods) | Pub Date: 2024-08-01 | Epub Date: 2024-08-13 | DOI: 10.1117/1.JBO.29.8.086004
Yi-Syuan Shin, Kuo-Shu Hung, Chung-Te Tsai, Meng-Hsuan Wu, Chih-Lung Lin, Yuan-Yu Hsueh

Significance: The multispectral imaging-based tissue oxygen saturation detecting (TOSD) system offers deeper penetration (∼2 to 3 mm) and comprehensive tissue oxygen saturation (StO₂) assessment and recognizes the wound healing phase at low cost and with modest computational requirements. The potential for miniaturization and integration of TOSD into telemedicine platforms could revolutionize wound care in the challenging pandemic era.

Aim: We aim to validate TOSD's application in detecting StO₂ by comparing it with wound closure rates and laser speckle contrast imaging (LSCI), demonstrating TOSD's ability to recognize the wound healing process.

Approach: Utilizing a murine model, we compared TOSD with digital photography and LSCI for comprehensive wound observation in five mice with 6-mm back wounds. Sequential biochemical analysis of wound discharge was investigated for the translational relevance of TOSD.

Results: TOSD demonstrated constant signals on unwounded skin with differential changes on open wounds. Compared with LSCI, TOSD provides indicative recognition of the proliferative phase during wound healing, with a higher correlation coefficient to wound closure rate (TOSD: 0.58; LSCI: 0.44). StO₂ detected by TOSD was further correlated with proliferative phase angiogenesis markers.
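The correlation coefficients quoted above (TOSD: 0.58; LSCI: 0.44) compare each modality's signal against the wound closure rate. Such a coefficient is straightforward to compute; the sketch below uses Pearson's r on purely illustrative values, not the study's data.

```python
import numpy as np

# Hypothetical paired measurements over a healing time course (illustrative only):
# fraction of wound closed vs. the StO2 value reported by the imaging system.
wound_closure_rate = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90])
tosd_sto2          = np.array([0.40, 0.48, 0.55, 0.60, 0.58, 0.52])

# The off-diagonal entry of the 2x2 correlation matrix is Pearson's r
r = np.corrcoef(wound_closure_rate, tosd_sto2)[0, 1]
print(f"Pearson r = {r:.2f}")
```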

Conclusions: Our findings suggest TOSD's enhanced utility in wound management protocols, evaluating clinical staging and therapeutic outcomes. By offering a noncontact, convenient monitoring tool, TOSD can be applied to telemedicine, aiming to advance wound care and regeneration, potentially improving patient outcomes and reducing healthcare costs associated with chronic wounds.

Citations: 0
Improving diffuse optical tomography imaging quality using APU-Net: an attention-based physical U-Net model.
IF 3.0 | CAS Tier 3 (Medicine) | JCR Q2 (Biochemical Research Methods) | Pub Date: 2024-08-01 | Epub Date: 2024-07-25 | DOI: 10.1117/1.JBO.29.8.086001
Minghao Xue, Shuying Li, Quing Zhu

Significance: Traditional diffuse optical tomography (DOT) reconstructions are hampered by image artifacts arising from factors such as DOT sources being closer to shallow lesions, poor optode-tissue coupling, tissue heterogeneity, and large high-contrast lesions lacking information in deeper regions (known as shadowing effect). Addressing these challenges is crucial for improving the quality of DOT images and obtaining robust lesion diagnosis.

Aim: We address the limitations of current DOT imaging reconstruction by introducing an attention-based U-Net (APU-Net) model to enhance the image quality of DOT reconstruction, ultimately improving lesion diagnostic accuracy.

Approach: We designed an APU-Net model incorporating a contextual transformer attention module to enhance DOT reconstruction. The model was trained on simulation and phantom data, focusing on challenges such as artifact-induced distortions and lesion-shadowing effects. The model was then evaluated by the clinical data.

Results: Transitioning from simulation and phantom data to clinical patients' data, our APU-Net model effectively reduced artifacts with an average artifact contrast decrease of 26.83% and improved image quality. In addition, statistical analyses revealed significant contrast improvements in depth profile with an average contrast increase of 20.28% and 45.31% for the second and third target layers, respectively. These results highlighted the efficacy of our approach in breast cancer diagnosis.
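The "average artifact contrast decrease of 26.83%" above is a relative change averaged over image pairs. As a sketch of the arithmetic only, one hypothetical way to express such a metric (the contrast values below are invented, and this is not necessarily the authors' exact definition):

```python
def percent_decrease(before, after):
    """Relative decrease of a contrast value, in percent."""
    return 100.0 * (before - after) / before

# Hypothetical artifact-contrast values before/after APU-Net enhancement
pairs = [(0.82, 0.60), (0.75, 0.55), (0.90, 0.66)]
decreases = [percent_decrease(b, a) for b, a in pairs]
avg = sum(decreases) / len(decreases)
print(f"average artifact contrast decrease: {avg:.2f}%")
```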

Conclusions: The APU-Net model improves the image quality of DOT reconstruction by reducing DOT image artifacts and improving the target depth profile.

Citations: 0
DermoGAN: multi-task cycle generative adversarial networks for unsupervised automatic cell identification on in-vivo reflectance confocal microscopy images of the human epidermis.
IF 3.0 | CAS Tier 3 (Medicine) | JCR Q2 (Biochemical Research Methods) | Pub Date: 2024-08-01 | Epub Date: 2024-08-02 | DOI: 10.1117/1.JBO.29.8.086003
Imane Lboukili, Georgios Stamatas, Xavier Descombes

Significance: Accurate identification of epidermal cells on reflectance confocal microscopy (RCM) images is important in the study of epidermal architecture and topology of both healthy and diseased skin. However, analysis of these images is currently done manually and therefore time-consuming and subject to human error and inter-expert interpretation. It is also hindered by low image quality due to noise and heterogeneity.

Aim: We aimed to design an automated pipeline for the analysis of the epidermal structure from RCM images.

Approach: Two attempts have been made at automatically localizing epidermal cells, called keratinocytes, on RCM images: the first is based on a rotationally symmetric error function mask, and the second on cell morphological features. Here, we propose a dual-task network to automatically identify keratinocytes on RCM images. Each task consists of a cycle generative adversarial network. The first task aims to translate real RCM images into binary images, thus learning the noise and texture model of RCM images, whereas the second task maps Gabor-filtered RCM images into binary images, learning the epidermal structure visible on RCM images. The combination of the two tasks allows one task to constrict the solution space of the other, thus improving overall results. We refine our cell identification by applying the pre-trained StarDist algorithm to detect star-convex shapes, thus closing any incomplete membranes and separating neighboring cells.
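The second task above takes Gabor-filtered RCM images as input. A Gabor filter is a Gaussian envelope multiplied by a sinusoidal carrier, commonly used to emphasize oriented texture such as the honeycomb pattern of epidermal cell membranes. The sketch below builds such a kernel in NumPy; the kernel parameters and the naive convolution are generic illustrations, not DermoGAN's actual preprocessing.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, wavelength=10.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel: Gaussian envelope times cosine carrier."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates to the filter orientation
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    kernel = envelope * carrier
    return kernel - kernel.mean()  # zero-mean so flat regions respond with 0

def filter_image(image, kernel):
    """Naive 'same'-size 2-D cross-correlation via explicit loops."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="reflect")
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

In practice a bank of kernels at several orientations (varying `theta`) would be applied and the responses combined, and the convolution would be done with an FFT or a library routine rather than explicit loops.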

Results: The results are evaluated both on simulated data and manually annotated real RCM data. Accuracy is measured using recall and precision metrics, which are summarized as the F1-score.

Conclusions: We demonstrate that the proposed fully unsupervised method successfully identifies keratinocytes on RCM images of the epidermis, with an accuracy on par with experts' cell identification, is not constrained by limited available annotated data, and can be extended to images acquired using various imaging techniques without retraining.

Citations: 0
Tutorial on phantoms for photoacoustic imaging applications.
IF 3.0 | CAS Tier 3 (Medicine) | JCR Q2 (Biochemical Research Methods) | Pub Date: 2024-08-01 | Epub Date: 2024-08-14 | DOI: 10.1117/1.JBO.29.8.080801
Lina Hacker, James Joseph, Ledia Lilaj, Srirang Manohar, Aoife M Ivory, Ran Tao, Sarah E Bohndiek

Significance: Photoacoustic imaging (PAI) is an emerging technology that holds high promise in a wide range of clinical applications, but standardized methods for system testing are lacking, impeding objective device performance evaluation, calibration, and inter-device comparisons. To address this shortfall, this tutorial offers readers structured guidance in developing tissue-mimicking phantoms for photoacoustic applications with potential extensions to certain acoustic and optical imaging applications.

Aim: The tutorial review aims to summarize recommendations on phantom development for PAI applications to harmonize efforts in standardization and system calibration in the field.

Approach: The International Photoacoustic Standardization Consortium has conducted a consensus exercise to define recommendations for the development of tissue-mimicking phantoms in PAI.

Results: Recommendations on phantom development are summarized in seven defined steps, expanding from (1) general understanding of the imaging modality, definition of (2) relevant terminology and parameters and (3) phantom purposes, recommendation of (4) basic material properties, (5) material characterization methods, and (6) phantom design to (7) reproducibility efforts.

Conclusions: The tutorial offers a comprehensive framework for the development of tissue-mimicking phantoms in PAI to streamline efforts in system testing and push forward the advancement and translation of the technology.

Citations: 0
Convolutional neural network-based regression analysis to predict subnuclear chromatin organization from two-dimensional optical scattering signals.
IF 3.0 | CAS Tier 3 (Medicine) | JCR Q2 (Biochemical Research Methods) | Pub Date: 2024-08-01 | Epub Date: 2024-08-28 | DOI: 10.1117/1.JBO.29.8.080502
Yazdan Al-Kurdi, Cem Direkoǧlu, Meryem Erbilek, Dizem Arifler

Significance: Azimuth-resolved optical scattering signals obtained from cell nuclei are sensitive to changes in their internal refractive index profile. These two-dimensional signals can therefore offer significant insights into chromatin organization.

Aim: We aim to determine whether two-dimensional scattering signals can be used in an inverse scheme to extract the spatial correlation length ℓ_c and extent δn of subnuclear refractive index fluctuations to provide quantitative information on chromatin distribution.

Approach: Since an analytical formulation that links azimuth-resolved signals to ℓ_c and δn is not feasible, we set out to assess the potential of machine learning to predict these parameters via a data-driven approach. We carry out a convolutional neural network (CNN)-based regression analysis on 198 numerically computed signals for nuclear models constructed with ℓ_c varying in steps of 0.1 μm between 0.4 and 1.0 μm, and δn varying in steps of 0.005 between 0.005 and 0.035. We quantify the performance of our analysis using a five-fold cross-validation technique.

Results: The results show agreement between the true and predicted values for both ℓ_c and δn, with mean absolute percent errors of 8.5% and 13.5%, respectively. These errors are smaller than the minimum percent increment between successive values for respective parameters characterizing the constructed models and thus signify an extremely good prediction performance over the range of interest.
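The two quality measures used above, mean absolute percent error and five-fold cross-validation, can be sketched generically as follows. The prediction values in the example are invented for illustration; only the formulas and the 198-signal count come from the abstract.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percent error, the metric reported for the regression."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs(y_true - y_pred) / np.abs(y_true))

def five_fold_indices(n_samples, seed=0):
    """Shuffle sample indices and split them into five disjoint folds;
    each fold serves once as the validation set, the rest as training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, 5)

folds = five_fold_indices(198)  # 198 computed signals, as in the study
assert sum(len(f) for f in folds) == 198

# Illustrative predictions for a handful of correlation-length values
y_true = np.array([0.4, 0.5, 0.6, 0.7, 0.8])
y_pred = np.array([0.44, 0.47, 0.63, 0.66, 0.85])
print(f"MAPE = {mape(y_true, y_pred):.1f}%")
```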

Conclusions: Our results reveal that CNN-based regression can be a powerful approach for exploiting the information content of two-dimensional optical scattering signals and hence monitoring chromatin organization in a quantitative manner.

{"title":"Convolutional neural network-based regression analysis to predict subnuclear chromatin organization from two-dimensional optical scattering signals.","authors":"Yazdan Al-Kurdi, Cem Direkoǧlu, Meryem Erbilek, Dizem Arifler","doi":"10.1117/1.JBO.29.8.080502","DOIUrl":"10.1117/1.JBO.29.8.080502","url":null,"abstract":"<p><strong>Significance: </strong>Azimuth-resolved optical scattering signals obtained from cell nuclei are sensitive to changes in their internal refractive index profile. These two-dimensional signals can therefore offer significant insights into chromatin organization.</p><p><strong>Aim: </strong>We aim to determine whether two-dimensional scattering signals can be used in an inverse scheme to extract the spatial correlation length <math> <mrow><msub><mi>ℓ</mi> <mi>c</mi></msub> </mrow> </math> and extent <math><mrow><mi>δ</mi> <mi>n</mi></mrow> </math> of subnuclear refractive index fluctuations to provide quantitative information on chromatin distribution.</p><p><strong>Approach: </strong>Since an analytical formulation that links azimuth-resolved signals to <math> <mrow><msub><mi>ℓ</mi> <mi>c</mi></msub> </mrow> </math> and <math><mrow><mi>δ</mi> <mi>n</mi></mrow> </math> is not feasible, we set out to assess the potential of machine learning to predict these parameters via a data-driven approach. We carry out a convolutional neural network (CNN)-based regression analysis on 198 numerically computed signals for nuclear models constructed with <math> <mrow><msub><mi>ℓ</mi> <mi>c</mi></msub> </mrow> </math> varying in steps of <math><mrow><mn>0.1</mn> <mtext>  </mtext> <mi>μ</mi> <mi>m</mi></mrow> </math> between 0.4 and <math><mrow><mn>1.0</mn> <mtext>  </mtext> <mi>μ</mi> <mi>m</mi></mrow> </math> , and <math><mrow><mi>δ</mi> <mi>n</mi></mrow> </math> varying in steps of 0.005 between 0.005 and 0.035. 
We quantify the performance of our analysis using a five-fold cross-validation technique.</p><p><strong>Results: </strong>The results show agreement between the true and predicted values for both <math> <mrow><msub><mi>ℓ</mi> <mi>c</mi></msub> </mrow> </math> and <math><mrow><mi>δ</mi> <mi>n</mi></mrow> </math> , with mean absolute percent errors of 8.5% and 13.5%, respectively. These errors are smaller than the minimum percent increment between successive values for respective parameters characterizing the constructed models and thus signify an extremely good prediction performance over the range of interest.</p><p><strong>Conclusions: </strong>Our results reveal that CNN-based regression can be a powerful approach for exploiting the information content of two-dimensional optical scattering signals and hence monitoring chromatin organization in a quantitative manner.</p>","PeriodicalId":15264,"journal":{"name":"Journal of Biomedical Optics","volume":"29 8","pages":"080502"},"PeriodicalIF":3.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11350520/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142107840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
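The five-fold cross-validation and mean-absolute-percent-error evaluation described in the abstract can be sketched as follows. The parameter grid matches the abstract, but the synthetic features and the 1-nearest-neighbour regressor standing in for the CNN are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameter grid from the abstract: spatial correlation length l_c and
# refractive-index fluctuation extent dn of the nuclear models.
l_c = np.arange(0.4, 1.01, 0.1)          # 0.4 ... 1.0 um, step 0.1
dn = np.arange(0.005, 0.0351, 0.005)     # 0.005 ... 0.035, step 0.005
params = np.array([(a, b) for a in l_c for b in dn])  # 49 (l_c, dn) pairs

# Stand-in "scattering signals": noisy features that depend on the parameters
# (the real inputs are 2D azimuth-resolved signals fed to a CNN).
X = params @ np.array([[1.0, 0.3], [40.0, 5.0]]).T
X += rng.normal(scale=0.01, size=X.shape)
y = params

def mape(true, pred):
    """Mean absolute percent error, per parameter column."""
    return 100.0 * np.mean(np.abs((pred - true) / true), axis=0)

# Five-fold cross-validation with a 1-NN regressor as the placeholder model.
idx = rng.permutation(len(X))
folds = np.array_split(idx, 5)
errs = []
for k in range(5):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(5) if j != k])
    # 1-NN prediction in feature space
    d = np.linalg.norm(X[test][:, None, :] - X[train][None, :, :], axis=2)
    pred = y[train][np.argmin(d, axis=1)]
    errs.append(mape(y[test], pred))

cv_mape = np.mean(errs, axis=0)  # [MAPE(l_c), MAPE(dn)] in percent
```

Any regressor exposing fit/predict could replace the nearest-neighbour placeholder; the cross-validation loop and the MAPE metric are the parts the abstract actually specifies.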
NerveTracker: a Python-based software toolkit for visualizing and tracking groups of nerve fibers in serial block-face microscopy with ultraviolet surface excitation images. NerveTracker:基于 Python 的软件工具包,用于在序列块面显微镜下通过紫外表面激发图像观察和跟踪神经纤维组。
IF 3 CAS Tier 3 (Medicine) Q2 BIOCHEMICAL RESEARCH METHODS Pub Date : 2024-07-01 Epub Date: 2024-06-18 DOI: 10.1117/1.JBO.29.7.076501
Chaitanya Kolluru, Naomi Joseph, James Seckler, Farzad Fereidouni, Richard Levenson, Andrew Shoffstall, Michael Jenkins, David Wilson

Significance: Information about the spatial organization of fibers within a nerve is crucial to our understanding of nerve anatomy and its response to neuromodulation therapies. A serial block-face microscopy method [three-dimensional microscopy with ultraviolet surface excitation (3D-MUSE)] has been developed to image nerves over extended depths ex vivo. To routinely visualize and track nerve fibers in these datasets, a dedicated and customizable software tool is required.

Aim: Our objective was to develop custom software that includes image processing and visualization methods to perform microscopic tractography along the length of a peripheral nerve sample.

Approach: We modified common computer vision algorithms (optic flow and structure tensor) to track groups of peripheral nerve fibers along the length of the nerve. Interactive streamline visualization and manual editing tools are provided. Optionally, deep learning segmentation of fascicles (fiber bundles) can be applied to constrain the tracts from inadvertently crossing into the epineurium. As an example, we performed tractography on vagus and tibial nerve datasets and assessed accuracy by comparing the resulting nerve tracts with segmentations of fascicles as they split and merge with each other in the nerve sample stack.
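The structure-tensor building block mentioned in the approach can be illustrated on a synthetic 2D image. The box-filter smoothing and its scale below are illustrative choices, not NerveTracker's actual settings.

```python
import numpy as np

def structure_tensor_orientation(img, sigma=1.0):
    """Dominant local gradient orientation from the 2D structure tensor.

    Gradients are taken with central differences; tensor components are
    smoothed with a box filter of half-width `sigma` pixels. Returns the
    orientation angle (radians) at each pixel; fibers run perpendicular
    to the dominant gradient direction.
    """
    gy, gx = np.gradient(img.astype(float))
    Jxx, Jxy, Jyy = gx * gx, gx * gy, gy * gy
    w = max(1, int(sigma))
    k = np.ones((2 * w + 1, 2 * w + 1))
    k /= k.sum()

    def smooth(a):
        # naive 'same'-size 2D box filtering
        pad = np.pad(a, w)
        out = np.zeros_like(a)
        for i in range(a.shape[0]):
            for j in range(a.shape[1]):
                out[i, j] = np.sum(pad[i:i + 2 * w + 1, j:j + 2 * w + 1] * k)
        return out

    Jxx, Jxy, Jyy = smooth(Jxx), smooth(Jxy), smooth(Jyy)
    # Eigen-direction of the smoothed tensor (axis of largest gradient energy).
    return 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)

# Synthetic image with vertical stripes: intensity varies only along x,
# so the dominant gradient orientation is 0 rad (the x axis).
x = np.arange(64)
img = np.sin(2 * np.pi * x / 8)[None, :].repeat(64, axis=0)
theta = structure_tensor_orientation(img)
```

In practice a Gaussian window replaces the box filter and the per-slice orientations seed the streamline propagation; the tensor algebra is unchanged.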

Results: We found that a normalized Dice overlap (Dice_norm) metric had a mean value above 0.75 across several millimeters along the nerve. We also found that the tractograms were robust to changes in certain image properties (e.g., downsampling in-plane and out-of-plane), which resulted in only a 2% to 9% change to the mean Dice_norm values. In a vagus nerve sample, tractography allowed us to readily identify that subsets of fibers from four distinct fascicles merge into a single fascicle as we move ∼5 mm along the nerve's length.
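The reported Dice_norm metric builds on the standard Dice overlap; the abstract does not define the normalization, so only the plain Dice coefficient between a tract mask and a fascicle segmentation is sketched here, on a toy example.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = identical, 0.0 = disjoint)."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks overlap trivially
    return 2.0 * np.logical_and(a, b).sum() / denom

# Tract mask vs. fascicle segmentation on one slice (toy 8x8 example).
tracts = np.zeros((8, 8), int)
tracts[2:6, 2:6] = 1          # 16 pixels
fascicle = np.zeros((8, 8), int)
fascicle[3:7, 2:6] = 1        # 16 pixels, shifted down by one row

score = dice(tracts, fascicle)  # overlap 12 px -> 2*12 / (16+16) = 0.75
```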

Conclusions: Overall, we demonstrated the feasibility of performing automated microscopic tractography on 3D-MUSE datasets of peripheral nerves. The software should be applicable to other imaging approaches. The code is available at https://github.com/ckolluru/NerveTracker.

Citations: 0
Optimization of handheld spectrally encoded coherence tomography and reflectometry for point-of-care ophthalmic diagnostic imaging. 优化用于护理点眼科诊断成像的手持式光谱编码相干断层扫描和反射测量仪。
IF 3 CAS Tier 3 (Medicine) Q2 BIOCHEMICAL RESEARCH METHODS Pub Date : 2024-07-01 Epub Date: 2024-07-24 DOI: 10.1117/1.JBO.29.7.076006
Jacob J Watson, Rachel Hecht, Yuankai K Tao

Significance: Handheld optical coherence tomography (HH-OCT) systems enable point-of-care ophthalmic imaging in bedridden, uncooperative, and pediatric patients. Handheld spectrally encoded coherence tomography and reflectometry (HH-SECTR) combines OCT and spectrally encoded reflectometry (SER) to address critical clinical challenges in HH-OCT imaging with real-time en face retinal aiming for OCT volume alignment and volumetric correction of motion artifacts that occur during HH-OCT imaging.

Aim: We aim to enable robust clinical translation of HH-SECTR and improve clinical ergonomics during point-of-care OCT imaging for ophthalmic diagnostics.

Approach: HH-SECTR is redesigned with (1) optimized SER optical imaging for en face retinal aiming and retinal tracking for motion correction, (2) a modular aluminum form factor for sustained alignment and probe stability for longitudinal clinical studies, and (3) one-handed photographer-ergonomic motorized focus adjustment.

Results: We demonstrate an HH-SECTR imaging probe with micron-scale optical-optomechanical stability and use it for in vivo human retinal imaging and volumetric motion correction.

Conclusions: This research will benefit the clinical translation of HH-SECTR for point-of-care ophthalmic diagnostics.
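The abstract does not specify the motion-correction algorithm; a common building block for registering sequential en face frames is FFT-based cross-correlation shift estimation, sketched here under that assumption.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Integer-pixel (dy, dx) correction for `moved` relative to `ref`,
    estimated by circular FFT cross-correlation. Passing the result to
    np.roll(moved, (dy, dx), axis=(0, 1)) realigns `moved` onto `ref`.
    """
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-range to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(1)
ref = rng.normal(size=(64, 64))                      # reference en face frame
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))     # simulated eye motion
dy, dx = estimate_shift(ref, moved)
corrected = np.roll(moved, shift=(dy, dx), axis=(0, 1))
```

Real retinal frames need sub-pixel refinement and windowing to suppress wrap-around artifacts; the circular model here is the minimal version.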

Citations: 0
Non-contact elasticity contrast imaging using photon counting. 利用光子计数进行非接触式弹性对比成像。
IF 3 CAS Tier 3 (Medicine) Q2 BIOCHEMICAL RESEARCH METHODS Pub Date : 2024-07-01 Epub Date: 2024-07-10 DOI: 10.1117/1.JBO.29.7.076003
Zipei Zheng, Yong Meng Sua, Shenyu Zhu, Patrick Rehain, Yu-Ping Huang

Significance: Tissues' biomechanical properties, such as elasticity, are related to tissue health. Optical coherence elastography produces images of tissues based on their elasticity, but its performance is constrained by the laser power used, working distance, and excitation methods.

Aim: We develop a new method to reconstruct the elasticity contrast image over a long working distance, with only low-intensity illumination, and by non-contact acoustic wave excitation.

Approach: We combine single-photon vibrometry and quantum parametric mode sorting (QPMS) to measure the oscillating backscattered signals at a single-photon level and derive the phantoms' relative elasticity.

Results: We test our system on tissue-mimicking phantoms consisting of contrast sections with different concentrations and thus stiffness. Our results show that as the driving acoustic frequency is swept, the phantoms' vibrational responses are mapped onto the photon-counting histograms from which their mechanical properties, including elasticity, can be derived. Through lateral and longitudinal laser scanning at a fixed frequency, a contrast image based on samples' elasticity can be reliably reconstructed from photon-level signals.

Conclusions: We demonstrated the reliability of QPMS-based elasticity contrast imaging of agar phantoms in a long working distance, low-intensity environment. This technique has the potential for in-depth images of real biological tissue and provides a new approach to elastography research and applications.
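The frequency-sweep readout can be mimicked with a toy model: photon counts follow Poisson statistics around a Lorentzian vibrational response, and the resonance frequency recovered from the sweep separates soft from stiff phantoms. The count rates, line width, and resonance values below are invented for illustration and are not the paper's measured values.

```python
import numpy as np

rng = np.random.default_rng(2)

def sweep_response(f_res, freqs, base_rate=50.0, gain=200.0, width=5.0):
    """Mean photon counts per gate as the acoustic drive is swept.

    A Lorentzian line centred at the sample's mechanical resonance `f_res`
    (Hz) stands in for the vibrational response; the counts carry Poisson
    photon statistics on top of a background `base_rate`.
    """
    response = gain * width**2 / ((freqs - f_res) ** 2 + width**2)
    return rng.poisson(base_rate + response)

freqs = np.arange(100.0, 300.0, 2.0)         # swept drive frequencies, Hz
soft = sweep_response(150.0, freqs)          # softer phantom: lower resonance
stiff = sweep_response(240.0, freqs)         # stiffer phantom: higher resonance

# Relative elasticity contrast: resonance frequency recovered from each sweep.
f_soft = freqs[np.argmax(soft)]
f_stiff = freqs[np.argmax(stiff)]
```

Scanning the beam laterally and repeating this per-pixel estimate is what builds the elasticity contrast image described in the abstract.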

Citations: 0
Assessing spectral effectiveness in color fundus photography for deep learning classification of retinopathy of prematurity. 评估用于早产儿视网膜病变深度学习分类的彩色眼底照片的光谱有效性。
IF 3 CAS Tier 3 (Medicine) Q2 BIOCHEMICAL RESEARCH METHODS Pub Date : 2024-07-01 Epub Date: 2024-06-18 DOI: 10.1117/1.JBO.29.7.076001
Behrouz Ebrahimi, David Le, Mansour Abtahi, Albert K Dadzie, Alfa Rossi, Mojtaba Rahimi, Taeyoon Son, Susan Ostmo, J Peter Campbell, R V Paul Chan, Xincheng Yao

Significance: Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green for enhanced depth information and improved diagnostic capabilities.

Aim: This study aims to assess the spectral effectiveness in color fundus photography for the deep learning classification of ROP.

Approach: A convolutional neural network end-to-end classifier was utilized for deep learning classification of normal, stage 1, stage 2, and stage 3 ROP fundus images. The classification performances with individual-color-channel inputs, i.e., red, green, and blue, and multi-color-channel fusion architectures, including early-fusion, intermediate-fusion, and late-fusion, were quantitatively compared.

Results: For individual-color-channel inputs, similar performance was observed for green channel (88.00% accuracy, 76.00% sensitivity, and 92.00% specificity) and red channel (87.25% accuracy, 74.50% sensitivity, and 91.50% specificity), which is substantially outperforming the blue channel (78.25% accuracy, 56.50% sensitivity, and 85.50% specificity). For multi-color-channel fusion options, the early-fusion and intermediate-fusion architecture showed almost the same performance when compared to the green/red channel input, and they outperformed the late-fusion architecture.

Conclusions: This study reveals that the classification of ROP stages can be effectively achieved using either the green or red image alone. This finding enables the exclusion of blue images, acknowledged for their increased susceptibility to light toxicity.
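The single-channel and early-fusion input arrangements compared in the study can be sketched at the data level. The batch and image sizes here are arbitrary; intermediate and late fusion would instead merge per-channel CNN branches deeper in the network rather than at the input.

```python
import numpy as np

rng = np.random.default_rng(3)
# A batch of RGB fundus images, channels-last (batch, H, W, 3).
fundus = rng.integers(0, 256, size=(4, 224, 224, 3), dtype=np.uint8)

# Single-colour-channel inputs: keep one plane, retaining a channel axis of 1
# so the array still matches a CNN's (batch, H, W, C) input convention.
red = fundus[..., 0:1]
green = fundus[..., 1:2]
blue = fundus[..., 2:3]

# Early fusion: all three planes stacked at the input layer, so the first
# convolution sees the colour channels jointly.
early_fusion = np.concatenate([red, green, blue], axis=-1)
```

Given the reported results, the green-only or red-only arrays would feed the classifier with little loss of accuracy, allowing the blue plane to be dropped entirely.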

Citations: 0