Pub Date : 2024-08-01Epub Date: 2024-08-20DOI: 10.1117/1.JBO.29.8.086005
Wihan Kim, Ryan Long, Zihan Yang, John S Oghalai, Brian E Applegate
Significance: Pathologies within the tympanic membrane (TM) and middle ear (ME) can lead to hearing loss. Imaging tools available in the hearing clinic for diagnosis and management are limited to visual inspection using the classic otoscope. The otoscopic view is limited to the surface of the TM, especially in diseased ears where the TM is opaque. An integrated optical coherence tomography (OCT) otoscope can provide images of the interior of the TM and ME space as well as an otoscope image. This enables clinicians to correlate the standard otoscopic view with OCT and use the new information to improve diagnostic accuracy and management.
Aim: We aim to develop an OCT otoscope that can easily be used in the hearing clinic and demonstrate the system in the hearing clinic, identifying relevant image features of various pathologies not apparent in the standard otoscopic view.
Approach: We developed a portable OCT otoscope device featuring an improved field of view and form-factor that can be operated solely by the clinician using an integrated foot pedal to control image acquisition. The device was used to image patients at a hearing clinic.
Results: The field of view of the imaging system was improved to a 7.4 mm diameter, with lateral and axial resolutions of 38 μm and 33.4 μm, respectively. We developed algorithms to resample the images in Cartesian coordinates after collection in spherical polar coordinates and correct the image aberration. We imaged over 100 patients in the hearing clinic at USC Keck Hospital. Here, we identify some of the pathological features evident in the OCT images and highlight cases in which the OCT image provided clinically relevant information that was not available from traditional otoscopic imaging.
Conclusions: The developed OCT otoscope can readily fit into the hearing clinic workflow and provide new relevant information for diagnosing and managing TM and ME disease.
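The resampling step described in the results (acquisition in spherical polar coordinates, display in Cartesian coordinates) can be sketched with standard interpolation. The scan geometry below, two small-angle mirror deflections about a common pivot, is an assumption for illustration only, not the paper's calibrated model:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def resample_spherical_to_cartesian(vol, r_max, theta_max, phi_max,
                                    out_shape=(64, 64, 64)):
    """Resample a volume acquired on a spherical-polar grid (r, theta, phi)
    onto a regular Cartesian (x, y, z) grid via trilinear interpolation.

    vol: array indexed as vol[r_idx, theta_idx, phi_idx]
    r_max: maximum scan depth; theta_max/phi_max: scan half-angles (radians)
    """
    nr, nt, np_ = vol.shape
    # Cartesian sample points, z along the scan axis
    lim = r_max * np.sin(max(theta_max, phi_max))
    x, y, z = np.meshgrid(
        np.linspace(-lim, lim, out_shape[0]),
        np.linspace(-lim, lim, out_shape[1]),
        np.linspace(1e-6, r_max, out_shape[2]),
        indexing="ij",
    )
    # Invert the assumed scan geometry: radius and the two deflection angles
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(x, z)   # fast-axis deflection
    phi = np.arctan2(y, z)     # slow-axis deflection
    # Map physical coordinates to fractional array indices
    idx_r = r / r_max * (nr - 1)
    idx_t = (theta / theta_max + 1) / 2 * (nt - 1)
    idx_p = (phi / phi_max + 1) / 2 * (np_ - 1)
    # Trilinear interpolation; points outside the scanned cone become 0
    return map_coordinates(vol, [idx_r, idx_t, idx_p], order=1, cval=0.0)
```

Correcting residual image aberration would require the system's measured distortion model, which is not reproduced here.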
"Optical coherence tomography otoscope for imaging of tympanic membrane and middle ear pathology." Wihan Kim, Ryan Long, Zihan Yang, John S Oghalai, Brian E Applegate. Journal of Biomedical Optics 29(8):086005. DOI: 10.1117/1.JBO.29.8.086005. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11334941/pdf/
Significance: The multispectral imaging-based tissue oxygen saturation detecting (TOSD) system offers deeper penetration (∼2 to 3 mm) and comprehensive tissue oxygen saturation (StO2) assessment and recognizes the wound healing phase at a low cost and computational requirement. The potential for miniaturization and integration of TOSD into telemedicine platforms could revolutionize wound care in the challenging pandemic era.
Aim: We aim to validate TOSD's application in detecting StO2 by comparing it with wound closure rates and laser speckle contrast imaging (LSCI), demonstrating TOSD's ability to recognize the wound healing process.
Approach: Utilizing a murine model, we compared TOSD with digital photography and LSCI for comprehensive wound observation in five mice with 6-mm back wounds. Sequential biochemical analysis of wound discharge was investigated for the translational relevance of TOSD.
Results: TOSD demonstrated constant signals on unwounded skin with differential changes on open wounds. Compared with LSCI, TOSD provides indicative recognition of the proliferative phase during wound healing, with a higher correlation coefficient to wound closure rate (TOSD: 0.58; LSCI: 0.44). StO2 detected by TOSD was further correlated with proliferative phase angiogenesis markers.
Conclusions: Our findings suggest TOSD's enhanced utility in wound management protocols, evaluating clinical staging and therapeutic outcomes. By offering a noncontact, convenient monitoring tool, TOSD can be applied to telemedicine, aiming to advance wound care and regeneration, potentially improving patient outcomes and reducing healthcare costs associated with chronic wounds.
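The StO2 quantity above is conventionally the oxygenated fraction of total hemoglobin. As a generic illustration of how per-wavelength absorbance measurements can be unmixed into StO2 via linear Beer-Lambert least squares (not the authors' pipeline; the wavelengths and extinction coefficients below are approximate literature values used only for the sketch):

```python
import numpy as np

# Illustrative molar extinction coefficients for [HbO2, Hb] at four
# wavelengths -- approximate literature magnitudes, not from the paper.
WAVELENGTHS_NM = [660, 730, 810, 850]
EXT = np.array([
    [319.6, 3226.6],   # 660 nm
    [390.0, 1102.2],   # 730 nm
    [864.0, 717.1],    # 810 nm (near isosbestic)
    [1058.0, 691.3],   # 850 nm
])

def sto2_from_absorbance(delta_a):
    """Estimate StO2 = HbO2 / (HbO2 + Hb) from per-wavelength absorbance
    changes by solving the linear unmixing problem EXT @ conc = delta_a."""
    conc, *_ = np.linalg.lstsq(EXT, np.asarray(delta_a, float), rcond=None)
    hbo2, hb = conc
    return hbo2 / (hbo2 + hb)
```

Applied per pixel, this yields the StO2 map that the wound-phase analysis would operate on.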
"Validation of multispectral imaging-based tissue oxygen saturation detecting system for wound healing recognition on open wounds." Yi-Syuan Shin, Kuo-Shu Hung, Chung-Te Tsai, Meng-Hsuan Wu, Chih-Lung Lin, Yuan-Yu Hsueh. Journal of Biomedical Optics 29(8):086004, published 2024-08-01. DOI: 10.1117/1.JBO.29.8.086004. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11321076/pdf/
Pub Date : 2024-08-01Epub Date: 2024-07-25DOI: 10.1117/1.JBO.29.8.086001
Minghao Xue, Shuying Li, Quing Zhu
Significance: Traditional diffuse optical tomography (DOT) reconstructions are hampered by image artifacts arising from factors such as DOT sources being closer to shallow lesions, poor optode-tissue coupling, tissue heterogeneity, and large high-contrast lesions lacking information in deeper regions (known as shadowing effect). Addressing these challenges is crucial for improving the quality of DOT images and obtaining robust lesion diagnosis.
Aim: We address the limitations of current DOT imaging reconstruction by introducing an attention-based U-Net (APU-Net) model to enhance the image quality of DOT reconstruction, ultimately improving lesion diagnostic accuracy.
Approach: We designed an APU-Net model incorporating a contextual transformer attention module to enhance DOT reconstruction. The model was trained on simulation and phantom data, focusing on challenges such as artifact-induced distortions and lesion-shadowing effects. The model was then evaluated on clinical data.
Results: Transitioning from simulation and phantom data to clinical patients' data, our APU-Net model effectively reduced artifacts with an average artifact contrast decrease of 26.83% and improved image quality. In addition, statistical analyses revealed significant contrast improvements in depth profile with an average contrast increase of 20.28% and 45.31% for the second and third target layers, respectively. These results highlighted the efficacy of our approach in breast cancer diagnosis.
Conclusions: The APU-Net model improves the image quality of DOT reconstruction by reducing DOT image artifacts and improving the target depth profile.
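The paper's contextual transformer attention module is not reproduced here; as a generic numpy illustration of how an attention mechanism can gate U-Net skip-connection features (an additive attention gate with hypothetical 1×1-conv weights, a different and simpler design than APU-Net's):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention gate on U-Net skip features (illustrative sketch).

    x:   skip-connection features, shape (C, H, W)
    g:   gating features from the decoder, shape (C, H, W)
    w_x, w_g: (C_int, C) weights acting as 1x1 convolutions
    psi: (C_int,) projection producing one attention scalar per pixel
    Returns x scaled by a per-pixel attention map in (0, 1).
    """
    c, h, w = x.shape
    xf = x.reshape(c, -1)                      # flatten spatial dims
    gf = g.reshape(c, -1)
    a = np.maximum(w_x @ xf + w_g @ gf, 0.0)   # ReLU(Wx*x + Wg*g)
    attn = sigmoid(psi @ a).reshape(h, w)      # scalar attention per pixel
    return x * attn[None, :, :]
```

In a full network, such a gate sits on each skip connection so the decoder can suppress artifact-dominated regions before concatenation.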
"Improving diffuse optical tomography imaging quality using APU-Net: an attention-based physical U-Net model." Minghao Xue, Shuying Li, Quing Zhu. Journal of Biomedical Optics 29(8):086001. DOI: 10.1117/1.JBO.29.8.086001. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11272096/pdf/
Significance: Accurate identification of epidermal cells on reflectance confocal microscopy (RCM) images is important in the study of epidermal architecture and topology of both healthy and diseased skin. However, analysis of these images is currently done manually, making it time-consuming and subject to human error and inter-expert variability in interpretation. It is also hindered by low image quality due to noise and heterogeneity.
Aim: We aimed to design an automated pipeline for the analysis of the epidermal structure from RCM images.
Approach: Two attempts have been made at automatically localizing epidermal cells, called keratinocytes, on RCM images: the first is based on a rotationally symmetric error function mask, and the second on cell morphological features. Here, we propose a dual-task network to automatically identify keratinocytes on RCM images. Each task consists of a cycle generative adversarial network. The first task aims to translate real RCM images into binary images, thus learning the noise and texture model of RCM images, whereas the second task maps Gabor-filtered RCM images into binary images, learning the epidermal structure visible on RCM images. The combination of the two tasks allows one task to constrict the solution space of the other, thus improving overall results. We refine our cell identification by applying the pre-trained StarDist algorithm to detect star-convex shapes, thus closing any incomplete membranes and separating neighboring cells.
Results: The results are evaluated both on simulated data and manually annotated real RCM data. Accuracy is measured using recall and precision metrics, which are summarized as the F1-score.
Conclusions: We demonstrate that the proposed fully unsupervised method successfully identifies keratinocytes on RCM images of the epidermis, with an accuracy on par with experts' cell identification, is not constrained by limited available annotated data, and can be extended to images acquired using various imaging techniques without retraining.
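The precision, recall, and F1-score used to summarize cell-identification accuracy reduce to counts of true-positive, false-positive, and false-negative detections:

```python
def f1_score(tp, fp, fn):
    """Precision, recall, and F1 from matched cell detections.

    tp: detections matching an annotated cell
    fp: detections with no matching annotation
    fn: annotated cells with no matching detection
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For cell detection, a match is typically defined by a maximum center-to-center distance between a detected and an annotated cell; that matching step precedes the counting shown here.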
"DermoGAN: multi-task cycle generative adversarial networks for unsupervised automatic cell identification on in-vivo reflectance confocal microscopy images of the human epidermis." Imane Lboukili, Georgios Stamatas, Xavier Descombes. Journal of Biomedical Optics 29(8):086003. DOI: 10.1117/1.JBO.29.8.086003. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11294601/pdf/
Pub Date : 2024-08-01Epub Date: 2024-08-14DOI: 10.1117/1.JBO.29.8.080801
Lina Hacker, James Joseph, Ledia Lilaj, Srirang Manohar, Aoife M Ivory, Ran Tao, Sarah E Bohndiek
Significance: Photoacoustic imaging (PAI) is an emerging technology that holds high promise in a wide range of clinical applications, but standardized methods for system testing are lacking, impeding objective device performance evaluation, calibration, and inter-device comparisons. To address this shortfall, this tutorial offers readers structured guidance in developing tissue-mimicking phantoms for photoacoustic applications with potential extensions to certain acoustic and optical imaging applications.
Aim: The tutorial review aims to summarize recommendations on phantom development for PAI applications to harmonize efforts in standardization and system calibration in the field.
Approach: The International Photoacoustic Standardization Consortium has conducted a consensus exercise to define recommendations for the development of tissue-mimicking phantoms in PAI.
Results: Recommendations on phantom development are summarized in seven defined steps, expanding from (1) general understanding of the imaging modality, definition of (2) relevant terminology and parameters and (3) phantom purposes, recommendation of (4) basic material properties, (5) material characterization methods, and (6) phantom design to (7) reproducibility efforts.
Conclusions: The tutorial offers a comprehensive framework for the development of tissue-mimicking phantoms in PAI to streamline efforts in system testing and push forward the advancement and translation of the technology.
"Tutorial on phantoms for photoacoustic imaging applications." Lina Hacker, James Joseph, Ledia Lilaj, Srirang Manohar, Aoife M Ivory, Ran Tao, Sarah E Bohndiek. Journal of Biomedical Optics 29(8):080801. DOI: 10.1117/1.JBO.29.8.080801. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11324153/pdf/
Pub Date : 2024-08-01Epub Date: 2024-08-28DOI: 10.1117/1.JBO.29.8.080502
Yazdan Al-Kurdi, Cem Direkoǧlu, Meryem Erbilek, Dizem Arifler
Significance: Azimuth-resolved optical scattering signals obtained from cell nuclei are sensitive to changes in their internal refractive index profile. These two-dimensional signals can therefore offer significant insights into chromatin organization.
Aim: We aim to determine whether two-dimensional scattering signals can be used in an inverse scheme to extract the spatial correlation length ℓc and extent δn of subnuclear refractive index fluctuations to provide quantitative information on chromatin distribution.
Approach: Since an analytical formulation that links azimuth-resolved signals to ℓc and δn is not feasible, we set out to assess the potential of machine learning to predict these parameters via a data-driven approach. We carry out a convolutional neural network (CNN)-based regression analysis on 198 numerically computed signals for nuclear models constructed with ℓc varying in steps of 0.1 μm between 0.4 and 1.0 μm, and δn varying in steps of 0.005 between 0.005 and 0.035. We quantify the performance of our analysis using a five-fold cross-validation technique.
Results: The results show agreement between the true and predicted values for both ℓc and δn, with mean absolute percent errors of 8.5% and 13.5%, respectively. These errors are smaller than the minimum percent increment between successive values for respective parameters characterizing the constructed models and thus signify an extremely good prediction performance over the range of interest.
Conclusions: Our results reveal that CNN-based regression can be a powerful approach for exploiting the information content of two-dimensional optical scattering signals and hence monitoring chromatin organization in a quantitative manner.
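The mean absolute percent error and five-fold cross-validation quoted above are standard procedures; a minimal sketch of both (the fold splitter is a generic shuffled split, not necessarily the authors' exact protocol):

```python
import numpy as np

def mean_absolute_percent_error(y_true, y_pred):
    """MAPE (%), the error metric reported for the ℓc and δn predictions."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_pred - y_true) / y_true))

def five_fold_indices(n, seed=0):
    """Shuffle n sample indices and split them into five disjoint folds;
    each fold serves once as the held-out validation set."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), 5)
```

With 198 signals, each fold holds out roughly 40 samples, and the reported errors would be averaged over the five held-out folds.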
"Convolutional neural network-based regression analysis to predict subnuclear chromatin organization from two-dimensional optical scattering signals." Yazdan Al-Kurdi, Cem Direkoǧlu, Meryem Erbilek, Dizem Arifler. Journal of Biomedical Optics 29(8):080502. DOI: 10.1117/1.JBO.29.8.080502. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11350520/pdf/
Pub Date : 2024-07-01Epub Date: 2024-06-18DOI: 10.1117/1.JBO.29.7.076501
Chaitanya Kolluru, Naomi Joseph, James Seckler, Farzad Fereidouni, Richard Levenson, Andrew Shoffstall, Michael Jenkins, David Wilson
Significance: Information about the spatial organization of fibers within a nerve is crucial to our understanding of nerve anatomy and its response to neuromodulation therapies. A serial block-face microscopy method [three-dimensional microscopy with ultraviolet surface excitation (3D-MUSE)] has been developed to image nerves over extended depths ex vivo. To routinely visualize and track nerve fibers in these datasets, a dedicated and customizable software tool is required.
Aim: Our objective was to develop custom software that includes image processing and visualization methods to perform microscopic tractography along the length of a peripheral nerve sample.
Approach: We modified common computer vision algorithms (optic flow and structure tensor) to track groups of peripheral nerve fibers along the length of the nerve. Interactive streamline visualization and manual editing tools are provided. Optionally, deep learning segmentation of fascicles (fiber bundles) can be applied to constrain the tracts from inadvertently crossing into the epineurium. As an example, we performed tractography on vagus and tibial nerve datasets and assessed accuracy by comparing the resulting nerve tracts with segmentations of fascicles as they split and merge with each other in the nerve sample stack.
Results: We found that a normalized Dice overlap (Dice_norm) metric had a mean value above 0.75 across several millimeters along the nerve. We also found that the tractograms were robust to changes in certain image properties (e.g., downsampling in-plane and out-of-plane), which resulted in only a 2% to 9% change to the mean Dice_norm values. In a vagus nerve sample, tractography allowed us to readily identify that subsets of fibers from four distinct fascicles merge into a single fascicle as we move ∼5 mm along the nerve's length.
Conclusions: Overall, we demonstrated the feasibility of performing automated microscopic tractography on 3D-MUSE datasets of peripheral nerves. The software should be applicable to other imaging approaches. The code is available at https://github.com/ckolluru/NerveTracker.
Title: NerveTracker: a Python-based software toolkit for visualizing and tracking groups of nerve fibers in serial block-face microscopy with ultraviolet surface excitation images. (Journal of Biomedical Optics, 29(7):076501)
Pub Date: 2024-07-01. Epub Date: 2024-07-24. DOI: 10.1117/1.JBO.29.7.076006
Jacob J Watson, Rachel Hecht, Yuankai K Tao
Significance: Handheld optical coherence tomography (HH-OCT) systems enable point-of-care ophthalmic imaging in bedridden, uncooperative, and pediatric patients. Handheld spectrally encoded coherence tomography and reflectometry (HH-SECTR) combines OCT and spectrally encoded reflectometry (SER) to address critical clinical challenges in HH-OCT imaging with real-time en face retinal aiming for OCT volume alignment and volumetric correction of motion artifacts that occur during HH-OCT imaging.
Aim: We aim to enable robust clinical translation of HH-SECTR and improve clinical ergonomics during point-of-care OCT imaging for ophthalmic diagnostics.
Approach: HH-SECTR is redesigned with (1) optimized SER optical imaging for en face retinal aiming and retinal tracking for motion correction, (2) a modular aluminum form factor for sustained alignment and probe stability in longitudinal clinical studies, and (3) a motorized focus adjustment that the photographer can operate with one hand.
Results: We demonstrate an HH-SECTR imaging probe with micron-scale optical and optomechanical stability and use it for in vivo human retinal imaging and volumetric motion correction.
Conclusions: This research will benefit the clinical translation of HH-SECTR for point-of-care ophthalmic diagnostics.
Title: Optimization of handheld spectrally encoded coherence tomography and reflectometry for point-of-care ophthalmic diagnostic imaging. (Journal of Biomedical Optics, 29(7):076006)
Significance: Tissues' biomechanical properties, such as elasticity, are related to tissue health. Optical coherence elastography produces images of tissues based on their elasticity, but its performance is constrained by the laser power used, working distance, and excitation methods.
Aim: We develop a new method to reconstruct elasticity contrast images over a long working distance, using only low-intensity illumination and non-contact acoustic wave excitation.
Approach: We combine single-photon vibrometry and quantum parametric mode sorting (QPMS) to measure the oscillating backscattered signals at a single-photon level and derive the phantoms' relative elasticity.
Results: We test our system on tissue-mimicking phantoms consisting of contrast sections with different agar concentrations and thus different stiffness. Our results show that as the driving acoustic frequency is swept, the phantoms' vibrational responses are mapped onto the photon-counting histograms, from which their mechanical properties, including elasticity, can be derived. Through lateral and longitudinal laser scanning at a fixed frequency, a contrast image based on the samples' elasticity can be reliably reconstructed from photon-level signals.
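The swept-frequency readout can be caricatured in a few lines (a toy model, not the authors' analysis): each scan position yields a response-versus-drive-frequency curve, and under the simplifying assumption that a stiffer region of fixed geometry resonates at a higher frequency, the peak location provides elasticity contrast. The Lorentzian line shapes and all parameter values below are invented for illustration.

```python
import numpy as np

def peak_response_frequency(freqs, response):
    """Drive frequency at which the measured response is largest."""
    return float(freqs[int(np.argmax(response))])

# Toy swept-frequency responses for a soft and a stiff phantom region
freqs = np.linspace(100.0, 2000.0, 400)           # drive frequencies, Hz

def lorentzian(f, f0, gamma):
    return 1.0 / (1.0 + ((f - f0) / gamma) ** 2)  # unit-height resonance

soft_response = lorentzian(freqs, 400.0, 50.0)    # softer: lower resonance
stiff_response = lorentzian(freqs, 900.0, 50.0)   # stiffer: higher resonance
```

Comparing the peak frequency across lateral and longitudinal scan positions would then yield a relative stiffness map, which is the contrast the paper images from photon-counting histograms.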
Conclusions: We demonstrated the reliability of QPMS-based elasticity contrast imaging of agar phantoms at a long working distance under low-intensity illumination. This technique has the potential to produce in-depth images of real biological tissue and provides a new approach for elastography research and applications.
Title: Non-contact elasticity contrast imaging using photon counting. Zipei Zheng, Yong Meng Sua, Shenyu Zhu, Patrick Rehain, Yu-Ping Huang. (Journal of Biomedical Optics, 29(7):076003; Pub Date: 2024-07-01; DOI: 10.1117/1.JBO.29.7.076003)
Pub Date: 2024-07-01. Epub Date: 2024-06-18. DOI: 10.1117/1.JBO.29.7.076001
Behrouz Ebrahimi, David Le, Mansour Abtahi, Albert K Dadzie, Alfa Rossi, Mojtaba Rahimi, Taeyoon Son, Susan Ostmo, J Peter Campbell, R V Paul Chan, Xincheng Yao
Significance: Retinopathy of prematurity (ROP) poses a significant global threat to childhood vision, necessitating effective screening strategies. This study addresses the impact of color channels in fundus imaging on ROP diagnosis, emphasizing the efficacy and safety of utilizing longer wavelengths, such as red or green, for enhanced depth information and improved diagnostic capability.
Aim: This study aims to assess the spectral effectiveness in color fundus photography for the deep learning classification of ROP.
Approach: A convolutional neural network end-to-end classifier was utilized for deep learning classification of normal, stage 1, stage 2, and stage 3 ROP fundus images. The classification performances with individual-color-channel inputs, i.e., red, green, and blue, and multi-color-channel fusion architectures, including early-fusion, intermediate-fusion, and late-fusion, were quantitatively compared.
Results: For individual-color-channel inputs, similar performance was observed for the green channel (88.00% accuracy, 76.00% sensitivity, and 92.00% specificity) and the red channel (87.25% accuracy, 74.50% sensitivity, and 91.50% specificity), both of which substantially outperformed the blue channel (78.25% accuracy, 56.50% sensitivity, and 85.50% specificity). For multi-color-channel fusion options, the early-fusion and intermediate-fusion architectures showed almost the same performance as the green/red single-channel inputs, and both outperformed the late-fusion architecture.
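The input arrangements being compared can be sketched with array shapes alone (a toy illustration; `branch` is a hypothetical stand-in for a per-channel feature extractor, not the study's CNN):

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.random((2, 32, 32, 3))   # N x H x W x RGB fundus patches

# Individual-channel input: keep only one channel (e.g., green)
green_only = batch[..., 1:2]         # shape (2, 32, 32, 1)

# Early fusion: all color channels concatenated at the first layer's input
early_input = batch                  # shape (2, 32, 32, 3)

def branch(x):
    # Placeholder for a per-channel network branch; emits one toy score per image
    return x.mean(axis=(1, 2, 3))

# Late fusion: a separate branch per channel, with scores merged afterwards
late_scores = np.mean(
    [branch(batch[..., c:c + 1]) for c in range(3)], axis=0
)                                    # shape (2,)
```

Intermediate fusion sits between the two extremes, merging per-channel feature maps partway through the network rather than at the input or at the score level.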
Conclusions: This study reveals that classification of ROP stages can be effectively achieved using either the green or the red image alone. This finding enables the exclusion of blue images, which are acknowledged to carry an increased risk of light toxicity.
Title: Assessing spectral effectiveness in color fundus photography for deep learning classification of retinopathy of prematurity. (Journal of Biomedical Optics, 29(7):076001)