Meghdoot Mozumder, Pauliina Hirvi, Ilkka Nissilä, Andreas Hauptmann, Jorge Ripoll, David E. Singh
Diffuse optical tomography (DOT) uses near-infrared light to image spatially varying optical parameters in biological tissues. In functional brain imaging, DOT uses a perturbation model to estimate changes in optical parameters corresponding to changes in measured data due to brain activity. The perturbation model typically uses approximate baseline optical parameters for the different brain compartments, since the actual baseline optical parameters are unknown. We simulated the effects of these approximate baseline optical parameters using parameter variations reported earlier in the literature and brain atlases from four adult subjects. We report the errors in estimated activation contrast, localization, and area when incorrect baseline values were used. Further, we developed a post-processing technique based on deep learning methods that can reduce the effects of inaccurate baseline optical parameters. The method improved imaging of brain activation changes in the presence of such errors.
Title: "Diffuse optical tomography of the brain: effects of inaccurate baseline optical parameters and refinements using learned post-processing" | Biomedical Optics Express (IF 3.4) | DOI: 10.1364/boe.524245 | Published: 2024-06-25
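The perturbation model described above is, at its core, a linearized inverse problem: data changes relate to optical-parameter changes through a Jacobian computed at the assumed baseline. As a hedged sketch (not the authors' code; the Jacobian, dimensions, and regularization below are toy assumptions), a Tikhonov-regularized reconstruction looks like:

```python
import numpy as np

# Hedged sketch of a linearized perturbation reconstruction: measured data
# changes dy relate to optical-parameter changes dmu through a Jacobian J
# computed at the *assumed* baseline parameters, dy ~= J @ dmu.
def reconstruct_perturbation(J, dy, lam=1e-3):
    """Tikhonov-regularized solve: (J^T J + lam I) dmu = J^T dy."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ dy)

rng = np.random.default_rng(0)
J = rng.standard_normal((20, 10))       # toy sensitivity matrix
dmu_true = np.zeros(10)
dmu_true[3] = 0.05                      # localized "activation" change
dy = J @ dmu_true                       # noiseless data change
dmu_est = reconstruct_perturbation(J, dy, lam=1e-6)
print(int(np.argmax(np.abs(dmu_est))))  # index of strongest estimated change
```

Note that when the baseline parameters are wrong, the Jacobian J itself is wrong, which is the error mechanism the paper quantifies.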
Cristina Rodríguez, Daisong Pan, Ryan G. Natan, Manuel A. Mohr, Max Miao, Xiaoke Chen, Trent R. Northen, John P. Vogel, Na Ji
Third-harmonic generation microscopy is a powerful label-free nonlinear imaging technique, providing essential information about structural characteristics of cells and tissues without requiring external labelling agents. In this work, we integrated a recently developed compact adaptive optics module into a third-harmonic generation microscope to measure and correct for optical aberrations in complex tissues. Taking advantage of the high sensitivity of the third-harmonic generation process to material interfaces and thin membranes, along with the 1,300-nm excitation wavelength used here, our adaptive optical third-harmonic generation microscope enabled high-resolution in vivo imaging within highly scattering biological model systems. Examples include imaging of myelinated axons and vascular structures within the mouse spinal cord and deep cortical layers of the mouse brain, along with imaging of key anatomical features in the roots of the model plant Brachypodium distachyon. In all instances, aberration correction led to enhancements in image quality.
Title: "Adaptive optical third-harmonic generation microscopy for in vivo imaging of tissues" | Biomedical Optics Express (IF 3.4) | DOI: 10.1364/boe.527357 | Published: 2024-06-21
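To give a sense of why aberration correction matters for nonlinear microscopy, the Maréchal approximation relates residual wavefront RMS error to focal-intensity loss (Strehl ratio). This is a generic optics illustration, not taken from the paper; the 200 nm and 50 nm RMS values are hypothetical, while 1300 nm matches the excitation wavelength quoted above:

```python
import numpy as np

# Generic optics illustration (not from the paper): the Marechal
# approximation links residual wavefront RMS error sigma to the Strehl
# ratio S = exp(-(2*pi*sigma/lambda)^2).
def marechal_strehl(wavefront_rms_nm, wavelength_nm=1300.0):
    phase_rms = 2.0 * np.pi * wavefront_rms_nm / wavelength_nm
    return np.exp(-phase_rms ** 2)

# Correcting from a hypothetical 200 nm RMS down to 50 nm RMS at 1300 nm:
print(round(marechal_strehl(200.0), 2), round(marechal_strehl(50.0), 2))  # 0.39 0.94
```

Because the THG signal scales roughly with the cube of focal intensity, the signal gain from correction is even larger than the Strehl improvement alone suggests.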
Bhaskara Rao Chintada, Sebastián Ruiz-Lopera, René Restrepo, Brett E. Bouma, Martin Villiger, Néstor Uribe-Patarroyo
We present a deep learning framework for volumetric speckle reduction in optical coherence tomography (OCT) based on a conditional generative adversarial network (cGAN) that leverages the volumetric nature of OCT data. To exploit this volumetric structure, our network takes partial OCT volumes as input, producing artifact-free despeckled volumes that exhibit excellent speckle reduction and resolution preservation in all three dimensions. Furthermore, we address the ongoing challenge of generating ground truth data for supervised speckle-suppression deep learning frameworks by using volumetric non-local means despeckling (TNode) to generate training data. We show that, while TNode processing is computationally demanding, it serves as a convenient, accessible gold-standard source of training data; our cGAN replicates its efficient speckle suppression, preserving tissue structures with dimensions approaching the system resolution, while being two orders of magnitude faster than TNode. We demonstrate fast, effective, and high-quality despeckling by the proposed network in tissue types that were not part of the training data. This was achieved with training data composed of just three OCT volumes and demonstrated on three different OCT systems. The open-source nature of our work facilitates re-training and deployment in any OCT system with an all-software implementation, working around the challenge of generating high-quality, speckle-free training data.
Title: "Probabilistic volumetric speckle suppression in OCT using deep learning" | Biomedical Optics Express (IF 3.4) | DOI: 10.1364/boe.523716 | Published: 2024-06-19
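A simple way to quantify despeckling performance, independent of the network used, is the speckle contrast C = σ/μ over a homogeneous region: fully developed speckle has C close to 1, and incoherent averaging of N independent patterns reduces it toward 1/√N. A minimal sketch of this generic metric (not the paper's evaluation code):

```python
import numpy as np

# Generic evaluation metric, not the paper's code: speckle contrast
# C = std/mean over a homogeneous region. Fully developed speckle
# (exponentially distributed intensity) has C close to 1; averaging N
# independent patterns reduces C toward 1/sqrt(N).
def speckle_contrast(region):
    region = np.asarray(region, dtype=float)
    return region.std() / region.mean()

rng = np.random.default_rng(1)
raw = rng.exponential(1.0, size=(16, 100_000))  # 16 independent patterns
c_raw = speckle_contrast(raw[0])                # close to 1
c_avg = speckle_contrast(raw.mean(axis=0))      # close to 1/sqrt(16) = 0.25
print(round(c_raw, 2), round(c_avg, 2))
```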
Jinwei Tian, Chao Li, Zhifeng Qin, Yanwen Zhang, Qinglu Xu, Yuqi Zheng, Xiangyu Meng, Peng Zhao, Kaiwen Li, Suhong Zhao, Shan Zhong, Xinyu Hou, Xiang Peng, Yuxin Yang, Yu Liu, Songzhi Wu, Yidan Wang, Xiangwen Xi, Yanan Tian, Wenbo Qu, Na Sun, Fan Wang, Yan Wang, Jie Xiong, Xiaofang Ban, Taishi Yonetsu, Rocco Vergallo, Bo Zhang, Bo Yu, Zhao Wang
Coronary artery calcification (CAC) is a marker of atherosclerosis and is thought to be associated with worse clinical outcomes. However, evidence from large-scale, high-resolution imaging data is lacking. We proposed a novel deep learning method, trained using efficiently generated sparse labels, that can automatically identify and quantify CAC in massive intravascular OCT datasets. A total of 1,106,291 OCT images from 1,048 patients were collected and used to train and evaluate the method. The Dice similarity coefficient for CAC segmentation and the accuracy for CAC classification were 0.693 and 0.932, respectively, close to human-level performance. Applying the method to 1,259 ST-segment elevation myocardial infarction patients imaged with OCT, we found that patients with more extensive and more severe calcification in the culprit vessels were significantly more likely to have major adverse cardiovascular and cerebrovascular events (MACCE) (p < 0.05), while CAC in non-culprit vessels did not differ significantly between MACCE and non-MACCE groups.
Title: "Coronary artery calcification and cardiovascular outcome as assessed by intravascular OCT and artificial intelligence" | Biomedical Optics Express (IF 3.4) | DOI: 10.1364/boe.524946 | Published: 2024-06-17
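The Dice similarity coefficient reported above (0.693 for CAC segmentation) measures overlap between a predicted and a ground-truth mask: DSC = 2|A∩B|/(|A|+|B|). A minimal sketch with toy masks (the paper's masks come from intravascular OCT):

```python
import numpy as np

# Dice similarity coefficient between two binary masks:
# DSC = 2 * |A intersect B| / (|A| + |B|). Toy masks for illustration.
def dice(pred, truth):
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True  # 16 pixels
pred = np.zeros((8, 8), bool);  pred[3:7, 3:7] = True   # 16 pixels, shifted
print(dice(pred, truth))  # overlap 3x3 = 9, so 2*9/32 = 0.5625
```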
Pub Date: 2024-06-17 | eCollection Date: 2024-07-01 | DOI: 10.1364/BOE.530483
Oumeng Zhang, Nic Dahlquist, Zachary Leete, Michael Xu, Dean Schneider, Changhuei Yang
Imaging three-dimensional microbial development and behavior over extended periods is crucial for advancing microbiological studies. Here, we introduce an upgraded ePetri dish system specifically designed for extended microbial culturing and 3D imaging, addressing the limitations of existing methods. Our approach includes a sealed growth chamber to enable long-term culturing and a multi-step reconstruction algorithm that integrates 3D deconvolution, image filtering, and ridge and skeleton detection for detailed visualization of the hyphal network. The system effectively monitored the development of Aspergillus brasiliensis hyphae over a seven-day period, demonstrating the growth medium's stability within the chamber. The system's 3D imaging capability was validated in a volume of 5.5 mm × 4 mm × 0.5 mm, revealing a radial growth pattern of fungal hyphae. Additionally, we show that the system can identify potential filter failures that are undetectable with 2D imaging. With these capabilities, the upgraded ePetri dish represents a significant advancement in long-term 3D microbial imaging, promising new insights into microbial development and behavior across various microbiological research areas.
Title: "Long-term imaging of three-dimensional hyphal development using the ePetri dish" | Biomedical Optics Express (IF 2.9) | DOI: 10.1364/BOE.530483 | Published: 2024-06-17 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11249690/pdf/
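The deconvolution step of a multi-step reconstruction pipeline like the one above can be illustrated with a minimal Richardson-Lucy iteration. This generic FFT-based 1D sketch uses a hypothetical Gaussian PSF, not the system's measured PSF, and real pipelines operate on 3D volumes:

```python
import numpy as np

# Minimal 1D Richardson-Lucy deconvolution (FFT-based, circular
# boundaries). Illustration only: the PSF here is a hypothetical
# Gaussian, and the est update uses the conjugate OTF, i.e. correlation
# with the PSF, as the RL algorithm prescribes.
def richardson_lucy(observed, psf, n_iter=50):
    psf = psf / psf.sum()
    otf = np.fft.rfft(psf, n=observed.size)
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        conv = np.fft.irfft(np.fft.rfft(est) * otf, n=est.size)
        ratio = observed / np.maximum(conv, 1e-12)
        est = est * np.fft.irfft(np.fft.rfft(ratio) * np.conj(otf), n=est.size)
    return est

n = 128
spike = np.zeros(n); spike[50] = 1.0                    # point source
d = np.minimum(np.arange(n), n - np.arange(n))          # circular distance
psf = np.exp(-d**2 / (2 * 3.0**2))                      # sigma = 3 px
observed = np.fft.irfft(np.fft.rfft(spike) * np.fft.rfft(psf / psf.sum()), n=n)
est = richardson_lucy(observed, psf)
# The deconvolved estimate re-concentrates energy at the source location.
```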
We introduce a novel method to design and implement a tunable dynamical tissue phantom for laser speckle-based in-vivo blood flow imaging. This approach relies on stochastic differential equations (SDEs) to control a piezoelectric actuator which, when illuminated by a laser source, generates speckles with a pre-defined probability density function and auto-correlation. Validation experiments show that the phantom can generate dynamic speckles that closely replicate both surface and deep tissue blood flow over a reasonably wide range and with good accuracy.
Title: "Tunable dynamical tissue phantom for laser speckle imaging" | Authors: Soumyajit Sarkar, Murali K, Hari M. Varma | Biomedical Optics Express (IF 3.4) | DOI: 10.1364/boe.528286 | Published: 2024-06-17
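The SDE-driven actuation idea can be illustrated with an Ornstein-Uhlenbeck process, a common choice for generating a Gaussian drive signal whose autocorrelation decays exponentially with a directly tunable time constant. The paper's specific SDE and actuator interface are not given, so this is only a sketch of the principle:

```python
import numpy as np

# Ornstein-Uhlenbeck process dX = -(X/tau) dt + sigma dW, simulated with
# Euler-Maruyama. Its stationary autocorrelation decays as exp(-|t|/tau),
# so tau sets the correlation time of the simulated "flow" drive.
def ornstein_uhlenbeck(n, dt, tau, sigma, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i-1] - (x[i-1] / tau) * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

dt, tau = 1e-3, 5e-2
drive = ornstein_uhlenbeck(n=50_000, dt=dt, tau=tau, sigma=1.0)
# Autocorrelation at one correlation time (lag tau/dt = 50 samples)
# should sit near exp(-1), about 0.37.
lag = int(tau / dt)
r = np.corrcoef(drive[:-lag], drive[lag:])[0, 1]
print(round(r, 2))
```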
Yanda Cheng, Wenhan Zheng, Robert Bing, Huijuan Zhang, Chuqin Huang, Peizhou Huang, Leslie Ying, Jun Xia
In this study, we implemented an unsupervised deep learning method, the Noise2Noise network, to improve linear-array-based photoacoustic (PA) imaging. Unlike supervised learning, which requires a noise-free ground truth, the Noise2Noise network can learn noise patterns from a pair of noisy images. This is particularly important for in vivo PA imaging, where the ground truth is not available. Here, we developed a method to generate noise pairs from a single set of PA images and verified our approach through simulation and experimental studies. Our results reveal that the method can effectively remove noise, improve the signal-to-noise ratio, and enhance vascular structures at greater depths. The denoised images show clear and detailed vascular structure at different depths, providing valuable insights for preclinical research and potential clinical applications.
Title: "Unsupervised denoising of photoacoustic images based on the Noise2Noise network" | Biomedical Optics Express (IF 3.4) | DOI: 10.1364/boe.529253 | Published: 2024-06-17
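Noise2Noise training needs two independently noisy views of the same scene. One generic way to obtain them from repeated acquisitions is to average disjoint subsets of frames; the authors' scheme for deriving pairs from a single set of PA images may differ, so this sketch only illustrates the pairing principle:

```python
import numpy as np

# Illustration of the Noise2Noise pairing principle only. Averaging two
# disjoint halves of repeated acquisitions yields two images with the
# same signal content but independent noise realizations.
def make_noise_pair(frames, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(frames))
    half = len(frames) // 2
    return frames[idx[:half]].mean(axis=0), frames[idx[half:2*half]].mean(axis=0)

rng = np.random.default_rng(42)
clean = np.sin(np.linspace(0, 3 * np.pi, 128))        # stand-in "vessel" signal
frames = clean + 0.5 * rng.standard_normal((8, 128))  # 8 noisy acquisitions
a, b = make_noise_pair(frames)
# a and b share the signal, but (a - clean) and (b - clean) are
# uncorrelated noise realizations, which is exactly what Noise2Noise needs.
```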
Pub Date: 2024-06-13 | eCollection Date: 2024-07-01 | DOI: 10.1364/BOE.527248
Baptiste Moeglen-Paget, Jayakumar Perumal, Georges Humbert, Malini Olivo, U S Dinish
Biosensing plays a pivotal role in various scientific domains, offering significant contributions to medical diagnostics, environmental monitoring, and biotechnology. Fluorescence biosensing relies on the fluorescence emission from labelled biomolecules to enable sensitive and selective identification and quantification of specific biological targets in various samples. Photonic crystal fibers (PCFs) have led to the development of optofluidic fibers that enable efficient light-liquid interaction within small liquid volumes. Herein, we present a user-friendly optofluidic-fiber platform with simple hardware requirements for sensitive and reliable fluorescence biosensing with high measurement repeatability. We demonstrate a sensitivity improvement of the fluorescence emission of up to 17 times compared to standard cuvette measurements, with a limit of detection for the Cy5 fluorophore as low as 100 pM. The improvement in measurement repeatability is exploited to detect haptoglobin, a biomarker relevant to the diagnosis of several diseases, using commercially available Cy5-labelled antibodies.
Title: "Optofluidic photonic crystal fiber platform for sensitive and reliable fluorescence based biosensing" | Biomedical Optics Express (IF 2.9) | DOI: 10.1364/BOE.527248 | Published: 2024-06-13 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11249680/pdf/
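A detection limit such as the 100 pM figure above is commonly estimated from a linear calibration as LOD = 3σ_blank/slope. This is the generic definition with hypothetical numbers, not the paper's calibration data:

```python
import numpy as np

# Generic LOD definition: 3 * sigma_blank / slope of a linear calibration.
# All concentrations and counts below are hypothetical.
def limit_of_detection(conc, signal, blank_sd):
    slope, _ = np.polyfit(conc, signal, 1)
    return 3.0 * blank_sd / slope

conc = np.array([0.0, 100.0, 200.0, 400.0])  # pM (hypothetical)
signal = 50.0 + 20.0 * conc                  # 20 counts per pM, 50-count offset
print(round(limit_of_detection(conc, signal, blank_sd=500.0), 2))  # 3*500/20 = 75.0 pM
```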
Pub Date: 2024-06-13 | eCollection Date: 2024-07-01 | DOI: 10.1364/BOE.520171
Hiroki Cook, Anna Crisford, Konstantinos Bourdakos, Douglas Dunlop, Richard O C Oreffo, Sumeet Mahajan
Osteoarthritis (OA) is the most common degenerative joint disease, presenting as wearing down of articular cartilage and resulting in pain and limited mobility for 1 in 10 adults in the UK [Osteoarthr. Cartil. 28(6), 792 (2020), doi:10.1016/j.joca.2020.03.004]. There is an unmet need for patient-friendly paradigms for clinical assessment that do not use ionizing radiation (CT), exogenous contrast-enhancing dyes (MRI), or biopsy. Hence, techniques that use non-destructive near- and shortwave-infrared light (NIR, SWIR) may be ideal for providing label-free, deep tissue interrogation. This study demonstrates multimodal "spectromics", a low-level-abstraction data fusion of non-destructive NIR Raman scattering spectroscopy and NIR-SWIR absorption spectroscopy, providing an enhanced, interpretable "fingerprint" for diagnosis of OA in human cartilage. This is proposed as a method-level innovation applicable to arthroscopic or endoscopic (minimally invasive) as well as potential exoscopic (non-invasive) optical approaches. Samples were excised from femoral heads post hip arthroplasty from OA patients (n = 13) and age-matched control (osteoporosis) patients (n = 14). Under multivariate statistical analysis and supervised machine learning, tissue was classified to high precision: 100% segregation of tissue classes (using 10 principal components), and classification accuracies of 95% (control) and 80% (OA) using the combined vibrational data. There was a marked performance improvement (5- to 6-fold for multivariate analysis) using the spectromics fingerprint compared to results obtained from Raman or NIR-SWIR data alone. Furthermore, clinically relevant tissue components were identified through discriminatory spectral features (spectromics biomarkers), allowing interpretable feedback from the enhanced fingerprint. In summary, spectromics provides comprehensive information for early OA detection and disease stratification, imperative for effective intervention in treating this degenerative disease in an aging demographic. This novel and elegant approach to data fusion is compatible with various NIR-SWIR optical devices that allow deep, non-destructive penetration.
Title: "Holistic vibrational spectromics assessment of human cartilage for osteoarthritis diagnosis" | Biomedical Optics Express (IF 2.9) | DOI: 10.1364/BOE.520171 | Published: 2024-06-13 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11249685/pdf/
Optical coherence elastography (OCE) is a functional extension of optical coherence tomography (OCT). It offers high-resolution elasticity assessment with nanoscale tissue displacement sensitivity and high quantification accuracy, promising to enhance diagnostic precision. However, in vivo endoscopic OCE imaging has not been demonstrated yet, which needs to overcome key challenges related to probe miniaturization, high excitation efficiency and speed. This study presents a novel endoscopic OCE system, achieving the first endoscopic OCE imaging in vivo. The system features the smallest integrated OCE probe with an outer diameter of only 0.9 mm (with a 1.2-mm protective tube during imaging). Utilizing a single 38-MHz high-frequency ultrasound transducer, the system induced rapid deformation in tissues with enhanced excitation efficiency. In phantom studies, the OCE quantification results match well with compression testing results, showing the system's high accuracy. The in vivo imaging of the rat vagina demonstrated the system's capability to detect changes in tissue elasticity continually and distinguish between normal tissue, hematomas, and tissue with increased collagen fibers precisely. This research narrows the gap for the clinical implementation of the endoscopic OCE system, offering the potential for the early diagnosis of intraluminal diseases.
{"title":"<i>In vivo</i> endoscopic optical coherence elastography based on a miniature probe.","authors":"Haoxing Xu, Qingrong Xia, Chengyou Shu, Jiale Lan, Xiatian Wang, Wen Gao, Shengmiao Lv, Riqiang Lin, Zhihua Xie, Xiaohui Xiong, Fei Li, Jinke Zhang, Xiaojing Gong","doi":"10.1364/BOE.521154","DOIUrl":"10.1364/BOE.521154","url":null,"abstract":"<p><p>Optical coherence elastography (OCE) is a functional extension of optical coherence tomography (OCT). It offers high-resolution elasticity assessment with nanoscale tissue displacement sensitivity and high quantification accuracy, promising to enhance diagnostic precision. However, <i>in vivo</i> endoscopic OCE imaging has not been demonstrated yet, which needs to overcome key challenges related to probe miniaturization, high excitation efficiency and speed. This study presents a novel endoscopic OCE system, achieving the first endoscopic OCE imaging <i>in vivo</i>. The system features the smallest integrated OCE probe with an outer diameter of only 0.9 mm (with a 1.2-mm protective tube during imaging). Utilizing a single 38-MHz high-frequency ultrasound transducer, the system induced rapid deformation in tissues with enhanced excitation efficiency. In phantom studies, the OCE quantification results match well with compression testing results, showing the system's high accuracy. The <i>in vivo</i> imaging of the rat vagina demonstrated the system's capability to detect changes in tissue elasticity continually and distinguish between normal tissue, hematomas, and tissue with increased collagen fibers precisely. 
This research narrows the gap for the clinical implementation of the endoscopic OCE system, offering the potential for the early diagnosis of intraluminal diseases.</p>","PeriodicalId":8969,"journal":{"name":"Biomedical optics express","volume":null,"pages":null},"PeriodicalIF":2.9,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11249679/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141632541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}