The change in ocular wavefront aberrations with visual angle determines the isoplanatic patch, defined as the largest field of view over which diffraction-limited retinal imaging can be achieved. Here, we study how the isoplanatic patch at the foveal center varies across 32 schematic eyes, each individualized with optical biometry estimates of corneal and crystalline lens surface topography, assuming a homogeneous refractive index for the crystalline lens. The foveal isoplanatic patches were calculated using real ray tracing through 2, 4, 6 and 8 mm pupil diameters for wavelengths of 400-1200 nm, simulating five adaptive optics (AO) strategies. Three of these strategies, used in flood illumination, point-scanning, and line-scanning ophthalmoscopes, apply the same wavefront correction across the entire field of view, resulting in almost identical isoplanatic patches. Two time-division multiplexing (TDM) strategies are proposed to increase the isoplanatic patch of AO scanning ophthalmoscopes through field-varying wavefront correction. Results revealed substantial variation in isoplanatic patch size across eyes (40-500%), indicating that the field of view in AO ophthalmoscopes should be adjusted for each eye. The median isoplanatic patch size decreases with increasing pupil diameter, coarsely following a power law. No statistically significant correlations were found between isoplanatic patch size and axial length. The foveal isoplanatic patch increases linearly with wavelength, primarily due to its wavelength-dependent definition (wavefront root-mean-squared, RMS <λ/14), rather than aberration chromatism. Additionally, ray tracing reveals that in strongly ametropic eyes, induced aberrations can result in wavefront RMS errors as large as λ/3 for an 8-mm pupil, with implications for wavefront sensing, open-loop ophthalmic AO, spectacle prescription and refractive surgery.
Title: Biometry study of foveal isoplanatic patch variation for adaptive optics retinal imaging.
Authors: Xiaojing Huang, Aubrey Hargrave, Julie Bentley, Alfredo Dubra
Biomedical Optics Express 15(10): 5674-5690. Published 2024-09-04. DOI: 10.1364/BOE.536645. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11482173/pdf/
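The λ/14 diffraction-limited threshold that defines the isoplanatic patch above can be sketched numerically. A minimal illustration, assuming Noll-normalized Zernike coefficients (hypothetical values, in micrometres) so that the wavefront variance is the sum of the squared coefficients:

```python
import numpy as np

def wavefront_rms(zernike_coeffs_um):
    """RMS of a wavefront from its Noll-normalized Zernike coefficients.

    With Noll normalization the wavefront variance is the sum of the
    squared coefficients (piston excluded), so the RMS is their
    root-sum-square.
    """
    return np.sqrt(np.sum(np.square(zernike_coeffs_um)))

def is_diffraction_limited(zernike_coeffs_um, wavelength_um):
    # Marechal criterion: residual wavefront RMS below lambda/14.
    return wavefront_rms(zernike_coeffs_um) < wavelength_um / 14.0

# Hypothetical residual aberration after AO correction (um).
residual = [0.02, -0.02, 0.015]
print(is_diffraction_limited(residual, 0.40))  # False: criterion is stricter at 400 nm
print(is_diffraction_limited(residual, 0.85))  # True: looser at 850 nm
```

The same residual passes at long wavelengths and fails at short ones, which is the wavelength dependence of the patch size the abstract attributes to the definition itself rather than to aberration chromatism.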
Pub Date: 2024-09-04. eCollection Date: 2024-10-01. DOI: 10.1364/BOE.528568
Zofia Bratasz, Olivier Martinache, Julia Sverdlin, Damien Gatinel, Michael Atlan
The process of obtaining images of capillary vessels in the fundus of the human eye using Doppler holography is hindered by ocular aberrations. To improve the accuracy of these images, it is advantageous to apply an adaptive aberration correction technique. This study focuses on the numerical Shack-Hartmann approach, which uses sub-pupil correlation as the wavefront sensing method. Applying this technique to Doppler holography raises unique challenges due to the properties of holographic detection. We present a detailed comparative analysis of the regularization technique against direct gradient integration for aberration estimation, considering two different reference images for measuring image shifts across subapertures. The comparison reveals that direct gradient integration is more effective at correcting asymmetrical aberrations.
Title: Aberration compensation in Doppler holography of the human eye fundus by subaperture signal correlation.
Biomedical Optics Express 15(10): 5660-5673. DOI: 10.1364/BOE.528568. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11482168/pdf/
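The sub-pupil correlation step of a numerical Shack-Hartmann sensor can be illustrated with a toy example: the shift of each subaperture image relative to a reference, read off the cross-correlation peak, is proportional to the local wavefront slope. A minimal sketch with synthetic data and integer-pixel shifts only (the paper's pipeline is considerably more elaborate):

```python
import numpy as np

def subaperture_shift(ref, img):
    # Shift of img relative to ref via the FFT cross-correlation peak,
    # as in a numerical Shack-Hartmann sensor: each sub-pupil image is
    # correlated against a reference to read out the local wavefront slope.
    corr = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape),
                    dtype=float)
    dims = np.array(corr.shape)
    # Wrap circular indices into the signed range [-N/2, N/2).
    peak[peak > dims // 2] -= dims[peak > dims // 2]
    return peak  # (dy, dx) in pixels

rng = np.random.default_rng(0)
ref = rng.standard_normal((32, 32))        # synthetic subaperture reference
img = np.roll(ref, (3, -5), axis=(0, 1))   # same scene, shifted by (3, -5)
print(subaperture_shift(ref, img))         # [ 3. -5.]
```

Integrating these per-subaperture slopes (directly, or via a regularized reconstructor) yields the wavefront estimate; the abstract's comparison is precisely between those two integration routes.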
Pub Date: 2024-09-04. eCollection Date: 2024-10-01. DOI: 10.1364/BOE.527313
Jean Commère, Marie Glanc, Laurent Bourdieu, Raphaël Galicher, Éric Gendron, Gérard Rousset
Optical microscopy techniques have become essential tools for studying normal and pathological biological systems. In many situations, however, image quality deteriorates rapidly across the field of view due to optical aberrations and scattering induced by thick tissues. To compensate for these aberrations and restore the microscope's image quality, adaptive optics (AO) techniques have been proposed over the past 15 years. A key parameter for AO implementation is the limited isoplanatic dimension over which image quality remains uniform. Here, we propose a method for measuring this dimension and deducing the anisoplanatism and intensity transmission of the samples. We apply this approach to fixed slices of mouse cortex as a function of their thickness and find a typical width at half maximum of 20 µm for the isoplanatic spot, independent of sample thickness.
Title: Experimental characterization of an isoplanatic patch in mouse cortex using adaptive optics.
Biomedical Optics Express 15(10): 5645-5659. DOI: 10.1364/BOE.527313. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11482162/pdf/
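The half-maximum width used to characterize the isoplanatic spot can be estimated from any image-quality metric sampled across the field. A toy sketch with a hypothetical Gaussian quality falloff, whose width is chosen here to reproduce a ~20 µm spot (not the paper's data):

```python
import numpy as np

def half_max_width(x, q):
    # Width of the region where the correction quality q(x) stays above
    # half its peak -- a simple estimate of the isoplanatic spot size.
    half = q.max() / 2.0
    above = x[q >= half]
    return above.max() - above.min()

x = np.linspace(-40, 40, 801)       # field position (um), hypothetical
q = np.exp(-x**2 / (2 * 8.5**2))    # Gaussian falloff, sigma = 8.5 um
print(half_max_width(x, q))         # ~20 um full width at half maximum
```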
Pub Date: 2024-09-04. eCollection Date: 2024-10-01. DOI: 10.1364/BOE.531501
W Joseph O'Brien, Laura Carlton, Johnathan Muhvich, Sreekanth Kura, Antonio Ortega-Martinez, Jay Dubb, Sudan Duwadi, Eric Hazen, Meryem A Yücel, Alexander von Lühmann, David A Boas, Bernhard B Zimmermann
Functional near-infrared spectroscopy (fNIRS) technology has been steadily advancing since the first measurements of human brain activity over 30 years ago. Initially, efforts focused on increasing the channel count of fNIRS systems, and then on moving from sparse to high-density arrays of sources and detectors, enhancing spatial resolution through overlapping measurements. Over the last ten years, there have been rapid developments in wearable fNIRS systems that place the light sources and detectors on the head, as opposed to the original approach of using fiber optics to deliver the light between the hardware and the head. The miniaturization of the electronics and increased computational power continue to permit impressive advances in wearable fNIRS systems. Here we detail our design for a wearable fNIRS system that covers the whole head of an adult human with a high-density array of 56 sources and up to 192 detectors. We characterize the system, showing that its performance is among the best of published systems, and provide demonstrative images of brain activation during a ball-squeezing task. We have released the hardware design to the public in the hope that the community will build upon our foundational work and drive further advancements.
Title: ninjaNIRS: an open hardware solution for wearable whole-head high-density functional near-infrared spectroscopy.
Biomedical Optics Express 15(10): 5625-5644. DOI: 10.1364/BOE.531501. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11482177/pdf/
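The channel structure of a high-density array like this follows directly from the source and detector layout: a usable channel is any source-detector pair whose separation falls inside a sensitivity window. A sketch with hypothetical positions and an assumed 10-45 mm window (not the ninjaNIRS geometry):

```python
import numpy as np

# Hypothetical 2D scalp-patch coordinates in mm; a real cap layout is
# curved and engineered, but the channel-formation rule is the same.
rng = np.random.default_rng(3)
sources = rng.uniform(0, 200, size=(56, 2))     # 56 sources, as in the paper
detectors = rng.uniform(0, 200, size=(192, 2))  # up to 192 detectors

# Pairwise source-detector separations via broadcasting: (56, 192).
sep = np.linalg.norm(sources[:, None, :] - detectors[None, :, :], axis=2)

# Keep pairs inside an assumed sensitivity window of 10-45 mm; the
# overlap of these channels is what improves spatial resolution.
channels = np.argwhere((sep >= 10.0) & (sep <= 45.0))
print(len(channels), "usable source-detector pairs")
```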
Pub Date: 2024-09-03. eCollection Date: 2024-10-01. DOI: 10.1364/BOE.533072
Zizheng Wang, Xiao Xiao, Ziwen Zhou, Yunyin Chen, Tianqi Xia, Xiangyi Sheng, Yiping Han, Wei Gong, Ke Si
Many clearing methods achieve high transparency by removing lipid components from tissues, which damages microstructure and limits their application in lipid research. For methods that preserve lipids, it is difficult to balance transparency, fluorescence preservation, and clearing speed. In this study, we propose FLUID, a rapid, water-based clearing method that is fluorescence-friendly and preserves lipid components. FLUID preserves endogenous fluorescence for over 60 days, introduces negligible tissue distortion, and is compatible with various fluorescent labeling and tissue staining methods. High-quality imaging of human brain tissue and compatibility with pathological staining demonstrate the potential of our method for three-dimensional (3D) biopsy and clinical pathological diagnosis.
Title: FLUID: a fluorescence-friendly lipid-compatible ultrafast clearing method.
Biomedical Optics Express 15(10): 5609-5624. DOI: 10.1364/BOE.533072. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11482171/pdf/
Pub Date: 2024-09-03. eCollection Date: 2024-10-01. DOI: 10.1364/BOE.525928
Mohammad Rashidi, Georgy Kalenkov, Daniel J Green, Robert A McLaughlin
Skin microvasculature is essential for cardiovascular health and thermoregulation in humans, yet its imaging and analysis pose significant challenges. Established methods, such as speckle decorrelation applied to optical coherence tomography (OCT) B-scans for OCT angiography (OCTA), often require a large number of B-scans, leading to long acquisition times that are prone to motion artifacts. Here, we propose a novel approach that integrates a deep learning algorithm into our OCTA processing. By combining a convolutional neural network with a squeeze-and-excitation block, we address these challenges in microvascular imaging: our method enhances accuracy and reduces measurement time by efficiently utilizing local information, and the squeeze-and-excitation block further improves stability and accuracy by dynamically recalibrating features, highlighting the advantages of deep learning in this domain.
Title: Enhanced microvascular imaging through deep learning-driven OCTA reconstruction with squeeze-and-excitation block integration.
Biomedical Optics Express 15(10): 5592-5608. DOI: 10.1364/BOE.525928. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11482165/pdf/
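A squeeze-and-excitation block of the kind described above reduces to a few operations: global average pooling (squeeze), a two-layer bottleneck with ReLU then sigmoid (excitation), and per-channel rescaling. A NumPy sketch with hypothetical weights, not the authors' trained network:

```python
import numpy as np

def squeeze_excitation(feat, w1, w2):
    """Squeeze-and-excitation recalibration of a (C, H, W) feature map.

    Squeeze: global average pooling to one descriptor per channel.
    Excitation: two small fully connected layers (ReLU then sigmoid)
    produce per-channel weights in (0, 1) that rescale the feature map.
    w1 has shape (C/r, C) and w2 has shape (C, C/r) for reduction ratio r.
    """
    z = feat.mean(axis=(1, 2))            # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)           # FC + ReLU: (C/r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))   # FC + sigmoid: (C,)
    return feat * s[:, None, None]        # recalibrate channels

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))    # toy feature map, C = 8
w1 = rng.standard_normal((2, 8)) * 0.1   # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
out = squeeze_excitation(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the gate is a sigmoid, every channel is attenuated rather than amplified, which is the "dynamic recalibration" the abstract refers to.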
Because conventional low-light cameras used in single-molecule localization microscopy (SMLM) cannot distinguish colors, a dedicated optical system and/or a complicated image analysis procedure is often necessary to realize multi-color SMLM. Recently, researchers explored the potential of a new kind of low-light camera, called a colorimetry camera, as an alternative detector for multi-color SMLM, and achieved two-color SMLM with a simple optical system and cross-talk comparable to the best reported values. However, extracting images from all color channels is a necessary but lengthy process in colorimetry camera-based SMLM (called CC-STORM), because it requires sequentially traversing a massive number of pixels. By taking advantage of the parallelism and pipelining characteristics of FPGAs, in this paper we report an updated multi-color SMLM method called HCC-STORM, which integrates the data processing tasks of CC-STORM into a home-built CPU-GPU-FPGA heterogeneous computing platform. We show that, without sacrificing the original performance of CC-STORM, the execution speed of HCC-STORM is approximately three times higher: the total data processing time for each raw image with 1024 × 1024 pixels is 26.9 ms. This improvement enables real-time data processing for a field of view of 1024 × 1024 pixels and an exposure time of 30 ms (a typical exposure time in CC-STORM). Furthermore, to reduce the difficulty of deploying algorithms on the heterogeneous computing platform, we also report the necessary interfaces for four commonly used high-level programming languages: C/C++, Python, Java, and Matlab. This study not only advances the maturation of CC-STORM but also presents a powerful computing platform for tasks with a heavy computational load.

Title: Real-time data processing in colorimetry camera-based single-molecule localization microscopy via CPU-GPU-FPGA heterogeneous computation.
Authors: Jiaxun Lin, Kun Wang, Zhen-Li Huang
Biomedical Optics Express, pp. 5560-5573. Published 2024-08-28. DOI: 10.1364/boe.534941
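The channel-extraction step that dominates CC-STORM processing can be illustrated for an assumed 2 × 2 Bayer-like mosaic (the actual colorimetry-camera layout may differ): strided slicing exposes the per-pixel independence that an FPGA pipeline exploits in hardware, in place of a sequential per-pixel traversal.

```python
import numpy as np

def split_channels(raw):
    # Split a raw mosaic frame into per-channel sub-images by strided
    # slicing, assuming a 2x2 layout of R G / G B. Every output pixel
    # depends on exactly one input pixel, so the work is fully parallel.
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }

# A synthetic raw frame at the paper's 1024 x 1024 field of view.
frame = np.arange(1024 * 1024, dtype=np.uint16).reshape(1024, 1024)
channels = split_channels(frame)
print(channels["R"].shape)  # (512, 512)
```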
Ruizhi Zuo, Shuwen Wei, Yaning Wang, Kristina Irsch, Jin U Kang
Optical coherence tomography (OCT) allows high-resolution volumetric imaging of biological tissues in vivo. However, 3D image acquisition often suffers from motion artifacts due to slow frame rates and the involuntary and physiological movements of living tissue. To address these issues, we implement a real-time 4D-OCT system capable of reconstructing near-distortion-free volumetric images using a deep learning-based reconstruction algorithm. The system first collects undersampled volumetric images at high speed and then upsamples them in real time with a convolutional neural network (CNN) that generates high-frequency features. We compare and analyze both dual-2D- and 3D-UNet-based networks for OCT 3D high-resolution image reconstruction, refine the network architecture by incorporating multi-level information to accelerate convergence and improve accuracy, and optimize the network by using 16-bit floating-point precision for the network parameters to conserve GPU memory and enhance efficiency. The results show that the refined and optimized 3D network retrieves tissue structure more precisely and enables real-time 4D-OCT imaging at a rate greater than 10 Hz with a root mean square error (RMSE) of ∼0.03.
Title: High-resolution in vivo 4D-OCT fish-eye imaging using 3D-UNet with multi-level residue decoder.
Biomedical Optics Express, pp. 5533-5546. Published 2024-08-28. DOI: 10.1364/boe.532258
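The data flow of the undersample-then-upsample scheme can be sketched with a naive nearest-neighbour baseline along the slow axis, together with the RMSE metric quoted in the abstract; the paper's CNN replaces the repeat step with learned high-frequency detail.

```python
import numpy as np

def upsample_nearest(vol, factor):
    # Naive stand-in for the CNN upsampling step: repeat each B-scan
    # along the undersampled (slow) axis. This only illustrates the
    # data flow, not the learned reconstruction.
    return np.repeat(vol, factor, axis=0)

def rmse(a, b):
    # Root mean square error between two volumes, as in the abstract.
    return float(np.sqrt(np.mean((a - b) ** 2)))

truth = np.random.default_rng(1).random((64, 32, 32))  # synthetic volume
under = truth[::4]                  # 4x undersampled acquisition
recon = upsample_nearest(under, 4)  # restore the original sampling grid
print(recon.shape)                  # (64, 32, 32)
print(rmse(recon, truth))           # baseline error; the CNN aims lower
```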
We developed an algorithm for automatically analyzing scattering-based light sheet microscopy (sLSM) images of anal squamous intraepithelial lesions. The method automatically segments sLSM images for nuclei and calculates seven features: nuclear intensity, intensity slope as a function of depth, nuclear-to-nuclear distance, nuclear-to-cytoplasm ratio, cell density, nuclear area, and the proportion of pixels corresponding to nuclei. A total of 187 images from 80 anal biopsies were used for feature analysis and classifier development. The automated nuclear segmentation method performed reliably, with a precision of 0.97 and a recall of 0.91 relative to manual segmentation. Of the seven features, six showed statistically significant differences between high-grade squamous intraepithelial lesions (HSIL) and non-HSIL (non-dysplastic or low-grade squamous intraepithelial lesions, LSIL). A classifier using a linear support vector machine (SVM) achieved promising performance in diagnosing HSIL versus non-HSIL: sensitivity of 90%, specificity of 70%, and area under the curve (AUC) of 0.89 for per-image diagnosis, and sensitivity of 90%, specificity of 80%, and AUC of 0.92 for per-biopsy diagnosis.
Title: Automated analysis of scattering-based light sheet microscopy images of anal squamous intraepithelial lesions.
Authors: Yongjun Kim, Jingwei Zhao, Brooke Liang, Momoka Sugimura, Kenneth Marcelino, Rafael Romero, Ameer Nessaee, Carmella Ocaya, Koeun Lim, Denise Roe, Michelle J Khan, Eric J Yang, Dongkyun Kang
Biomedical Optics Express, pp. 5547-5559. Published 2024-08-28. DOI: 10.1364/boe.531700
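The precision and recall quoted for the segmentation step can be illustrated at pixel level (the paper evaluates detected nuclei against manual segmentation; this toy version simply counts mask overlap):

```python
import numpy as np

def precision_recall(pred, truth):
    # Precision = TP / (TP + FP): fraction of predicted nucleus pixels
    # that are correct. Recall = TP / (TP + FN): fraction of true
    # nucleus pixels that were found.
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return tp / (tp + fp), tp / (tp + fn)

# Tiny hypothetical masks: 1 = nucleus pixel, 0 = background.
pred = np.array([1, 1, 0, 1], dtype=bool)
truth = np.array([1, 0, 1, 1], dtype=bool)
p, r = precision_recall(pred, truth)
print(p, r)  # 2/3 precision, 2/3 recall on this toy example
```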
Wesley B Baker, Rodrigo M Forti, Pascal Heye, Kristina Heye, Jennifer M Lynch, Arjun G Yodh, Daniel J Licht, Brian R White, Misun Hwang, Tiffany S Ko, Todd J Kilbaugh
We introduce a frequency-domain modified Beer-Lambert algorithm for diffuse correlation spectroscopy to non-invasively measure flow pulsatility and thus critical closing pressure (CrCP). Using the same optical measurements, CrCP was obtained with the new algorithm and with traditional nonlinear diffusion fitting. Results were compared to invasive determination of intracranial pressure (ICP) in piglets (n = 18). The new algorithm better predicted ICP elevations: the area under the curve (AUC) from logistic regression analysis was 0.85 for ICP ≥ 20 mmHg, versus 0.60 for the traditional analysis. The improved diagnostic performance likely results from better filtering of extra-cerebral tissue contamination and measurement noise.
Title: Modified Beer-Lambert algorithm to measure pulsatile blood flow, critical closing pressure, and intracranial hypertension.
Biomedical Optics Express, pp. 5511-5532. Published 2024-08-27. DOI: 10.1364/boe.529150
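Critical closing pressure is commonly estimated by extrapolating the linear relation between pulsatile blood flow and arterial blood pressure to zero flow. A generic least-squares sketch with synthetic waveform values, not the authors' frequency-domain algorithm:

```python
import numpy as np

# Linear model over the cardiac cycle: F = s * (ABP - CrCP), so the
# zero-flow intercept of the pressure-flow line is the CrCP estimate.
abp = np.array([60.0, 70.0, 80.0, 90.0, 100.0])  # pressure samples, mmHg
flow = 1.5 * (abp - 25.0)                        # synthetic: true CrCP = 25

slope, intercept = np.polyfit(abp, flow, 1)      # least-squares line fit
crcp = -intercept / slope                        # pressure where flow -> 0
print(round(crcp, 1))  # 25.0
```

Real DCS flow indices are noisy and the pressure-flow relation is only piecewise linear, which is part of why the paper's filtering of extra-cerebral contamination matters.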