Unsupervised quality assessment with generative adversarial networks for 3D OCTA microvascular imaging
Edmund Sumpena, Andrew Cornelio, Ana Collazo, Shu Jie Ting, Tim Kowalczyk, Xuejuan Jiang, Alexa Beiser, Sudha Seshadri, Amir H Kashani, Craig K Jones
Pub Date: 2025-12-18 | eCollection Date: 2026-01-01 | DOI: 10.1364/BOE.573843
Biomedical Optics Express 17(1): 378-393 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795412/pdf/

Eye movements, optical opacities, and other factors can introduce artifacts during the acquisition of optical coherence tomography angiography (OCTA) volumes, resulting in suboptimal imaging quality. We aim to develop an automated deep learning model that separates excellent-quality from suboptimal-quality volumes in a quantitative and objective manner. Existing works use supervised classifiers trained on 2D en face images, which 1) represent quality as rigid, discrete classes, 2) require large amounts of labeled data for every type of artifact to generalize effectively, and 3) discard valuable depth information from the original volume. We propose OCTA-GAN, an efficient 3D generative adversarial network architecture that incorporates multi-scale processing layers to assess scan quality by fusing fine vasculature details with larger anatomical context. The unsupervised model learns patterns associated with excellent-quality volumes and accurately determines the quality of unseen volumes. Experimental results show that OCTA-GAN's discriminator distinguishes excellent-quality from suboptimal-quality volumes with an AUC of 0.92, a sensitivity of 95.7%, and a specificity of 76.6%, surpassing the baseline 3D architecture (AUC = 0.55, sensitivity = 97.8%, specificity = 12.8%). Further analysis attributes the improved performance to the synergy between the generator model and discriminator architecture, whose robust feature representations effectively capture the intricate vasculature. Comparison with state-of-the-art 2D supervised en face classifiers demonstrates OCTA-GAN's ability to generalize across diverse artifacts and provides an interpretable organization of the output scores based on severity.
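The AUC, sensitivity, and specificity reported above are standard functions of the discriminator's scalar quality scores. As a minimal sketch, with hypothetical scores rather than the paper's data, the three metrics can be computed as follows (AUC via the rank-based Mann-Whitney formulation):

```python
def auc(pos_scores, neg_scores):
    """Probability that a random positive (excellent-quality) score ranks
    above a random negative (suboptimal-quality) score; ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

def sensitivity_specificity(pos_scores, neg_scores, threshold):
    """Fraction of positives at/above threshold, negatives below it."""
    tp = sum(s >= threshold for s in pos_scores)
    tn = sum(s < threshold for s in neg_scores)
    return tp / len(pos_scores), tn / len(neg_scores)

# Hypothetical discriminator scores (higher = more "excellent-like").
excellent = [0.91, 0.85, 0.78, 0.96, 0.88]
suboptimal = [0.42, 0.55, 0.31, 0.80]
print(auc(excellent, suboptimal))                         # 0.95
print(sensitivity_specificity(excellent, suboptimal, 0.6))  # (1.0, 0.75)
```

The threshold 0.6 is arbitrary here; in practice it is chosen on a validation set to trade off the two error rates.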
Multimodality catheter composed of intravascular ultrasound imaging and polymer optical fiber FFR functions for the diagnosis of cardiac disease
Weijin Chen, Xin Cheng, Ning Wang, Xuming Zhang, Min Su, Weichang Wu, Peitian Mu, Quan Du, Hwa-Yaw Tam, Weibao Qiu, Jiyan Dai
Pub Date: 2025-12-17 | eCollection Date: 2026-01-01 | DOI: 10.1364/BOE.578245
Biomedical Optics Express 17(1): 365-377 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795434/pdf/

In this study, we developed a minimally invasive intravascular catheter integrating ultrasonic imaging with fiber Bragg grating (FBG)-based mechanical sensing. By co-integrating a high-frequency miniature ultrasound transducer and a ZEONEX-based polymer optical FBG at the tip of a 1.2 mm catheter, we achieved synchronized visualization of vascular structure and acquisition of hemodynamic pressure data. In vitro experiments demonstrated that the device attained an axial resolution of 50 µm and a pressure sensitivity of 6.81 pm/kPa when operating in isotonic saline, a sensitivity significantly higher than that of commercial pressure wires. By combining dynamic pressure sensing with ultrasonic structural imaging in a single vascular interventional catheter, this technology overcomes the limitations of traditional single-modality catheters in assessing the extent of arterial stenosis. Animal experiments successfully captured systolic and diastolic pressures, confirming that the composite catheter can detect dynamic changes in intravascular stress and thereby support a multimodal approach to the diagnosis of cardiovascular diseases.
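The reported 6.81 pm/kPa sensitivity implies a simple linear readout: measured Bragg-wavelength shift divided by sensitivity gives pressure. A back-of-envelope sketch, assuming the linear calibration holds over the physiological range (the 109 pm example value is hypothetical, chosen to correspond to a typical systolic pressure):

```python
SENSITIVITY_PM_PER_KPA = 6.81  # reported in-saline sensitivity

def pressure_kpa(bragg_shift_pm):
    """Convert a measured Bragg-wavelength shift (pm) to pressure (kPa),
    assuming the reported linear sensitivity holds over the range."""
    return bragg_shift_pm / SENSITIVITY_PM_PER_KPA

# A typical systolic pressure of 120 mmHg ≈ 16.0 kPa would shift the
# Bragg wavelength by roughly 16.0 * 6.81 ≈ 109 pm.
print(round(pressure_kpa(109.0), 1))  # 16.0
```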
Wavenumber-space wavefront sensorless adaptive-optics for optical coherence tomography
Sebastián Ruiz-Lopera, David Veysset, Brett E Bouma, Néstor Uribe-Patarroyo
Pub Date: 2025-12-16 | eCollection Date: 2026-01-01 | DOI: 10.1364/BOE.582534
Biomedical Optics Express 17(1): 282-293 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795413/pdf/

Adaptive-optics optical coherence tomography (AO-OCT) allows the visualization of cellular-scale retinal structures; however, its adoption at both the research and clinical levels has been restricted by hardware and software complexity. Based on the observation that aberrations other than defocus are depth-independent, we propose an approach for wavefront sensorless AO-OCT that utilizes the interferometric fringe modulation in wavenumber (k-) space to optimize the wavefront correction. This approach avoids the need for tomogram reconstruction at each optimization iteration and increases robustness against axial motion. The proposed routine combines k-space optimization with focal plane shifting (i.e., defocus optimization) and evaluates the objective function B-scan-wise, achieving correction of 8 Zernike modes in ∼1.89 s. Experimental testing with a phantom model eye and a computational complexity analysis show that the proposed algorithm has lower computational complexity and a faster optimization time per mode while performing at least as well as depth-resolved optimization, using a LabVIEW implementation without the need for high-performance dedicated software or GPU acceleration. We demonstrate its performance in human retinal imaging in vivo.
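Wavefront sensorless AO routines of this family adjust one Zernike mode at a time against a scalar quality metric. The sketch below shows only that generic mode-by-mode search loop; the paper's actual metric is the k-space fringe modulation, whereas the `metric`, `hidden` aberration, and trial-amplitude set here are hypothetical stand-ins:

```python
def optimize_modes(metric, n_modes=8, amplitudes=(-0.3, -0.1, 0.0, 0.1, 0.3)):
    """Generic sensorless loop: for each Zernike mode, try a few trial
    amplitudes and keep the one that maximizes metric(coeffs)."""
    coeffs = [0.0] * n_modes
    for m in range(n_modes):
        best = max(amplitudes,
                   key=lambda a: metric(coeffs[:m] + [a] + coeffs[m + 1:]))
        coeffs[m] = best
    return coeffs

# Toy separable metric: peaks when coefficients match a hidden aberration.
hidden = [0.1, -0.3, 0.0, 0.3, -0.1, 0.0, 0.1, 0.0]
metric = lambda c: -sum((ci - hi) ** 2 for ci, hi in zip(c, hidden))
print(optimize_modes(metric))  # recovers `hidden`
```

In a real system each `metric` call costs one measurement, so the per-mode evaluation count and per-evaluation cost (here trivial; in the paper, no tomogram reconstruction) dominate the total correction time.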
Structural analysis of cone photoreceptors in AO-OCT enables S-cone identification by a support vector machine classifier
Qiuzhi Ji, Marcel T Bernucci, Yan Liu, James A Crowell, Davin J Miller, Donald T Miller
Pub Date: 2025-12-16 | eCollection Date: 2026-01-01 | DOI: 10.1364/BOE.581923
Biomedical Optics Express 17(1): 346-364 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795414/pdf/

Adaptive optics optical coherence tomography (AO-OCT) enables high-resolution, 3-dimensional imaging of cone photoreceptors in the living human retina. Histological studies have shown that short-wavelength-sensitive (S) cones are structurally distinct from medium- (M) and long-wavelength-sensitive (L) cones. However, current in vivo methods for classifying cones, such as retinal densitometry and optoretinography, are technically demanding because they require measuring cone function. Quantifying structural differences with AO-OCT may provide a simpler and faster alternative and offer new biomarkers for understanding how disease differentially affects photoreceptor subtypes. Here, we present a quantitative method that applies a support vector machine (SVM) classifier to structural measurements of AO-OCT volumes to identify individual S cones. We measured six structural parameters related to the inner and outer segments of each cone. Among 13,836 cones analyzed across six subjects, we found that S cones exhibited significantly longer inner segments, shorter outer segments, and wider diameters at the inner/outer segment junction than M and L cones. Although M and L cones are widely regarded as morphologically indistinguishable, we also found that L cones, on average, had longer outer segments than M cones. These structural differences were consistent across five of the six subjects at a single retinal eccentricity of 3.7° and across eccentricities from 2° to 12° temporal in one subject. Our SVM model used these features to achieve high classification accuracy for S cones. Validation of classification performance against optoretinography on the same eyes yielded F1 scores ranging from 0.78 to 0.93 in five of the six subjects.
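The F1 scores quoted above are the harmonic mean of precision and recall for the S-cone class. A minimal sketch of that validation metric, using hypothetical cone labels (optoretinography as the reference, SVM output as the prediction), not the paper's data:

```python
def f1_score(true_labels, pred_labels, positive="S"):
    """F1 for one positive class: harmonic mean of precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(true_labels, pred_labels))
    fp = sum(t != positive and p == positive for t, p in zip(true_labels, pred_labels))
    fn = sum(t == positive and p != positive for t, p in zip(true_labels, pred_labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical labels for eight cones.
truth = ["S", "L", "M", "S", "L", "M", "S", "L"]
pred  = ["S", "L", "M", "S", "M", "M", "L", "L"]
print(round(f1_score(truth, pred), 2))  # 0.8
```

F1 is a sensible choice here because S cones are a small minority (~5-10% of cones), so plain accuracy would look high even for a classifier that never predicts "S".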
Modeling spectroradiometric measurements of oral mucosal tissue autofluorescence
Joyce E Farrell, Xi Mou, Brian A Wandell
Pub Date: 2025-12-16 | eCollection Date: 2026-01-01 | DOI: 10.1364/BOE.575722
Biomedical Optics Express 17(1): 305-321 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795424/pdf/

Spectroradiometric fluorescence measurements were collected from the dorsal tongue and inner lip of healthy volunteers. These sites were chosen to represent the distinct spectral features that differentiate keratinized from non-keratinized oral tissues, as documented in previous studies. A computational model was then applied to estimate the relative contributions of key fluorophores and to quantify the influence of blood absorption on the observed fluorescence spectra. The resulting dataset and model, both freely available, serve as reference standards for healthy oral tissue and support the development of quantitative, non-invasive imaging systems for consistent and reproducible assessment of oral mucosal health.
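Estimating "relative contributions of key fluorophores" is, at its core, a spectral-unmixing problem: fit the measured spectrum as a weighted sum of known fluorophore basis spectra. A minimal two-component least-squares sketch via the normal equations; the toy spectra `A` and `B` are hypothetical, and the paper's model additionally accounts for blood absorption:

```python
def unmix_two(spectrum, basis_a, basis_b):
    """Least-squares weights (wa, wb) such that spectrum ≈ wa*A + wb*B,
    solved from the 2x2 normal equations."""
    aa = sum(a * a for a in basis_a)
    bb = sum(b * b for b in basis_b)
    ab = sum(a * b for a, b in zip(basis_a, basis_b))
    sa = sum(s * a for s, a in zip(spectrum, basis_a))
    sb = sum(s * b for s, b in zip(spectrum, basis_b))
    det = aa * bb - ab * ab
    return (sa * bb - sb * ab) / det, (sb * aa - sa * ab) / det

# Toy basis spectra (arbitrary units); recover known weights 0.7 and 0.3.
A = [1.0, 2.0, 3.0, 2.0, 1.0]
B = [0.5, 0.5, 1.0, 2.0, 3.0]
mix = [0.7 * a + 0.3 * b for a, b in zip(A, B)]
wa, wb = unmix_two(mix, A, B)
print(round(wa, 3), round(wb, 3))  # 0.7 0.3
```

With more than two fluorophores the same idea generalizes to a linear least-squares solve over a basis matrix.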
Long-wavelength oblique back-illumination microscopy for deep in vivo imaging
Ye-Chan Cho, Jin Hee Hong, Sungsam Kang, Wonjun Choi, Wonshik Choi, Yookyung Jung
Pub Date: 2025-12-16 | DOI: 10.1364/BOE.579269
Biomedical Optics Express 17(1): 294-304 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795425/pdf/

Oblique back-illumination microscopy (OBM) is a label-free imaging technique that captures differential forward scattering in reflection mode to generate high-contrast pseudo-transmission images of cells and microvessels. While OBM benefits from multiple light scattering to detect forward-scattered signals, its imaging depth is constrained by tissue scattering between the objective lens and the imaging plane. In this study, we introduce a long-wavelength OBM system operating at 1650 nm, significantly longer than previous implementations, to mitigate scattering effects and extend imaging depth. Compared to a similar system using an 800 nm light source, our 1650 nm OBM achieves markedly deeper in vivo imaging of the mouse brain. This advancement in high-contrast, deep-tissue imaging holds promise for more detailed investigations into the pathophysiology of living biological systems.
Dynamic optical coherence tomography algorithm for label-free assessment of swiftness and occupancy of intratissue moving scatterers
Rion Morishita, Pradipta Mukherjee, Ibrahim Abd El-Sadek, Tanatchaya Seesan, Tomoko Mori, Atsuko Furukawa, Shinichi Fukuda, Donny Lukmanto, Satoshi Matsusaka, Shuichi Makita, Yoshiaki Yasuno
Pub Date: 2025-12-16 | DOI: 10.1364/BOE.574972
Biomedical Optics Express 17(1): 322-345 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795443/pdf/

Dynamic optical coherence tomography (DOCT) statistically analyzes fluctuations in time-sequential OCT signals, enabling label-free, three-dimensional visualization of intratissue and intracellular activities. Current DOCT methods, such as logarithmic intensity variance (LIV) and OCT correlation decay speed (OCDS), have several limitations. In particular, their values are not directly related to intratissue motion, and hence are not interpretable in terms of tissue motility. We introduce an open-source DOCT algorithm that provides a more direct interpretation of DOCT in terms of the dynamic scatterer ratio and scatterer speed in the tissue. The detailed properties of the new and conventional DOCT methods are investigated by numerical simulations based on our open-source DOCT simulation framework, and experimental validation with in vitro and ex vivo samples demonstrates the feasibility of the method.
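LIV, one of the conventional contrasts named above, is simply the time variance of the dB-scaled OCT intensity at each voxel. A minimal sketch with hypothetical intensity time series (linear-scale, arbitrary units):

```python
import math

def liv(intensity_series):
    """Logarithmic intensity variance (LIV): time variance of the
    dB-scaled OCT intensity at one voxel."""
    db = [10.0 * math.log10(i) for i in intensity_series]
    mean = sum(db) / len(db)
    return sum((x - mean) ** 2 for x in db) / len(db)

# A fluctuating (dynamic) voxel shows much higher LIV than a static one.
dynamic = [1.0, 4.0, 2.0, 8.0, 1.5]
static  = [3.0, 3.1, 2.9, 3.0, 3.05]
print(liv(dynamic) > liv(static))  # True
```

This also illustrates the interpretability problem the paper targets: a large LIV indicates "something is fluctuating," but by itself says nothing about what fraction of scatterers move or how fast.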
LensPlus: a high space-bandwidth optical imaging technique
Neha Goswami, Mark A Anastasio
Pub Date: 2025-12-15 | eCollection Date: 2026-01-01 | DOI: 10.1364/BOE.580164
Biomedical Optics Express 17(1): 265-281 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795420/pdf/

The space-bandwidth product (SBP) imposes a fundamental limit on achieving high resolution and a large field of view simultaneously. High-NA objectives provide fine structural detail at the cost of reduced spatial coverage and slower scanning compared to a low-NA objective, while low-NA objectives offer wide fields of view but compromised resolution. Here, we introduce LensPlus, a deep learning-based framework that enhances the SBP of quantitative phase imaging (QPI) without requiring hardware modifications. By training on paired datasets acquired with low-NA and high-NA objectives, LensPlus learns to recover high-frequency features lost in low-NA measurements, effectively bridging the resolution gap while preserving the large field of view, thereby increasing the SBP. We demonstrate that LensPlus can transform images acquired with a 10x/0.3 NA objective to a quality comparable to that obtained with a 40x/0.95 NA objective, yielding a 2D-SBP improvement of approximately 3.5x; a second model likewise transforms 40x/0.95 NA images to 100x/1.45 NA quality, a 2.04x improvement. Importantly, unlike adversarial models, LensPlus employs a non-generative model to minimize image hallucinations and ensure quantitative fidelity, as verified through spectral analysis. Beyond QPI, LensPlus is broadly applicable to other lens-based imaging modalities, enabling wide-field, high-resolution imaging for time-lapse studies, large-area tissue mapping, and applications where high-NA oil objectives are impractical.
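The SBP can be thought of as the number of resolvable pixels in one acquisition. A back-of-envelope sketch under an idealized Abbe/Nyquist model (the field-of-view value and wavelength are hypothetical, and the paper's empirically measured ~3.5x gain sits below the idealized NA-ratio ceiling computed here, since a network recovers only part of the high-NA bandwidth):

```python
def sbp_2d(fov_mm, na, wavelength_um=0.55):
    """2D space-bandwidth product under a simple Abbe/Nyquist model:
    resolvable-pixel count = (FOV / Nyquist pixel)^2."""
    pixel_um = wavelength_um / (2.0 * na) / 2.0  # Abbe resolution, Nyquist-sampled
    return (fov_mm * 1000.0 / pixel_um) ** 2

fov_10x = 2.2                        # hypothetical 10x field of view, mm
raw = sbp_2d(fov_10x, 0.3)           # 10x/0.3 NA as acquired
enhanced = sbp_2d(fov_10x, 0.95)     # same FOV at 40x/0.95 NA resolution
print(round(enhanced / raw, 1))      # (0.95/0.3)^2 ≈ 10.0
```

Note the ratio depends only on the NA ratio, not on the assumed FOV or wavelength, which is why it serves as an upper bound on what resolution transfer at fixed FOV could gain.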
Pub Date : 2025-12-15eCollection Date: 2026-01-01DOI: 10.1364/BOE.579043
Miguel Cardoso Mestre, Jacob R Lamb, Madeline A Lancaster, James D Manton
Expansion microscopy (ExM) has enabled nanoscale imaging of tissues by physically enlarging biological samples in a swellable hydrogel. However, the increased sample size and water-based environment pose challenges for deep imaging using conventional inverted confocal microscopes, particularly due to the limited working distance of high-numerical-aperture (NA) water immersion objectives. Here, we introduce a practical imaging alternative that utilizes an inverted water-dipping objective and a refractive-index-matched optical path using fluorinated ethylene propylene (FEP) film. Through point spread function (PSF) measurements and simulations, we show that the FEP film introduces predominantly defocus-like wavefront profiles characteristic of high NA systems, which result in an easily correctable axial shift of the focal plane. To ensure stable immersion and refractive index continuity, we use an arrangement relying on an FEP film, Immersol W, water and a FEP-based imaging dish. This configuration achieves sub-micron lateral and axial resolution, supports large tile-scan acquisitions, and maintains image quality across depths exceeding 800 µm. We validate the system by imaging 4×-expanded U2OS cells and human cerebral organoids. Our approach provides a low-cost, plug-and-play solution for high-resolution volumetric imaging of expanded samples using standard inverted microscopes.
{"title":"Maximising imaging volumes of expanded tissues for inverted fluorescence microscopy.","authors":"Miguel Cardoso Mestre, Jacob R Lamb, Madeline A Lancaster, James D Manton","doi":"10.1364/BOE.579043","DOIUrl":"10.1364/BOE.579043","url":null,"abstract":"<p><p>Expansion microscopy (ExM) has enabled nanoscale imaging of tissues by physically enlarging biological samples in a swellable hydrogel. However, the increased sample size and water-based environment pose challenges for deep imaging using conventional inverted confocal microscopes, particularly due to the limited working distance of high-numerical-aperture (NA) water immersion objectives. Here, we introduce a practical imaging alternative that utilizes an inverted water-dipping objective and a refractive-index-matched optical path using fluorinated ethylene propylene (FEP) film. Through point spread function (PSF) measurements and simulations, we show that the FEP film introduces predominantly defocus-like wavefront profiles characteristic of high NA systems, which result in an easily correctable axial shift of the focal plane. To ensure stable immersion and refractive index continuity, we use an arrangement relying on an FEP film, Immersol W, water and a FEP-based imaging dish. This configuration achieves sub-micron lateral and axial resolution, supports large tile-scan acquisitions, and maintains image quality across depths exceeding 800 µm. We validate the system by imaging 4×-expanded U2OS cells and human cerebral organoids. 
Our approach provides a low-cost, plug-and-play solution for high-resolution volumetric imaging of expanded samples using standard inverted microscopes.</p>","PeriodicalId":8969,"journal":{"name":"Biomedical optics express","volume":"17 1","pages":"256-264"},"PeriodicalIF":3.2,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795441/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145965246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
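The "easily correctable axial shift" described in the abstract above is the standard paraxial effect of inserting a plane-parallel plate into a converging beam. A minimal sketch of the estimate is below; the film thickness and refractive-index values are illustrative assumptions, not figures taken from the paper:

```python
def focal_shift(thickness_um: float, n_plate: float, n_medium: float = 1.333) -> float:
    """Paraxial axial focal shift caused by a plane-parallel plate.

    A plate of index n_plate and the given thickness, immersed in a medium
    of index n_medium, displaces the focal plane by
        dz = t * (1 - n_medium / n_plate).
    A positive dz means the focus lands deeper than nominally commanded.
    """
    return thickness_um * (1.0 - n_medium / n_plate)


# Illustrative values: a 50 um FEP film (n ~ 1.344) in water (n ~ 1.333)
# displaces the focus by well under a micron.
shift = focal_shift(50.0, n_plate=1.344)
```

Because FEP is nearly index-matched to water, the predicted shift is small and constant with depth, so in practice it can simply be folded into the stage position calibration.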
Pub Date: 2025-12-12 · eCollection Date: 2026-01-01 · DOI: 10.1364/BOE.583504
Haibin Li, Yuye Wang, Zelong Wang, Ning Mu, Tunan Chen, Hua Feng, Degang Xu, Jianquan Yao
The diagnosis and treatment of gliomas depend greatly on precise delineation of tumor boundaries and rapid extraction of molecular pathological features. Developing high-resolution, high-sensitivity terahertz (THz) attenuated total reflection (ATR) imaging technology could greatly expand its applications in clinical medicine. In this study, we demonstrated a THz ATR imaging system based on a solid immersion lens (SIL). The mechanism by which the SIL improves resolution was studied theoretically and experimentally, and the optimal system parameters were selected based on the theoretical analysis. The spatial resolution of the THz imaging system reached 120 μm × 140 μm. On this basis, the THz reflectivity of fresh normal brain tissue and glioma tissue was studied in a mouse model. Comparison with visible-light, MR, and H&E-stained images confirmed accurate identification of glioma region boundaries and microscopic structures in brain tissue. Glioma regions in the H&E-stained and THz ATR images were segmented automatically using the Chan-Vese active contour model, with all performance evaluation rates above 95%. These promising results suggest that SIL-based THz ATR imaging could serve as a tool for label-free, high-sensitivity, real-time imaging of brain gliomas.
{"title":"Terahertz attenuated total reflection imaging of fresh brain glioma based on a solid immersion lens.","authors":"Haibin Li, Yuye Wang, Zelong Wang, Ning Mu, Tunan Chen, Hua Feng, Degang Xu, Jianquan Yao","doi":"10.1364/BOE.583504","DOIUrl":"10.1364/BOE.583504","url":null,"abstract":"<p><p>The diagnosis and treatment of gliomas depend greatly on the precise delineation of tumor boundaries and the rapid extraction of molecular pathological features. The development of high-resolution and high-sensitivity terahertz (THz) attenuated total reflection (ATR) imaging technology can greatly expand its application in the clinical medical field. In this study, we demonstrated a THz ATR imaging system based on a solid immersion lens (SIL). The resolution improvement mechanism by a solid immersion lens in the THz ATR imaging system has been studied theoretically and experimentally. According to the theoretical analysis results, the optimal parameters of the system have been selected. The spatial resolution of the THz imaging system was up to 120μm × 140μm. On this basis, the THz reflectivity of fresh normal brain tissue and glioma tissue in a mouse model was studied. Compared with the visible, MR, and H&E-stained images, the accurate identification of the glioma region boundary and microscopic structures in brain tissues was realized. The glioma regions in H&E-stained and THz ATR images were segmented automatically based on the Chan-Vese active contour model, where the performance evaluation rates were all above 95%. 
These promising results suggest that THz ATR imaging based on SIL could be used as a tool for label-free, high-sensitivity, and real-time imaging of brain gliomas.</p>","PeriodicalId":8969,"journal":{"name":"Biomedical optics express","volume":"17 1","pages":"242-255"},"PeriodicalIF":3.2,"publicationDate":"2025-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12795445/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145965239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
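The ">95%" performance-evaluation rates quoted above are typically overlap metrics computed between the automatic THz segmentation and the H&E reference mask. The abstract does not specify which metrics were used, so the sketch below shows one common choice, the Dice coefficient, on flat binary masks as a purely illustrative example:

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks given as flat 0/1 sequences.

    Dice = 2 * |P intersect T| / (|P| + |T|);
    1.0 means perfect overlap, 0.0 means no overlap.
    """
    if len(pred) != len(truth):
        raise ValueError("masks must have the same size")
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / total if total else 1.0


# Toy example: 4-pixel masks that agree on one tumor pixel.
score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])  # 2*1 / (2+1) = 2/3
```

In practice the same function would be applied to the flattened Chan-Vese output and the expert-annotated H&E mask, often alongside sensitivity and specificity computed from the same pixel-wise confusion counts.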