Strain-induced variation of the refractive index is the main mechanism of strain detection in photoacoustic experiments. However, weak strain-optic coupling in many materials limits the application of photoacoustics as an imaging tool. Straightforward deposition of a transparent thin film as a top layer has previously been shown to enhance the signal through elastic boundary effects. In this paper, we study photoacoustic signal formation in a metal covered by transparent thin films of different thicknesses and demonstrate that, in addition to boundary effects, the photoacoustic response is affected by optical effects caused by the presence of the top layer. The interplay of these optical effects leads to a complex temporal signal shape that depends strongly on the film thickness.
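As an illustrative aside, the thickness dependence of such optical effects can already be seen in a simple single-film interference model. The sketch below, assuming normal incidence and placeholder refractive indices and probe wavelength (none of which are taken from the paper), computes how the reflectance of an air/film/metal stack oscillates with film thickness using the standard Airy formula.

```python
# Minimal sketch: normal-incidence reflectance of an air / transparent film / metal
# stack as a function of film thickness, using the standard single-film (Airy) formula.
# Refractive indices and wavelength are illustrative placeholders, not values from the paper.
import numpy as np

wavelength = 800e-9          # probe wavelength [m] (assumed)
n_air = 1.0
n_film = 1.45 + 0j           # transparent dielectric film (assumed)
n_metal = 2.6 + 3.4j         # metal substrate, complex index (assumed)

def fresnel_r(n1, n2):
    """Normal-incidence Fresnel amplitude reflection coefficient."""
    return (n1 - n2) / (n1 + n2)

def stack_reflectance(d):
    """Reflectance of the three-layer stack for film thickness d [m]."""
    r01 = fresnel_r(n_air, n_film)
    r12 = fresnel_r(n_film, n_metal)
    beta = 2 * np.pi * n_film * d / wavelength   # one-way phase accumulated in the film
    phase = np.exp(2j * beta)
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return np.abs(r) ** 2

thicknesses = np.linspace(0, 1e-6, 201)          # 0 to 1 micron
R = np.array([stack_reflectance(d) for d in thicknesses])
print(f"Reflectance varies between {R.min():.3f} and {R.max():.3f} over 0-1 um")
```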
We present a sensitive and compact quantum cascade laser-based photoacoustic greenhouse gas sensor for the detection of CO2, CH4 and CO and discuss its applicability to on-line, real-time trace greenhouse gas analysis. Differential photoacoustic resonators of different dimensions were used and optimized to balance sensitivity against signal saturation. The effects of the ambient parameters gas flow rate, pressure and humidity on the photoacoustic signal, as well as spectral cross-interference, were investigated. Thanks to the combined operation of an in-house designed laser controller and lock-in amplifier, detection sensitivities of 5.6 ppb for CH4, 0.8 ppb for CO and 17.2 ppb for CO2 were achieved at a signal averaging time of 1 s, together with an excellent dynamic range spanning more than six orders of magnitude. A continuous five-day outdoor test at an observation station in China's Qinling National Botanical Garden (108°29′ E, 33°43′ N) demonstrated the stability and reliability of the greenhouse gas sensor.
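As a rough illustration of the detection principle, the sketch below demodulates a synthetic photoacoustic microphone trace with a digital dual-phase lock-in at the laser modulation frequency, the generic scheme underlying such sensors; the sampling rate, modulation frequency, and signal levels are arbitrary assumptions, not the authors' in-house implementation.

```python
# Minimal sketch of digital lock-in demodulation of a photoacoustic microphone
# signal at the laser modulation frequency; all values are illustrative, not from the paper.
import numpy as np

fs = 200_000          # sampling rate [Hz] (assumed)
f_mod = 4_000         # laser modulation frequency [Hz] (assumed)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic microphone trace: photoacoustic tone plus noise (stand-in for real data).
rng = np.random.default_rng(0)
mic = 2e-3 * np.sin(2 * np.pi * f_mod * t + 0.3) + 1e-3 * rng.standard_normal(t.size)

# Dual-phase demodulation followed by averaging (1 s integration here).
ref_i = np.sin(2 * np.pi * f_mod * t)
ref_q = np.cos(2 * np.pi * f_mod * t)
x = 2 * np.mean(mic * ref_i)      # in-phase component
y = 2 * np.mean(mic * ref_q)      # quadrature component
amplitude = np.hypot(x, y)        # lock-in amplitude, proportional to gas concentration

print(f"Recovered amplitude: {amplitude*1e3:.3f} mV (true: 2.000 mV)")
```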
This study highlights the potential of scanning optoacoustic angiography (OA) for identifying alterations of the superficial vasculature in patients with post-thrombotic syndrome (PTS) of the foot, a venous stress disorder with significant morbidity that develops as a long-term consequence of deep venous thrombosis. The traditional angiography methods available in the clinic cannot reliably assess the state of the peripheral veins that provide blood outflow from the skin, a key hallmark of the personalized risk of developing PTS after venous thrombosis. Our findings indicate that OA can detect an increase in the blood volume, diameter, and tortuosity of superficial blood vessels. The inability to spatially separate the vascular plexuses of the dermis and subcutaneous adipose tissue serves as a crucial criterion for distinguishing PTS from normal vasculature. Furthermore, our study demonstrates the ability of scanning optoacoustic angiography to detect a decrease in blood filling when the limb is elevated and an increase when it is lowered.
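For readers unfamiliar with the quantitative vascular metrics mentioned above, the sketch below computes one common tortuosity index (arc length divided by chord length) along a traced vessel centerline; this is a generic definition with synthetic coordinates, not the exact quantification pipeline used in the study.

```python
# Minimal sketch of one common vessel tortuosity metric (arc length / chord length)
# computed along a traced centerline; a generic definition, not necessarily the
# exact quantification used in the study.
import numpy as np

def tortuosity(centerline):
    """centerline: (N, 2) or (N, 3) array of ordered points along a vessel."""
    pts = np.asarray(centerline, dtype=float)
    arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))   # path length
    chord = np.linalg.norm(pts[-1] - pts[0])                     # straight-line distance
    return arc / chord if chord > 0 else np.inf

# Example: a gently curved vessel segment (synthetic coordinates, in mm).
s = np.linspace(0, np.pi, 50)
vessel = np.column_stack([s, 0.3 * np.sin(s)])
print(f"Tortuosity index: {tortuosity(vessel):.3f}")   # 1.0 means perfectly straight
```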
Microscopic defects in flip chips, originating from manufacturing, significantly affect performance and longevity. Post-fabrication sampling methods ensure product functionality but lack the in-line defect monitoring needed to enhance chip yield and lifespan in real time. This study introduces a photoacoustic remote sensing (PARS) system for in-line imaging and defect recognition during flip-chip fabrication. We first propose a PARS imaging method based on continuous acquisition combined with parallel-processing image reconstruction, which achieves real-time imaging during the scanning of flip-chip samples and reduces the average reconstruction time from approximately 1134 ms to 38 ms. We then propose an improved YOLOv7 with a space-to-depth block (IYOLOv7-SPD), an enhanced deep-learning defect recognition method, for accurate in-line recognition and localization of microscopic defects during real-time PARS imaging. The experimental results validate the viability of the proposed system for enhancing the lifespan and yield of flip-chip products in chip manufacturing facilities.
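The space-to-depth idea referenced by IYOLOv7-SPD can be sketched as follows: each 2×2 spatial neighborhood is packed into the channel dimension before a convolution, so that downsampling discards no pixels. The PyTorch snippet below is a generic building block under that assumption, not the authors' exact IYOLOv7-SPD layer.

```python
# Minimal sketch of a space-to-depth (SPD) block in PyTorch: each 2x2 spatial
# neighborhood is rearranged into the channel dimension, halving resolution
# without discarding pixels, followed by a convolution. Generic building block,
# not the authors' exact IYOLOv7-SPD implementation.
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv2d(in_ch * scale * scale, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        s = self.scale
        # (b, c, h, w) -> (b, c*s*s, h/s, w/s): pack each s-by-s patch into channels.
        x = x.view(b, c, h // s, s, w // s, s)
        x = x.permute(0, 1, 3, 5, 2, 4).reshape(b, c * s * s, h // s, w // s)
        return self.conv(x)

feat = torch.randn(1, 64, 128, 128)           # dummy PARS feature map
print(SpaceToDepthConv(64, 128)(feat).shape)  # torch.Size([1, 128, 64, 64])
```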
Traditional reconstruction methods for sparse-view photoacoustic tomography (PAT) often produce significant artifacts. Here, a novel image-to-image translation method based on an unsupervised artifact disentanglement network (ADN), named PAT-ADN, was proposed to address this issue. The network is equipped with specialized encoders and decoders responsible for encoding and decoding the artifact and content components of unpaired images, respectively. The performance of the proposed PAT-ADN was evaluated using circular phantom data and in vivo animal data. The results demonstrate that PAT-ADN effectively removes artifacts. In particular, under an extremely sparse view (e.g., 16 projections), the structural similarity index and peak signal-to-noise ratio of in vivo data are improved by ∼188 % and ∼85 %, respectively, compared to traditional reconstruction methods. PAT-ADN improves the imaging performance of PAT, opening up possibilities for its application in multiple domains.
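The disentanglement principle can be summarized structurally: separate encoders extract content and artifact latents from unpaired sparse-view and full-view images, and decoders recombine them to synthesize artifact-free and artifact-corrupted outputs. The PyTorch skeleton below illustrates this data flow only; layer choices and sizes are placeholders rather than the published PAT-ADN architecture.

```python
# Structural sketch of the artifact-disentanglement idea behind PAT-ADN:
# separate encoders for content and artifact components, and decoders that
# recombine the latents to synthesize artifact-free / artifact-corrupted images.
# Layer choices and sizes are placeholders, not the published architecture.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class ADNSketch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc_content = conv_block(1, ch)          # content of any image
        self.enc_artifact = conv_block(1, ch)         # artifacts of sparse-view images
        self.dec_clean = nn.Conv2d(ch, 1, 3, padding=1)          # content -> clean image
        self.dec_artifact = nn.Conv2d(2 * ch, 1, 3, padding=1)   # content + artifact -> corrupted image

    def forward(self, x_sparse, x_full):
        c_sparse = self.enc_content(x_sparse)         # content latent of sparse-view image
        a_sparse = self.enc_artifact(x_sparse)        # artifact latent of sparse-view image
        c_full = self.enc_content(x_full)             # content latent of full-view image
        clean = self.dec_clean(c_sparse)                                 # artifact removed
        corrupted = self.dec_artifact(torch.cat([c_full, a_sparse], 1))  # artifact transferred
        return clean, corrupted

x_sparse = torch.randn(1, 1, 128, 128)   # unpaired sparse-view reconstruction (dummy)
x_full = torch.randn(1, 1, 128, 128)     # unpaired full-view reconstruction (dummy)
clean, corrupted = ADNSketch()(x_sparse, x_full)
print(clean.shape, corrupted.shape)
```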
Expansion microscopy (ExM) is a promising technology that enables nanoscale imaging on conventional optical microscopes by physically magnifying specimens. Here, we report the development of a strategy that enables (i) on-demand labeling of subcellular organelles in live cells for ExM through transfection of fluorescent proteins that are well retained during the expansion procedure, and (ii) non-fluorescent chromogenic color development for efficient bright-field and photoacoustic imaging in both planar and volumetric formats, applicable to both cultured cells and biological tissues. Compared to conventional ExM methods, our strategy provides an expanded toolkit, which we term expansion fluorescence and photoacoustic microscopy (ExFLPAM), by allowing on-demand fluorescent protein labeling of cultured cells as well as non-fluorescent, absorption-contrast imaging of biological samples.
A miniaturized photoacoustic spectroscopy-based gas sensor is proposed for detecting sub-ppm levels of carbonyl sulfide (OCS) using a tunable mid-infrared interband cascade laser (ICL) and a Helmholtz photoacoustic cell. The tuning characteristics of the ICL, with a center wavelength of 4823.3 nm, were investigated to determine the optimal driving parameters. A Helmholtz photoacoustic cell with a volume of ∼2.45 mL was designed and optimized to miniaturize the measurement system. By optimizing the modulation parameters and signal processing, the system was verified to have a good linear response to OCS concentration. With a lock-in amplifier integration time of 10 s, the 1σ noise standard deviation in differential mode was 0.84 mV, and a minimum detection limit (MDL) of 409.2 ppbV was achieved at atmospheric pressure and room temperature.
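As a brief illustration of how such a detection limit follows from a linear sensor response, the sketch below fits a line to a hypothetical signal-versus-concentration calibration and divides the 1σ noise by the slope; the calibration points are invented placeholders, and only the 0.84 mV noise figure comes from the abstract.

```python
# Minimal sketch of how a minimum detection limit (MDL) follows from a linear
# calibration and the 1-sigma noise floor: MDL = noise / slope. The calibration
# points below are illustrative placeholders, not the paper's measured data.
import numpy as np

# Hypothetical calibration: photoacoustic signal amplitude [mV] vs. OCS concentration [ppm].
concentration_ppm = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
signal_mv = np.array([1.05, 2.02, 4.10, 8.15, 16.3])

slope, intercept = np.polyfit(concentration_ppm, signal_mv, 1)   # slope in mV per ppm
noise_1sigma_mv = 0.84                                           # noise figure from the abstract

mdl_ppm = noise_1sigma_mv / slope
print(f"Slope: {slope:.3f} mV/ppm, MDL: {mdl_ppm*1e3:.0f} ppbV")
```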
Photoacoustic microscopy (PAM) has gained increasing popularity in biomedical imaging, providing new opportunities for tissue monitoring and characterization. With the development of deep learning techniques, convolutional neural networks have been used for PAM image resolution enhancement and denoising. However, this approach still faces several inherent challenges. This work presents a Unified PhotoAcoustic Microscopy image reconstruction Network (UPAMNet) for both PAM image super-resolution and denoising. The proposed method takes advantage of deep image priors by incorporating three effective attention-based modules and a mixed training constraint at both the pixel and perception levels. The generalization ability of the model is evaluated in detail, and experimental results on different PAM datasets demonstrate the superior performance of the method. Peak signal-to-noise ratio improvements of 0.59 dB and 1.37 dB are achieved for 1/4 and 1/16 sparse image reconstruction, respectively, and of 3.9 dB for image denoising.
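A mixed pixel- and perception-level constraint is commonly realized by combining an L1 pixel loss with a perceptual loss on pretrained VGG features. The sketch below follows that common recipe with an assumed layer cut and loss weights; it is not the published UPAMNet training objective.

```python
# Sketch of a mixed pixel- plus perception-level training constraint: an L1 loss on
# pixels combined with an L1 loss on pretrained VGG16 feature maps. Layer choice and
# weights are assumptions, not the published UPAMNet configuration.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class MixedLoss(nn.Module):
    def __init__(self, pixel_weight=1.0, perceptual_weight=0.1):
        super().__init__()
        vgg = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()  # up to relu3_3
        for p in vgg.parameters():
            p.requires_grad_(False)                    # frozen feature extractor
        self.vgg = vgg
        self.l1 = nn.L1Loss()
        self.pixel_weight = pixel_weight
        self.perceptual_weight = perceptual_weight

    def forward(self, pred, target):
        # Grayscale PAM images are repeated to 3 channels for the VGG feature extractor.
        pred3, target3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        pixel_loss = self.l1(pred, target)
        perceptual_loss = self.l1(self.vgg(pred3), self.vgg(target3))
        return self.pixel_weight * pixel_loss + self.perceptual_weight * perceptual_loss

loss_fn = MixedLoss()
pred = torch.rand(1, 1, 64, 64)     # network output (dummy)
target = torch.rand(1, 1, 64, 64)   # ground-truth PAM image (dummy)
print(loss_fn(pred, target).item())
```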
Quantitative photoacoustic tomography (qPAT) holds great potential for estimating chromophore concentrations, but the involved optical inverse problem, aiming to recover absorption coefficient distributions from photoacoustic images, remains challenging. To address this problem, we propose an extractor-attention-predictor network architecture (EAPNet), which employs a contracting–expanding structure to capture contextual information alongside a multilayer perceptron to enhance nonlinear modeling capability. A spatial attention module is introduced to facilitate the utilization of important information. We also use a balanced loss function to prevent network parameter updates from being biased towards specific regions. Our method obtains satisfactory quantitative metrics in simulated and real-world validations. Moreover, it demonstrates superior robustness to target properties and yields reliable results for targets with small size, deep location, or relatively low absorption intensity, indicating its broader applicability. Compared to the conventional UNet, EAPNet exhibits improved efficiency, significantly enhancing performance while maintaining a similar network size and computational complexity.
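A spatial attention module of the kind mentioned above is often built by pooling across channels, convolving, and applying a sigmoid to obtain a per-pixel weighting map. The sketch below follows that common (CBAM-style) construction as an assumption about the general idea, not the exact EAPNet module.

```python
# Minimal sketch of a spatial attention module: channel-wise average and max pooling
# are concatenated, convolved, and passed through a sigmoid to produce a per-pixel
# weighting map. In the spirit of the module described, not the exact EAPNet design.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_pool = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        max_pool = x.amax(dim=1, keepdim=True)          # (B, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn                                 # emphasize informative regions

feat = torch.randn(2, 32, 64, 64)      # dummy feature map from the extractor stage
print(SpatialAttention()(feat).shape)  # torch.Size([2, 32, 64, 64])
```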