Nonlinear metasurfaces have grown rapidly in recent years owing to their potential in applications such as infrared imaging and spectroscopy. However, because of their low conversion efficiencies, several strategies have been adopted to enhance their performance, including employing resonances at the signal or nonlinear emission wavelengths. This strategy results in a narrow operational band, which has bottlenecked many applications, including nonlinear holography, image encoding, and nonlinear metalenses. Here, we overcome this issue by introducing a nonlinear imaging platform that uses a pump beam to enhance signal conversion through four-wave mixing (FWM), whereby the metasurface is resonant at the pump wavelength rather than at the signal or nonlinear emission wavelengths. As a result, we demonstrate broadband nonlinear imaging of arbitrary objects using metasurfaces. We introduce a silicon disk-on-slab metasurface with an excitable guided-mode resonance at the pump wavelength, enabling direct conversion of broadband IR images, spanning >1000 to 4000 nm, into the visible. Importantly, because the signal conversion efficiency scales quadratically with the pump beam intensity, adopting FWM substantially reduces the dependence on high-power signal inputs or resonant features at the signal wavelength. Our results therefore unlock broadband infrared imaging capabilities with metasurfaces, marking a promising advance toward next-generation all-optical infrared imaging with chip-scale photonic devices.
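The quadratic pump-intensity scaling mentioned above can be illustrated with a minimal sketch. The function and the proportionality constant `kappa` below are hypothetical and purely illustrative; they are not the authors' model, but they capture why a pump-resonant design boosts conversion across a broad signal band: doubling the pump intensity quadruples the conversion efficiency regardless of signal wavelength.

```python
def fwm_converted_power(p_signal, i_pump, kappa=1e-6):
    """Illustrative degenerate-FWM scaling: P_out = kappa * P_signal * I_pump**2.

    kappa is an arbitrary effective nonlinear coefficient, not a fitted value.
    """
    return kappa * p_signal * i_pump ** 2


# Conversion efficiency eta = P_out / P_signal scales as I_pump**2,
# so a 2x pump enhancement yields a 4x efficiency gain:
eta_1x = fwm_converted_power(1.0, 1.0) / 1.0
eta_2x = fwm_converted_power(1.0, 2.0) / 1.0
print(eta_2x / eta_1x)  # 4.0
```

This is why a resonance at the pump wavelength alone is sufficient: the enhancement enters squared, while the signal and emission wavelengths remain off-resonance and hence broadband.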
The study of mitochondria is a formidable challenge for super-resolution microscopy due to their dynamic nature and complex membrane architecture. In this issue, Ren et al. introduce HBmito Crimson, a fluorogenic and photostable mitochondrial probe for STED microscopy and investigate how mitochondrial dynamics influence the spatial organization of mitochondrial DNA.
Quasicrystal metasurfaces, two-dimensional artificial optical materials with subwavelength meta-atoms arranged in quasi-periodic tilings, have attracted extensive attention due to their novel optical properties. In a recent work, a dual-functional quasicrystal metasurface that simultaneously generates a diffraction pattern and a holographic image was experimentally demonstrated. The proposed method expands the manipulation dimensions available to multi-functional quasicrystal metasurfaces and may find important applications in microscopy, optical information processing, optical encryption, etc.
Advanced light management techniques can enhance the sunlight absorption of perovskite solar cells (PSCs). When located at the front, they may also act as a UV barrier, which is paramount for protecting the perovskite layer against UV-enabled degradation. Although it was recently shown that photonic structures such as Escher-like patterns can approach the theoretical Lambertian limit of light trapping, it remains challenging to implement UV protection in such diffractive structures while maintaining broadband absorption gains. Here, we propose a checkerboard (CB) tile pattern with designated UV photon conversion capability. Through combined optical and electrical modeling, we show that this photonic structure increases the photocurrent and power conversion efficiency of ultrathin PSCs by 25.9% and 28.2%, respectively. We further introduce a luminescent down-shifting encapsulant that converts UV irradiation into visible photons matching the solar cell absorption spectrum. To this end, experimentally obtained absorption and emission profiles of state-of-the-art down-shifting materials (i.e., lanthanide-based organic-inorganic hybrids) are used to predict the potential gains from harnessing the UV energy. We demonstrate that at least 94% of the impinging UV radiation can be effectively converted into the visible spectral range. Photonic protection from high-energy photons supports the market deployment of perovskite solar cell technology and may become crucial for space applications under AM0 illumination. By combining light trapping with luminescent down-shifting layers, this work reveals a potential photonic solution that overcomes UV degradation in PSCs while circumventing optical losses in ultrathin cells, thus improving both performance and stability.
Lens-free on-chip microscopy is a powerful and promising high-throughput computational microscopy technique due to its unique ability to create high-resolution images across the full field-of-view (FOV) of the imaging sensor. Nevertheless, most current lens-free microscopy methods have been designed for imaging only two-dimensional thin samples. Lens-free on-chip tomography (LFOCT) with a uniform, subpixel-level resolution across the entire FOV remains a critical challenge. In this paper, we demonstrate a new LFOCT technique and associated imaging platform based on wavelength-scanning Fourier ptychographic diffraction tomography (wsFPDT). Instead of using angularly variable illuminations, wsFPDT illuminates the sample with on-axis wavelength-variable illuminations ranging from 430 to 1200 nm. The corresponding under-sampled diffraction patterns are recorded, and an iterative ptychographic reconstruction procedure is then applied to fill the spectrum of the three-dimensional (3D) scattering potential and recover the sample's 3D refractive index (RI) distribution. The wavelength-scanning scheme not only eliminates the need for mechanical motion during image acquisition and for precise registration of the raw images but also secures a quasi-uniform, pixel-super-resolved imaging resolution across the entire FOV. With wsFPDT, we demonstrate high-throughput, billion-voxel 3D tomographic imaging with a half-pitch lateral resolution of 775 nm and an axial resolution of 5.43 μm across a large FOV of 29.85 mm2 and an imaging depth of >200 μm. The effectiveness of the proposed method is demonstrated by imaging various types of samples, including micro-polystyrene beads, diatoms, and mouse mononuclear macrophage cells.
The unique capability to reveal quantitative morphological properties, such as the area, volume, and sphericity index of single cells over large cell populations, makes wsFPDT a powerful quantitative and label-free tool for high-throughput biological applications.
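The wavelength-scanning idea above can be sketched numerically. Each illumination wavelength places the measured 2D spectrum on a different Ewald-sphere cap, so sweeping the wavelength, rather than the illumination angle, fills the axial dimension of the 3D spectrum. The medium index and the wavelength grid below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

# Assumed parameters for illustration only.
n_medium = 1.33                                  # hypothetical immersion index
wavelengths = np.linspace(430e-9, 1200e-9, 50)   # scan range from the abstract

# Illumination spatial frequency k = n / lambda (units: 1/m). On-axis
# illumination at each wavelength probes an Ewald-sphere cap whose axial
# offset scales with k, so the scan sweeps the axial spectrum without
# any mechanical motion or angular tilting.
k = n_medium / wavelengths

# The 430-1200 nm range spreads the accessible axial frequencies by
# roughly the wavelength ratio:
print(k.max() / k.min())  # ~2.79 (= 1200/430)
```

The design benefit stated in the abstract follows directly: because nothing moves between exposures, the raw frames stay registered, and the spectrum filling is set purely by the wavelength grid.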
Depth sensing plays a crucial role in various applications, including robotics, augmented reality, and autonomous driving. Monocular passive depth sensing techniques have come into their own owing to their cost-effectiveness and compact design, offering an alternative to expensive, bulky active depth sensors and stereo vision systems. While the light-field camera can address the defocus ambiguity inherent in 2D cameras and achieve unambiguous depth perception, it compromises spatial resolution and usually struggles with optical aberrations. In contrast, our previously proposed meta-imaging sensor1 overcomes these hurdles by reconciling the spatial-angular resolution trade-off and achieving multi-site aberration correction for high-resolution imaging. Here, we present a compact meta-imaging camera and an analytical framework for quantifying monocular depth sensing precision by calculating the Cramér–Rao lower bound of depth estimation. Quantitative evaluations reveal that the meta-imaging camera exhibits not only higher precision over a broader depth range than the light-field camera but also superior robustness against changes in the signal-background ratio. Moreover, both simulation and experimental results demonstrate that the meta-imaging camera provides precise depth information even in the presence of aberrations. Given its promising compatibility with other point-spread-function engineering methods, we anticipate that the meta-imaging camera will facilitate the advancement of monocular passive depth sensing in various applications.
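The Cramér–Rao lower bound (CRLB) analysis mentioned above can be sketched for intuition. For Poisson-distributed photon counts mu_i(z), the Fisher information is I(z) = sum_i (d mu_i/dz)^2 / mu_i, and the best achievable depth standard deviation is 1/sqrt(I(z)). The defocus model below, a 1D Gaussian PSF whose width grows linearly with defocus, and all its parameters are hypothetical stand-ins, not the paper's sensor model; the sketch only shows the generic mechanics of the bound.

```python
import numpy as np

def crlb_depth(z, photons=1e4, sigma0=1.0, a=0.5):
    """CRLB on depth (same units as z) for an assumed toy defocus model.

    Model: a 1D Gaussian PSF of width sigma(z) = sigma0 + a*|z|, with a
    fixed photon budget and Poisson noise. All parameters are illustrative.
    """
    x = np.linspace(-10.0, 10.0, 201)

    def counts(zz):
        sigma = sigma0 + a * abs(zz)
        mu = np.exp(-x**2 / (2.0 * sigma**2))
        return photons * mu / mu.sum()      # fixed total photon count

    mu = counts(z)
    dz = 1e-4
    dmu = (counts(z + dz) - mu) / dz        # numerical sensitivity d(mu)/dz
    fisher = np.sum(dmu**2 / mu)            # Poisson Fisher information
    return 1.0 / np.sqrt(fisher)            # lower bound on depth std


# Precision degrades away from focus as the PSF spreads:
print(crlb_depth(0.5) < crlb_depth(5.0))  # True
```

A bound of this form is what allows the quantitative comparison in the abstract: plugging in each camera's actual depth-dependent PSF yields its achievable precision over the depth range, without committing to any particular estimator.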
The lasing threshold of a conventional laser is the minimum input power required to initiate laser oscillation. It has been widely accepted that this threshold, which occurs around a unity intracavity photon number, can be eliminated from the input-output curve by making the so-called β parameter approach unity. Recent experiments, however, have revealed that even in this case the photon statistics still undergo a transition from coherent to thermal as the intracavity mean photon number decreases below unity. Since coherent output is available only above this diminished threshold, the long-sought promise of thresholdless lasers producing always-coherent light has become questionable. Here, we present an always-coherent thresholdless laser based on superradiance from two-level atoms, prepared in a quantum superposition state with a common phase, traversing a high-Q cavity. Superradiant lasing was observed without the conventional lasing threshold around the unity photon number, and the photon statistics remained nearly coherent even below it. The coherence improved as the coupling constant and the excited-state amplitude of the superposition state were reduced. Our results pave the way toward always-coherent thresholdless lasers based on more practical media such as quantum dots, nitrogen-vacancy centers, and doped ions in crystals.
In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, and then compensating for the resulting defects with deep learning models trained on large amounts of ideal, superior, or alternative data. This strategic approach has gained popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, which is critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexity, thereby making advanced imaging more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recover them through deep learning networks but also to enhance other crucial parameters in return, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data.
Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
Vertical cavity surface emitting lasers (VCSELs) have emerged as a versatile and promising platform for developing advanced integrated photonic devices and systems due to their low power consumption, high modulation bandwidth, small footprint, excellent scalability, and compatibility with monolithic integration. Combining these unique capabilities of VCSELs with the functionalities offered by micro/nano optical structures (e.g., metasurfaces) enables a variety of energy-efficient integrated photonic devices and systems with compact size, enhanced performance, and improved reliability and functionality. This review provides a comprehensive overview of state-of-the-art integrated photonic devices and systems based on VCSELs, including photonic neural networks, vortex beam emitters, holographic devices, beam deflectors, atomic sensors, and biosensors. By leveraging the capabilities of VCSELs, these integrated photonic devices and systems open up new opportunities in fields including artificial intelligence, large-capacity optical communication, imaging, and biosensing. Through this comprehensive review, we aim to provide a detailed understanding of the pivotal role played by VCSELs in integrated photonics and to highlight their significance in advancing the field toward efficient, compact, and versatile photonic solutions.