Optical differentiation offers ultrahigh speed and low power consumption compared with digital electronic computing. Various methods for single- and switchable-order differential operations have been extensively studied and applied in fields such as image processing and optical analog computing. Here, we report a parallel multiplexing scheme for optical spatial differentiation via a superposition of multiple complex amplitude filters. Isotropic and anisotropic first- to fourth-order differentiation multiplexing, as well as multiplexing of different differentiation types, are demonstrated both theoretically and experimentally. Multifunctional differential operations can be generated simultaneously, enabling the extraction of multiple features of amplitude and phase objects. This proof-of-principle work provides an approach for multiplexing optical spatial differentiation and a promising route toward efficient information processing.
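In Fourier-optics terms, an n-th order spatial derivative corresponds to multiplying a field's angular spectrum by (ik)^n, and a multiplexed filter can superpose several such transfer functions so one output carries several differential operations at once. The sketch below is illustrative numerics only, not the authors' optical implementation; the function names and the discrete FFT grid are assumptions.

```python
import numpy as np

def spatial_derivative(field, order=1, axis=0):
    """n-th order derivative along one axis via the Fourier transfer
    function (i*k)**n applied to the field's angular spectrum."""
    k = 2 * np.pi * np.fft.fftfreq(field.shape[axis])
    shape = [1, 1]
    shape[axis] = -1                      # broadcast k along chosen axis
    transfer = (1j * k) ** order
    spectrum = np.fft.fft2(field)
    return np.fft.ifft2(spectrum * transfer.reshape(shape))

def multiplexed_filter(field, orders=(1, 2)):
    """Superpose several differentiation transfer functions, the
    frequency-domain analogue of superposing complex amplitude filters."""
    return sum(spatial_derivative(field, n, axis=0) for n in orders)
```

A quick sanity check: a plane wave exp(ik0 x) maps to ik0 times itself under the first-order filter, so the transfer function can be verified against that analytic result.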
Title: Parallel multiplexing optical spatial differentiation based on a superposed complex amplitude filter
Authors: Xiangwei Wang, Ding Yan, Yizhe Chen, Tong Qi, Wei Gao
Optics and Lasers in Engineering, vol. 184, Article 108669. DOI: 10.1016/j.optlaseng.2024.108669. Published 2024-10-29.
Pub Date: 2024-10-29 | DOI: 10.1016/j.optlaseng.2024.108656
Qi Hu , Jiahao Yang , Jin Duan , Youfei Hao , Huateng Ding , Xinming Zhang , Wenbo Zhu , Weijie Fu
The geometric attenuation factor plays an important role in constructing the polarized bidirectional reflectance distribution function (pBRDF) model, but traditional geometric attenuation factor theory neglects the influence of microsurface height on the shadowing and masking of light. We therefore present a geometric attenuation factor related to the height of a discrete Gaussian microsurface, based on microfacet theory. Each sampled point on the microsurface corresponds to an element of an attenuation matrix, and values are assigned to the matrix elements by determining whether each sampling point is illuminated and observable. The numerical solution of the geometric attenuation factor for the 3D discrete Gaussian microsurface is then obtained by evaluating the attenuation matrix. The results show that the presented geometric attenuation factor is reasonable and effective, and its use improves the accuracy of the pBRDF model.
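The bookkeeping behind such a height-aware attenuation factor can be illustrated in one dimension: trace the incoming (or outgoing) ray back from each surface sample and mark the sample shadowed if any intervening sample rises above the ray; the attenuation factor is the fraction of samples that are both lit and observable. This 1D toy is an assumption-laden simplification of the paper's 3D matrix formulation, not its actual algorithm.

```python
import numpy as np

def visibility_mask(heights, dx, theta):
    """True where a surface sample is visible to a ray at zenith angle
    theta (source toward increasing index). A sample is occluded when
    any sample between it and the source rises above the ray, whose
    height grows by dx/tan(theta) per step (cotangent slope)."""
    n = len(heights)
    slope = dx / max(np.tan(theta), 1e-9)
    lit = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if heights[j] > heights[i] + (j - i) * slope:
                lit[i] = False
                break
    return lit

def geometric_attenuation(heights, dx, theta_in, theta_out):
    """Numerical attenuation factor: fraction of samples that are both
    illuminated (shadowing) and observable (masking)."""
    lit = visibility_mask(heights, dx, theta_in)
    seen = visibility_mask(heights, dx, theta_out)
    return np.mean(lit & seen)
```

As expected, a flat surface gives an attenuation factor of 1, and a tall spike at grazing incidence shadows every sample between itself and the source.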
Title: 3D geometric attenuation factor for discrete Gaussian microsurfaces
Optics and Lasers in Engineering, vol. 184, Article 108656.
We propose and demonstrate multifunctional tapered optical fiber tweezers (MTOFT) for capturing and manipulating microparticles. By employing wavelength division multiplexing, two wavelengths, 980 nm and 650 nm, are coupled into the fabricated tapered optical fiber probe (TOFP) to enable flexible capture, transport, and release of particles with different refractive indices. The 980 nm light excites the LP01 and LP11 modes, while the 650 nm light excites the LP01, LP11, LP21, and LP02 modes. Simulations and experiments demonstrate that the 980 nm beam captures yeast cells and ejects silica particles, whereas the 650 nm beam captures silica and ejects yeast. By combining multiple wavelengths, this structure enables flexible manipulation of different particles, opening a route toward combined particle transport and particle ejection functions.
Title: Selective manipulation of particles for multifunctional optical fiber tweezers with wavelength division multiplexing technology
Authors: Peng Chen, Liuting Zhou, Liguo Li, Yuting Dang, Chunlei Jiang
Optics and Lasers in Engineering, vol. 184, Article 108661. DOI: 10.1016/j.optlaseng.2024.108661. Published 2024-10-29.
Pub Date: 2024-10-25 | DOI: 10.1016/j.optlaseng.2024.108654
Limei Song , Tenglong Zheng , Yunpeng Li , Haozhen Huang , Yangang Yang , Xinjun Zhu , Zonghua Zhang
To address the issues of low tracking efficiency and poor localization accuracy of artificial coded targets under complex background interference conditions, a new method for dynamic tracking of coded targets is proposed. This method includes a lightweight feature tracker (CBAM-Slim-Net) for adaptive localization of coded circles and a large-capacity coded target solver (CSN-BSSCT). The CBAM-Slim-Net feature tracker achieves a detection accuracy of 0.987 with only 6.030 M parameters. In actual measurement environments with complex background interference, CSN-BSSCT can decode quickly and accurately, with a mean error of 0.036 mm in the three-dimensional Euclidean distance between coded circles in static measurement scenarios. Additionally, this method can analyze the motion trajectory of the target and perform dynamic stitching for 3D measurement from multiple perspectives, making it highly significant for applications in robot motion control and large-field-of-view 3D measurement.
Title: A novel dynamic tracking method for coded targets with complex background noise
Optics and Lasers in Engineering, vol. 184, Article 108654.
Pub Date: 2024-10-25 | DOI: 10.1016/j.optlaseng.2024.108652
Lei Deng , Guihua Liu , Huiming Huang , Yunxin Gong , Tianci Liu , Tao Song , Fuping Qin
Multi-view laser point cloud registration aided by circular markers suffers from cumulative error, caused by reconstruction error in the three-dimensional (3D) coordinates of marker centres and by estimation error in the local view transformation matrix. To address this, an Adaptive-Weighted Bundle Adjustment (AWBA) method is proposed. First, coarse registration is achieved using Euclidean distance matching and angle constraints. Then, an adaptive weighting strategy incorporates the accuracy of each circle centre's 3D coordinates into the computation of the transformation matrix, improving its estimation accuracy and suppressing single-view registration error. Next, the global marker coordinates are optimised: outliers are removed from the sets of homologous points using statistical methods, and the global coordinates are then solved iteratively from the remaining weighted homologous points. AWBA adopts a synchronous optimisation strategy: when a new view is acquired, the current view's transformation matrix is computed from the latest optimised global markers, and the global marker coordinates are refined continuously throughout the reconstruction to suppress cumulative error. Experimental results demonstrate that AWBA achieves state-of-the-art performance compared with other methods, with Absolute Error (AE) <0.094 mm for the standard ball radius, model Mean Absolute Distance (MAD) <0.093 mm, and Successful Registration Rate (SRR) greater than 93.010. AWBA enhances multi-view laser point cloud registration, with wide applications in industrial inspection, robotic navigation, and cultural heritage preservation.
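The building block underneath such an adaptive weighting scheme is a weighted least-squares rigid alignment: correspondences with more reliable reconstructed centres get larger weights when solving a view's rotation and translation. Below is a generic weighted Kabsch/Umeyama solver, a sketch of that step only, not the authors' full bundle-adjustment pipeline.

```python
import numpy as np

def weighted_rigid_transform(src, dst, w):
    """Find R, t minimizing sum_i w_i * ||R @ src_i + t - dst_i||^2.
    src, dst: (N, 3) corresponding points; w: (N,) positive weights."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(axis=0)        # weighted centroids
    mu_d = (w[:, None] * dst).sum(axis=0)
    H = (src - mu_s).T @ (np.diag(w) @ (dst - mu_d))
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With noiseless correspondences this recovers the ground-truth transform exactly; with noisy circle centres, down-weighting the less accurate ones reduces the single-view registration error that would otherwise accumulate.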
Title: Circular marker-aided multi-view laser point cloud registration based on adaptive-weighted bundle adjustment
Optics and Lasers in Engineering, vol. 184, Article 108652.
Pub Date: 2024-10-24 | DOI: 10.1016/j.optlaseng.2024.108645
Dmitry A. Rymov, Andrey S. Svistunov, Rostislav S. Starikov, Anna V. Shifrina, Vladislav G. Rodin, Nikolay N. Evtikhiev, Pavel A. Cheremkhin
Computer-generated holograms can create arbitrary light distributions through computation of light propagation. Generating 3D computer-generated holograms requires significant computation time, since the interactions between object planes must be calculated in addition to the planes themselves. In this paper we propose a neural-network-based method for 3D hologram generation that improves computation speed. The trained model can generate holograms with arbitrary light propagation parameters, such as wavelength and focal distance. The network was tested numerically and optically against the Gerchberg-Saxton (GS) algorithm using holograms with resolution up to 1024×1024 pixels. 3D holograms with 16 object planes were generated, which is, to our knowledge, the highest number achieved to date with a neural-network-based method. The experiments show that the proposed model creates holograms significantly faster than conventional iterative algorithms and, overall, yields better-quality images.
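The GS baseline the network is compared against iterates between the hologram plane (unit amplitude, free phase) and the image plane (target amplitude, free phase). A minimal single-plane version is sketched below; the 3D variant repeats the propagation step for every object plane, and the function name and FFT-based propagation model here are illustrative assumptions.

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    """Retrieve a phase-only hologram whose far field approximates
    target_amp, alternating amplitude constraints between planes."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(iterations):
        image = np.fft.fft2(np.exp(1j * phase))            # hologram -> image plane
        image = target_amp * np.exp(1j * np.angle(image))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(image))              # back-propagate, keep phase only
    return phase
```

Each iteration costs two FFTs per plane, which is why iterative methods become slow for multi-plane 3D holograms and a single forward pass through a trained network is attractive.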
Title: 3D-CGH-Net: Customizable 3D-hologram generation via deep learning
Optics and Lasers in Engineering, vol. 184, Article 108645.
The National Metrology Institute of Japan (NMIJ) has developed a high-precision sphericity calibration system utilizing a Fizeau interferometer and conducted a thorough evaluation of the system's measurement uncertainty. Two principal sources of uncertainty arise when using a spherical Fizeau interferometer. First, the accuracy of the system depends strongly on prior knowledge of the absolute profile of the reference sphere surface. To address this, the study established a straightforward yet effective calibration system for the reference sphere surface using the random ball test method. Second, the system's precision is affected by misalignment aberration, which occurs when the test sphere is displaced laterally or longitudinally from the confocal position relative to the reference sphere. This misalignment can introduce both high-order shape errors (misalignment aberrations) and low-order shape errors (alignment errors). By analytically considering an observation coordinate system on the camera plane, this study examines misalignment aberrations and suggests that the impact of misalignment should be determined experimentally for each reference sphere unit, owing to imperfections that misalignment may reveal. An in-depth uncertainty analysis of the misalignment, focused on the observation coordinate system on the camera plane, confirms theoretically that the uncertainties related to misalignment aberrations are smaller than in previous studies. Notably, a measurement uncertainty level of λ/100 is achievable while maintaining a broader misalignment tolerance than previously reported. The uncertainty of the calibration of the reference sphere unit's absolute profile was 4.2 nm, and the uncertainty of sample measurement was determined to be 4.6 nm with a misalignment tolerance of ±λ/10. This advancement marks a stride toward improving the accuracy and reliability of sphericity measurements, offering potential for widespread application in precision engineering and metrology.
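The random ball test rests on a simple averaging argument: each measurement is the interferometer's systematic reference error plus the ball's form error at a random orientation, so averaging many orientations cancels the ball term roughly as 1/sqrt(N) and leaves the reference error. The toy simulation below (all numbers and the noise model are illustrative assumptions, not NMIJ's data) demonstrates the principle.

```python
import numpy as np

def random_ball_test(measurements):
    """Average measurements of a ball taken in many random orientations;
    the orientation-dependent ball form error averages toward zero,
    leaving the systematic reference-surface error."""
    return np.mean(np.asarray(measurements), axis=0)

# Toy demo: fixed systematic reference error plus an orientation-dependent
# ball term modeled here as zero-mean noise per measurement.
rng = np.random.default_rng(1)
reference_error = 0.05 * np.sin(np.linspace(0, np.pi, 32))
stack = [reference_error + rng.normal(0.0, 0.5, 32) for _ in range(400)]
estimate = random_ball_test(stack)
```

With 400 simulated orientations the residual ball contribution drops well below the systematic term, which is the mechanism that lets the reference sphere's absolute profile be calibrated without a more accurate artifact.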
Title: Novel analysis of alignment error on spherical Fizeau interferometer and uncertainty evaluation of sphericity calibration system based on random ball test
Authors: Natsumi Kawashima, Yohan Kondo, Akiko Hirai, Youichi Bitou
Optics and Lasers in Engineering, vol. 184, Article 108646. DOI: 10.1016/j.optlaseng.2024.108646. Published 2024-10-24.
Pub Date: 2024-10-22 | DOI: 10.1016/j.optlaseng.2024.108651
Yang Ju , Yating Wang , Lingtao Mao , Zhangyu Ren , Qing Qiao
Quantifying the internal deformation of fractured solids such as rock masses under external loads, especially the deformation around internal fractures, is crucial for understanding and predicting their failure, yet it is very difficult to achieve with conventional experimental methods. In this study, we propose a novel internal deformation measurement method based on three-dimensional (3D) printing and digital speckle techniques. 3D printing was used to fabricate a transparent model with an embedded elliptical crack and to place a speckle pattern on an internal section of the model. The digital image correlation (DIC) method was then used to determine the internal deformation field of the fractured model. The measured displacements and strains of the internal sections agree with numerical simulation results, indicating that the proposed method can accurately determine deformation inside a 3D model. This built-in speckle method enables real-time, non-contact, and non-destructive observation of the internal deformation of a 3D solid model.
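The core DIC step matches a speckle subset from the reference image into the deformed image. An integer-pixel search by zero-normalized cross-correlation (ZNCC) is sketched below; real DIC codes add subpixel interpolation and subset shape functions, and the function and variable names here are illustrative.

```python
import numpy as np

def ncc_displacement(template, window):
    """Slide `template` over `window` and return the (dy, dx) offset
    maximizing zero-normalized cross-correlation (ZNCC)."""
    th, tw = template.shape
    tz = template - template.mean()
    tn = np.linalg.norm(tz)
    best, best_off = -np.inf, (0, 0)
    for dy in range(window.shape[0] - th + 1):
        for dx in range(window.shape[1] - tw + 1):
            patch = window[dy:dy + th, dx:dx + tw]
            pz = patch - patch.mean()
            denom = tn * np.linalg.norm(pz)
            score = (tz * pz).sum() / denom if denom > 0 else -np.inf
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off
```

ZNCC's mean subtraction and normalization make the match robust to uniform brightness and contrast changes between the reference and deformed images, which matters when imaging through a transparent printed model.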
Title: Experimental method for internal deformation measurement of 3D solids with embedded cracks based on 3D printing and digital speckle techniques
Optics and Lasers in Engineering, vol. 184, Article 108651.
Pub Date: 2024-10-22 | DOI: 10.1016/j.optlaseng.2024.108602
Ahmed Elaksher , Islam Omar , David Sanjenis , Jose R. Velasco , Mark Lao
This manuscript focuses on integrating optical images and laser point clouds acquired from low-cost UAVs into an automated system for generating urban city models. After pre-processing, we co-registered the two datasets using the direct linear transformation (DLT) model. Structure heights were estimated from the LiDAR dataset with a progressive morphological filter followed by bare-ground removal. Unsupervised and supervised image classification techniques were applied to a six-band image created from the optical and LiDAR datasets. After finding building footprints, we traced their edges, outlined their borderlines, and identified their geometric boundaries using several image processing and rule-based feature identification algorithms. Comparison between manually digitized and automatically extracted buildings showed a detection rate of about 92.3 % with an average of 7.4 % falsely identified areas for the six-band image; classifying only the RGB image detected about 63.2 % of the building pixels, with 25.3 % incorrectly identified. Moreover, the building detection rate with the six-band image was superior to that attained by traditional image segmentation of the LiDAR DEM alone. Shifts in the horizontal coordinates between corner points identified by a human operator and those detected by the proposed system were in the range of 10–15 cm, an improvement over traditional satellite and manned-aerial mapping systems, whose accuracies are limited by sensor constraints and platform altitude. These findings demonstrate the benefits of fusing multiple UAV remote sensing datasets over utilizing a single dataset for urban area mapping and 3D city modeling.
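The DLT model used for co-registration estimates a 3×4 projection relating 3D object coordinates to 2D image coordinates from point correspondences, by solving a homogeneous linear system with SVD. A generic sketch follows (the point sets, calibration numbers, and function names are illustrative, not the manuscript's data):

```python
import numpy as np

def dlt_projection_matrix(obj_pts, img_pts):
    """Estimate the 3x4 projection P (up to scale) from >= 6 non-coplanar
    3D-2D correspondences: stack two rows per point into A and take the
    right singular vector of the smallest singular value of A."""
    rows = []
    for (X, Y, Z), (u, v) in zip(obj_pts, img_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Apply a 3x4 projection to a 3D point and dehomogenize."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```

With noiseless correspondences the recovered matrix reprojects every control point exactly (up to numerical precision); in practice redundant, well-distributed control points temper the effect of measurement noise.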
Title: An automated system for 2D building detection from UAV-based geospatial datasets
Optics and Lasers in Engineering, vol. 184, Article 108602.
Pub Date: 2024-10-22
DOI: 10.1016/j.optlaseng.2024.108639
Zhiling Zhang , Yuecheng Shen , Shile Yang , Jiawei Luo , Zhengyang Wang , Daixuan Wu , Xiaodie Hu , Zhengqi Huang , Yu He , Mengdi Guo , Huajie Chen , Dalong Qi , Yunhua Yao , Lianzhong Deng , Zhenrong Sun , Shian Zhang
Optical fiber tweezers have proven highly effective in precisely manipulating and trapping microscopic particles. Most existing demonstrations use single-mode fibers, which require tapered ends and are limited to single-particle control. Although multimode fibers (MMFs) can generate arbitrary structured light fields by transmitting multiple spatial modes simultaneously, inherent mode crosstalk renders the transmitted light field uncontrollable. In this study, we demonstrate MMF optical tweezers capable of manipulating and trapping multiple microspheres by projecting structured light, achieving performance comparable to that of holographic optical tweezers. By employing neural networks to guide active wavefront shaping and mitigate mode crosstalk, we achieved precise projection of structured light fields. Our experimental setup, which includes a green laser and a digital micromirror device, enabled the generation of focused and structured light through the MMF. We successfully manipulated single microspheres along a defined path and trapped multiple microspheres simultaneously using ring-shaped structured light. These results highlight the versatility and potential of MMF optical tweezers for advanced optical manipulation applications.
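The core idea behind projecting structured light through an MMF, compensating the fiber's mode mixing so a desired field appears at the output, can be illustrated with a toy transmission-matrix inversion. The paper itself uses neural-network-guided wavefront shaping with a DMD; the random complex matrix and grid size below are hypothetical stand-ins for a measured fiber response.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in = n_out = 64
# random complex matrix as a stand-in for the fiber's measured
# mode-mixing transmission matrix
T = (rng.normal(size=(n_out, n_in)) +
     1j * rng.normal(size=(n_out, n_in))) / np.sqrt(2 * n_in)

# desired output: a ring-shaped pattern on an 8x8 grid (flattened)
y, x = np.mgrid[:8, :8]
r = np.hypot(x - 3.5, y - 3.5)
target = ((r > 1.5) & (r < 3.0)).astype(float).ravel()

# invert the matrix to find the input field whose transmission
# through the fiber reproduces the target structured field
E_in = np.linalg.pinv(T) @ target
E_out = T @ E_in  # field emerging from the fiber
```

With a well-conditioned square matrix the pseudoinverse recovers the target almost exactly; in practice, measurement noise and the DMD's binary amplitude modulation limit fidelity, which is what motivates the learned correction in the paper.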
{"title":"Active wavefront shaping for multimode fiber optical tweezers with structured light","authors":"Zhiling Zhang , Yuecheng Shen , Shile Yang , Jiawei Luo , Zhengyang Wang , Daixuan Wu , Xiaodie Hu , Zhengqi Huang , Yu He , Mengdi Guo , Huajie Chen , Dalong Qi , Yunhua Yao , Lianzhong Deng , Zhenrong Sun , Shian Zhang","doi":"10.1016/j.optlaseng.2024.108639","DOIUrl":"10.1016/j.optlaseng.2024.108639","url":null,"abstract":"<div><div>Optical fiber tweezers have proven highly effective in precisely manipulating and trapping microscopic particles. Most existing demonstrations use single-mode fibers, which require tapered ends and are limited to single-particle control. Although multimode fibers (MMFs) can generate arbitrary structured light fields by transmitting multiple spatial modes simultaneously, inherent mode crosstalk renders the transmitted light field uncontrollable. In this study, we demonstrate MMF optical tweezers capable of manipulating and trapping multiple microspheres by projecting structured light, achieving performance comparable to that of holographic optical tweezers. By employing neural networks to guide active wavefront shaping and mitigate mode crosstalk, we achieved precise projection of structured light fields. Our experimental setup, which includes a green laser and a digital micromirror device, enabled the generation of focused and structured light through the MMF. We successfully manipulated single microspheres along a defined path and trapped multiple microspheres simultaneously using ring-shaped structured light. 
These results highlight the versatility and potential of MMF optical tweezers for advanced optical manipulation applications.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108639"},"PeriodicalIF":3.5,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}