Freeform optics have been extensively utilized in optical systems over the last few decades. Compared with their refractive counterparts, freeform reflective optics can yield larger deflection angles and more flexible geometries in three-dimensional space. Moreover, they are dispersion-free and superior in thermal management. However, the design of freeform reflective optics in highly tilted geometries has not yet been well addressed. In this paper, we propose a general formulation for designing freeform off-axis reflective optics for precise illumination/intensity tailoring in highly tilted geometries. The superiority and effectiveness of the proposed method are verified by both numerical simulations and experimental results.
{"title":"Tailoring freeform off-axis reflective beam shaping systems","authors":"Zijun Zhang , Linyue Fang , Zhihui Ding , Fengxu Guo , Jiacheng Shi , Rengmao Wu","doi":"10.1016/j.optlaseng.2024.108665","DOIUrl":"10.1016/j.optlaseng.2024.108665","url":null,"abstract":"<div><div>Freeform optics have been extensively utilized in optical systems during the last few decades. Compared to their refractive counterparts, freeform reflective optics can yield larger angle of deflection, and more flexible geometry in three-dimensional space. Moreover, they are dispersion-free, and superior in thermal management. However, designing freeform reflective optics in highly tilted geometry is still not well addressed. In this paper, we propose a general formulation to design freeform off-axis reflective optics for precise illumination/intensity tailoring in highly tilted geometry. The superiority and effectiveness of the proposed method are verified by both numerical simulation and experimental results.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108665"},"PeriodicalIF":3.5,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-30, DOI: 10.1016/j.optlaseng.2024.108666
Zijin Deng, Changwei Li, Sijiong Zhang
The focal length, radius of curvature, and refractive index are key parameters of a spherical lens. Here, an approach for measuring lens parameters based on the Shack-Hartmann wavefront sensor (SHS) is proposed. First, the position of the reference point for measuring the focal length is determined from the spot array formed by the microlens array of the SHS using a figure-of-merit function, the least square sum of centroid shifts (LSSCS). The focal length is then estimated by measuring the radii of curvature of two spherical waves, each produced by the separation between the focal point of the lens and the determined reference point. Second, the radius of curvature is obtained as the difference between two measured positions of the lens, each corresponding to a collimated beam reflected from the lens and determined by the LSSCS figure of merit. Third, the refractive index is estimated from the lens maker's equation using the measured focal length and radius of curvature. Both a positive and a negative lens are tested by the proposed method. Experimental results show that the lens parameters measured by the proposed method are in good agreement with the nominal values. The proposed method does not require wavefront reconstruction, and is simple, accurate, and noise-resistant.
{"title":"Measurement of lens parameters based on Shack-Hartmann wavefront sensor","authors":"Zijin Deng , Changwei Li , Sijiong Zhang","doi":"10.1016/j.optlaseng.2024.108666","DOIUrl":"10.1016/j.optlaseng.2024.108666","url":null,"abstract":"<div><div>The focal length, radius of curvature, and refractive index are key parameters of a spherical lens. Here, an approach for measuring lens parameters based on the Shack-Hartmann wavefront sensor (SHS) is proposed. Firstly, the position of the reference point for measuring focal length is determined by the figure-of-merit function, called the least square sum of centroids shifts (LSSCS), from the spot array formed by the microlens array of SHS. The focal length is estimated by measuring radii of curvatures of two spherical waves. Each spherical wave is caused by the distance between the focal point of the lens and the determined reference. Secondly, the radius of curvature is the difference between two coordinate locations of the lens. Each location, corresponding to a collimated beam reflected from the lens, is determined by the figure-of-merit function LSSCS. Thirdly, the refractive index can be further estimated by lens maker's equation through the measured focal length and radius of curvature. A positive and a negative lens are both tested by the proposed method. Experimental results show that the lens parameters measured by the proposed method are in good agreement with the nominal values. The proposed method does not require wavefront reconstruction, and is simple, accurate and noise-resistant.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108666"},"PeriodicalIF":3.5,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142555633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-30, DOI: 10.1016/j.optlaseng.2024.108657
Shun Zhou, Yanbo Jin, Jiaji Li, Jie Zhou, Linpeng Lu, Kun Gui, Yanling Jin, Yingying Sun, Wanyuan Chen, Qian Chen, Chao Zuo
The tumor suppressor gene TP53 plays a crucial role in cancer diagnosis and prognosis. The gene encodes the tumor suppressor protein p53, which can be identified through immunohistochemical (IHC) staining in various cancers, including gastric carcinoma. However, IHC staining is more costly and therefore not as prevalent as routine hematoxylin-eosin (H&E) staining. In this study, we present a semi-supervised learning-based approach for immunological detection (SSID) of TP53 mutation directly on H&E-stained gastric tissue sections, with the aim of improving gastric cancer diagnosis. SSID is trained on a small set of annotated image pairs and a larger unannotated dataset of H&E-stained images. It detects regions showing strong p53 expression, which indicates TP53 mutation, and we validate the accuracy of our approach through both qualitative assessment (pathologists' average score of 2.22/3) and quantitative evaluation (e.g., an averaged mean Intersection-over-Union of 0.73). Moreover, we introduce Bayesian uncertainty to assess the credibility of the detected masks, aiming to prevent misdiagnosis and inappropriate treatment. Our results demonstrate that SSID can circumvent the expensive and laborious IHC staining procedure and enable the diagnosis and prognosis of gastric cancer through immunological detection of TP53 mutation.
{"title":"Uncertainty-assisted virtual immunohistochemical detection on morphological staining via semi-supervised learning","authors":"Shun Zhou , Yanbo Jin , Jiaji Li , Jie Zhou , Linpeng Lu , Kun Gui , Yanling Jin , Yingying Sun , Wanyuan Chen , Qian Chen , Chao Zuo","doi":"10.1016/j.optlaseng.2024.108657","DOIUrl":"10.1016/j.optlaseng.2024.108657","url":null,"abstract":"<div><div>Tumor suppressor gene TP53 plays a crucial role in cancer diagnosis and prognosis. The gene encodes the tumor suppressor protein p53, which can be identified through immunohistochemical (IHC) staining in various cancers, including gastric carcinoma. However, IHC staining is more costly and therefore not as prevalent as routine hematoxylin-eosin (H&E) staining. In this study, we present a semi-supervised learning-based approach for immunological detection (SSID) of TP53 mutation directly on H&E-stained gastric tissue sections, intending to improve gastric cancer diagnosis. SSID is trained on a small set of annotated image pairs and a larger unannotated dataset of H&E-stained images. It can detect the regions showing strong p53 expression, indicating TP53 mutation, and we validate the accuracy of our approach through both qualitative assessment (pathologists' average score of 2.22/3) and quantitative evaluation (e.g., averaged mean Intersection-over-Union of 0.73). Moreover, we introduce Bayesian uncertainty to assess the credibility of the detected masks, aiming to prevent misdiagnosis and inappropriate treatment. Our results demonstrate that SSID can circumvent the expensive and laborious IHC staining procedures and enable the diagnosis and prognosis of gastric cancer through immunological detection of TP53 mutation.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108657"},"PeriodicalIF":3.5,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-30, DOI: 10.1016/j.optlaseng.2024.108662
Xinjun Zhu, Ruiqin Tian, Limei Song, Hongyi Wang, Qinghua Guo
Estimating depth from light field images is a critical issue in light field applications. While learning-based methods have made significant strides in light field depth estimation, achieving high accuracy and high speed simultaneously remains a major challenge. This paper proposes a light field depth estimation network based on edge enhancement and feature modulation, which significantly improves depth estimation by emphasizing inter-view correlations while preserving image edge features. Specifically, to prioritize edge details, we introduce an Edge-Enhanced Cost Constructor (EECC) that integrates edge information with existing cost constructors to improve depth estimation performance in complex areas. Furthermore, most light field depth estimation networks utilize only sub-aperture images (SAIs) without considering the angular information inherent in the macro-pixel image (MacPI). To address this limitation, we propose the MacPI-Guided Feature Modulation (MGFM) module, which leverages angular information between different views in the MacPI to modulate the features of each view. Experimental results show that our method not only performs excellently on synthetic datasets but also generalizes well to real-world datasets, achieving a better balance between accuracy and computation speed.
{"title":"Edge enhancement and feature modulation based network for light field depth estimation","authors":"Xinjun Zhu , Ruiqin Tian , Limei Song , Hongyi Wang , Qinghua Guo","doi":"10.1016/j.optlaseng.2024.108662","DOIUrl":"10.1016/j.optlaseng.2024.108662","url":null,"abstract":"<div><div>Estimating depth from light field images is a critical issue in light field applications. While learning-based methods have made significant strides in light field depth estimation, achieving high accuracy and speed simultaneously remains a major challenge. This paper proposes a light field depth estimation network based on edge enhancement and feature modulation, which significantly improves depth estimation results by emphasizing inter-view correlations while preserving image edge features. Specifically, to prioritize edge details, we introduce an Edge-Enhanced Cost Constructor (EECC) that integrates edge information with existing cost constructors to improve depth estimation performance in complex areas. Furthermore, most light field depth estimation networks utilize only sub-aperture images (SAIs) without considering the inherent angular information in macro-pixel image (MacPI). To address this limitation, we propose the MacPI-Guided Feature Modulation (MGFM) module, which leverages angular information between different views in MacPI to modulate features at each view. Experimental results show that our method not only performs excellently on synthetic datasets but also demonstrates outstanding generalization on real-world datasets, achieving a better balance between accuracy and computation speed.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108662"},"PeriodicalIF":3.5,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optical differentiation has the advantages of ultrahigh speed and low power consumption over digital electronic computing. Various methods for single- and switchable-order differential operations have been extensively studied and applied in fields such as image processing and optical analog computing. Here, we report a parallel multiplexing scheme for optical spatial differentiation via a superposition of multiple complex amplitude filters. Isotropic and anisotropic first- to fourth-order differentiation multiplexing, as well as various other types of differentiation multiplexing, are demonstrated both theoretically and experimentally. Multifunctional differential operations can be generated simultaneously, realizing the extraction of multiple types of feature information from amplitude and phase objects. This proof-of-principle work provides an approach for multiplexing optical spatial differentiation and a promising route toward efficient information processing.
{"title":"Parallel multiplexing optical spatial differentiation based on a superposed complex amplitude filter","authors":"Xiangwei Wang, Ding Yan, Yizhe Chen, Tong Qi, Wei Gao","doi":"10.1016/j.optlaseng.2024.108669","DOIUrl":"10.1016/j.optlaseng.2024.108669","url":null,"abstract":"<div><div>Optical differentiation has the advantages of ultrahigh speed and low power consumption over digital electronic computing. Various methods for single and switchable-order differential operations have been extensively studied and applied in fields such as image processing and optical analog computing. Here, we report a parallel multiplexing scheme of optical spatial differentiations via a superposition of multiple complex amplitude filters. The isotropic and anisotropic first- to fourth-order differentiation multiplexing, as well as various types of differentiation multiplexing are demonstrated both theoretically and experimentally. Multifunctional differential operations can be generated simultaneously, realizing the extraction of multiple feature information about amplitude and phase objects. This proof-of-principle work provides an approach for multiplexing optical spatial differentiation and a promising possibility for efficient information processing.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108669"},"PeriodicalIF":3.5,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-29, DOI: 10.1016/j.optlaseng.2024.108656
Qi Hu, Jiahao Yang, Jin Duan, Youfei Hao, Huateng Ding, Xinming Zhang, Wenbo Zhu, Weijie Fu
The geometric attenuation factor plays an important role in the construction of the polarized bidirectional reflectance distribution function (pBRDF) model, but traditional geometric attenuation factor theory neglects the influence of microsurface height on the shadowing and masking of light. Therefore, we present a geometric attenuation factor related to the height of a discrete Gaussian microsurface, based on microfacet theory. Each sampled point on the microsurface corresponds to an element of an attenuation matrix, and values are assigned to the elements of the attenuation matrix by determining whether the sampling points are illuminated and observable. Finally, the numerical solution of the geometric attenuation factor of the 3D discrete Gaussian microsurface is obtained by evaluating the attenuation matrix. The results show that the presented geometric attenuation factor is reasonable and effective, and can be applied to the pBRDF model to improve its accuracy.
{"title":"3D geometric attenuation factor for discrete Gaussian microsurfaces","authors":"Qi Hu , Jiahao Yang , Jin Duan , Youfei Hao , Huateng Ding , Xinming Zhang , Wenbo Zhu , Weijie Fu","doi":"10.1016/j.optlaseng.2024.108656","DOIUrl":"10.1016/j.optlaseng.2024.108656","url":null,"abstract":"<div><div>The geometric attenuation factor plays an important role in the construction of polarized bidirectional reflection distribution function (pBRDF) model, but the traditional geometric attenuation factor theory neglects the influence of microsurface height on the shadowing and masking effects of light. Therefore, we present a geometric attenuation factor related to the height of the discrete Gaussian microsurface based on microfacet theory. We correspond each sampled point on the microsurface to an element in the attenuation matrix, and assign values to the elements of the attenuation matrix by determining whether the sampling points are illuminated or observable. Finally, we can get the numerical solution of the geometric attenuation factor of the 3D discrete Gaussian microsurface by calculating the attenuation matrix. The results show that the presented geometric attenuation factor is reasonable and effective, and can be better applied to pBRDF model to improve the accuracy of pBRDF model.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108656"},"PeriodicalIF":3.5,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose and demonstrate multifunctional tapered optical fiber tweezers (MTOFT) for capturing and manipulating microparticles. By employing wavelength division multiplexing technology, two wavelengths, 980 nm and 650 nm, are coupled into the optical fiber tweezers to achieve flexible capture, transport, and release of particles with different refractive indices using a fabricated tapered optical fiber probe (TOFP). The 980 nm light excites the LP01 and LP11 mode beams, while the 650 nm light excites the LP01, LP11, LP21, and LP02 mode beams. Simulations and experiments demonstrate that capture of yeast and ejection of silica are achieved with the 980 nm laser beam, whereas capture of silica and ejection of yeast are achieved with the 650 nm laser beam. This structure enables flexible manipulation of different particles by combining multiple wavelengths, opening up new possibilities for combining particle transport and particle ejection functions.
{"title":"Selective manipulation of particles for multifunctional optical fiber tweezers with wavelength division multiplexing technology","authors":"Peng Chen, Liuting Zhou, Liguo Li, Yuting Dang, Chunlei Jiang","doi":"10.1016/j.optlaseng.2024.108661","DOIUrl":"10.1016/j.optlaseng.2024.108661","url":null,"abstract":"<div><div>We propose and demonstrate a multifunctional tapered optical fiber tweezers (MTOFT) for capturing and manipulating micro particles. By employing the wavelength division multiplexing technology, two wavelengths, 980 nm and 650 nm, are coupled into optical fiber tweezers to achieve the flexibility of capture, transport and release of particles with different refractive indexes using fabricated tapered optical fiber probe (TOFP). Wherein, the 980 nm light wave excites LP<sub>01</sub> and LP<sub>11</sub> modes beams, and the 650 nm light wave excites LP<sub>01</sub>, LP<sub>11</sub>, LP<sub>21</sub> and LP<sub>02</sub> modes beams. Simulations and experiments demonstrated that the capture of yeast and the ejection of silica are achieved with the laser beam at 980nm wavelength. At 650 nm laser beam, the capture of silica and the ejection of yeast are achieved. This structure enables flexible manipulation of different particles by combining multiple wavelengths, expanding the direction of combining particle transport and particle emission functions.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108661"},"PeriodicalIF":3.5,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-25, DOI: 10.1016/j.optlaseng.2024.108654
Limei Song, Tenglong Zheng, Yunpeng Li, Haozhen Huang, Yangang Yang, Xinjun Zhu, Zonghua Zhang
To address the low tracking efficiency and poor localization accuracy of artificial coded targets under complex background interference, a new method for dynamic tracking of coded targets is proposed. This method includes a lightweight feature tracker (CBAM-Slim-Net) for adaptive localization of coded circles and a large-capacity coded target solver (CSN-BSSCT). The CBAM-Slim-Net feature tracker achieves a detection accuracy of 0.987 with only 6.030 M parameters. In actual measurement environments with complex background interference, CSN-BSSCT decodes quickly and accurately, with a mean error of 0.036 mm in the three-dimensional Euclidean distance between coded circles in static measurement scenarios. Additionally, the method can analyze the motion trajectory of the target and perform dynamic stitching for 3D measurement from multiple perspectives, making it highly valuable for applications in robot motion control and large-field-of-view 3D measurement.
{"title":"A novel dynamic tracking method for coded targets with complex background noise","authors":"Limei Song , Tenglong Zheng , Yunpeng Li , Haozhen Huang , Yangang Yang , Xinjun Zhu , Zonghua Zhang","doi":"10.1016/j.optlaseng.2024.108654","DOIUrl":"10.1016/j.optlaseng.2024.108654","url":null,"abstract":"<div><div>To address the issues of low tracking efficiency and poor localization accuracy of artificial coded targets under complex background interference conditions, a new method for dynamic tracking of coded targets is proposed. This method includes a lightweight feature tracker (CBAM-Slim-Net) for adaptive localization of coded circles and a large-capacity coded target solver (CSN-BSSCT). The CBAM-Slim-Net feature tracker achieves a detection accuracy of 0.987 with only 6.030 M parameters. In actual measurement environments with complex background interference, CSN-BSSCT can decode quickly and accurately, with a mean error of 0.036 mm in the three-dimensional Euclidean distance between coded circles in static measurement scenarios. Additionally, this method can analyze the motion trajectory of the target and perform dynamic stitching for 3D measurement from multiple perspectives, making it highly significant for applications in robot motion control and large-field-of-view 3D measurement.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108654"},"PeriodicalIF":3.5,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-25, DOI: 10.1016/j.optlaseng.2024.108652
Lei Deng, Guihua Liu, Huiming Huang, Yunxin Gong, Tianci Liu, Tao Song, Fuping Qin
In circular-marker-aided multi-view laser point cloud registration, reconstruction errors in the three-dimensional (3D) coordinates of the marker centres and estimation errors in the local view transformation matrices accumulate and degrade the registration result. To address this, an Adaptive-Weighted Bundle Adjustment (AWBA) method is proposed. First, coarse registration is achieved based on Euclidean distance matching and angle constraints. Then, an adaptive weighting strategy is introduced that incorporates accuracy information into the computation of the transformation matrix, accounting for the accuracy differences among the circle-centre 3D coordinates to improve the estimation accuracy and suppress single-view registration errors. Next, the global marker coordinates are optimised by first removing outliers from the sets of homologous points using statistical methods and then iteratively solving for the global marker coordinates using the remaining weighted homologous points, which improves the accuracy of the global markers. AWBA adopts a synchronous optimisation strategy: whenever the data of a new view are acquired, the current view transformation matrix is computed from the latest optimised global markers, and the global marker coordinates are continuously optimised throughout the reconstruction process to suppress backward cumulative error. Experimental results demonstrate that AWBA achieves state-of-the-art performance compared with other methods, with an Absolute Error (AE) below 0.094 mm for the standard ball radius, a model Mean Absolute Distance (MAD) below 0.093 mm, and a Successful Registration Rate (SRR) greater than 93.010. AWBA enhances the registration of multi-view laser point clouds and has a wide range of applications in industrial inspection, robotic navigation, and cultural heritage preservation.
{"title":"Circular marker-aided multi-view laser point cloud registration based on adaptive-weighted bundle adjustment","authors":"Lei Deng , Guihua Liu , Huiming Huang , Yunxin Gong , Tianci Liu , Tao Song , Fuping Qin","doi":"10.1016/j.optlaseng.2024.108652","DOIUrl":"10.1016/j.optlaseng.2024.108652","url":null,"abstract":"<div><div>To address the issue of cumulative error leading to poor registration results in multi-view laser point cloud registration aided by circular markers, caused by the reconstruction error of the three-dimensional (3D) coordinates of marker centres and local view transformation matrix estimation error, an Adaptive-Weighted Bundle Adjustment (AWBA) method is proposed. Firstly, coarse registration is achieved based on Euclidean distance matching and angle constraints. Then, an adaptive weighting strategy is introduced to incorporate the accuracy information into the computation of the transformation matrix to improve its estimation accuracy and inhibit the single-view registration error by considering the accuracy difference of the circle centre 3D coordinates. Next, global marker coordinates are optimised by first removing outliers from the sets of homologous points using statistical methods followed by iteratively solving the global marker coordinates using remaining weighted homologous points to improve the accuracy of the global marker. AWBA adopts a synchronous optimisation strategy to calculate the current view transformation matrix based on the latest optimised global markers when the data of a new view is acquired, and it continuously optimises the coordinates of the global marker throughout the reconstruction process to suppress the backward cumulative error. Experimental results demonstrate that AWBA enjoys state-of-the-art performance compared with other methods, with Absolute Error (AE) <0.094 mm for the standard ball radius, model Mean Absolute Distance (MAD) <0.093 mm, and Successful Registration Rate (SRR) greater than 93.010. AWBA can enhance the registration effect of multi-view laser point clouds with a wide range of applications in industrial inspection, robotic navigation and cultural heritage preservation.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108652"},"PeriodicalIF":3.5,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-24, DOI: 10.1016/j.optlaseng.2024.108645
Dmitry A. Rymov, Andrey S. Svistunov, Rostislav S. Starikov, Anna V. Shifrina, Vladislav G. Rodin, Nikolay N. Evtikhiev, Pavel A. Cheremkhin
Computer-generated holograms can create arbitrary light distributions through computation of light propagation. Hologram synthesis requires significant computation time, especially for 3D computer-generated holograms, where the interactions between different object planes must be calculated in addition to the planes themselves. In this paper, we propose a neural-network-based method for 3D computer-generated hologram synthesis that improves the hologram computation speed. The trained model can be used to generate holograms with arbitrary light propagation parameters. The neural network was numerically and optically tested against the GS algorithm using 3D computer-generated holograms with resolutions up to 1024×1024 pixels. 3D holograms with 16 object planes were generated, which is, to our knowledge, the highest number currently achieved with a neural-network-based method. The experiments show that the proposed model can create holograms significantly faster than some conventional algorithms and, overall, yields better-quality images. The trained network can also be applied with different propagation parameters, such as wavelength and focal distance.
{"title":"3D-CGH-Net: Customizable 3D-hologram generation via deep learning","authors":"Dmitry A. Rymov, Andrey S. Svistunov, Rostislav S. Starikov, Anna V. Shifrina, Vladislav G. Rodin, Nikolay N. Evtikhiev, Pavel A. Cheremkhin","doi":"10.1016/j.optlaseng.2024.108645","DOIUrl":"10.1016/j.optlaseng.2024.108645","url":null,"abstract":"<div><div>Computer generated holograms can create arbitrary light distributions through computation of light propagation. 3D-computer-generated-hologram generation requires significant computation time, especially so for 3D-computer-generated-holograms where it is important to calculate the interactions between different planes on top of the planes themselves. In this paper we propose a neural-network-based method for 3D-computer-generated-hologram generation in order to improve the hologram computation speed. The trained model can be used to generate holograms with arbitrary light propagation parameters. The neural network was numerically and optically tested against the GS algorithm using 3D-computer-generated-holograms with resolution up to 1024×1024 pixels. 3D-holograms with 16 object planes were generated, which is, to our knowledge, the highest number currently achieved with a neural-network-based-method. The experiments show that proposed model can create holograms significantly faster than some conventional algorithms and, overall, results better-quality images. The trained network can also be used using different propagation parameters, such as wavelength and focal distance.</div></div>","PeriodicalId":49719,"journal":{"name":"Optics and Lasers in Engineering","volume":"184 ","pages":"Article 108645"},"PeriodicalIF":3.5,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}