Close-range photogrammetry is an important technique for measuring the size, shape, and position of objects, valued for its convenience and high accuracy. In some extreme environments, however, conventional methods cannot meet measurement requirements, and many measurement tasks remain impossible with traditional approaches. This paper develops a new method for measuring object cross-sections based on a single-camera measurement model, in three main parts. First, two laser-fringe extraction methods are presented, and their precision and running time are compared by extracting laser fringes from images corrupted with different levels of Gaussian noise: the Steger method is more precise than curve fitting, while curve fitting is faster (a sketch of the curve-fitting step follows the citation below). Second, the traditional Autobar is improved to suit dark measurement environments: because retro-reflective targets and common black-and-white targets are hard to recognize without a strobe light or under weak illumination, the retro-reflective material of the traditional Autobar is replaced with LED lights, which are easily recognized in images without a strong flash during photography. Finally, a simulation experiment demonstrates the whole measurement process and validates the feasibility of the new single-camera measurement model. The simulation results confirm the model's feasibility; it greatly improves measurement efficiency and makes measurement work more flexible.
{"title":"New close-range photogrammetry method based on grain-lacking object","authors":"Xiaohui Yang, Zongchun Li","doi":"10.1117/12.900545","DOIUrl":"https://doi.org/10.1117/12.900545","url":null,"abstract":"Close-range photogrammetry is a significant method that can detect size, shape and position of objects for its conveniences and high accuracy. But in some extreme environment, the conventional method is difficult to match the request of measurement for there are still many measurement work can not complete using traditional method. This paper has development a new method to measure the section of objects using the single camera measurement model. In order to achieve the purpose, there are three main parts in this paper. Firstly, two extraction method of laser fringe is presented, their extraction precision and time is compared via extracting laser fringe from images with different Gauss noise. Steger method's precision is higher than curve fitting method. But curve fitting method cost less time than Steger method. Secondly, we have improved the traditional Autobar to adapt the dark measure environment. Considering retro-reflective targets and common black-white targets can not be recognized easily while without strobe light or lack of illumination, the retro-reflective material of traditional Autobar is replaced with LED light to be recognized easily in image without strong flicker when photographing. At last, a simulation experiment is taken to demonstrate the whole measurement process and validate the new single camera measurement model' feasibility. The final results of simulation experiments showed that the newly presented measurement model has its feasibility. This measurement model greatly improves the measurement efficiency and makes the measurement work more flexible.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114690925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The purpose of image fusion is to obtain, from multiple images, a single image that reflects the important information of all the originals. The contourlet transform not only shares the multiresolution, locality, and critical-sampling properties of wavelets, but also offers multiple decomposition directions and anisotropy, which wavelets lack. Energy is a statistical parameter that describes texture features, so we combine a maximum-energy fusion rule with the contourlet transform. For evaluation, entropy expresses the average amount of information; the standard deviation reflects the degree of dispersion of the image; and the average gradient reflects image clarity, the contrast of small details, and texture variation (a sketch of these metrics follows the citation below). Compared with the wavelet transform, Laplacian-pyramid fusion, weighted fusion, and the traditional contourlet transform, and judged by entropy, standard deviation, and average gradient, the experimental results of this algorithm for fusing infrared and visible images were better than those of the other algorithms.
{"title":"Image fusion base on improved contourlet transform","authors":"L. Wang, Chengjin Li, Xunjie Zhao, Xiaoli Liu","doi":"10.1117/12.895522","DOIUrl":"https://doi.org/10.1117/12.895522","url":null,"abstract":"The purpose of image fusion is to obtain an iamge from multiple images, this image should be able to reflect the important information of all original images. Contourlet transform, not only has characteristics of multiresolution locality and critical sampling which wavelet has but also has the characteristics of multiple decomposition directions and anisotropy which wavelets lacking. Energy is a statistical parameter of describe the texture feature. So we apply the Max Energy and Contourlet transform combined for image fusion. Entropy expreses the average amount of information. The distribution of standard deviation reflects the degree of dispersion of the image.The average gradient reflects the clarity of the image, the contrast of small details and the feature of texture transform. Contrast with wavelet transform, laplace transform, weighted transform, the traditional of contourlet transform, on evaluation by Entropy, standard deviation and average gradient, experimental results from this algorithms for fusion with infrared image and visual image were better than other algorithms.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"702 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114725107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a compact terahertz stop-band filter is presented, consisting of two parallel metallic surfaces corrugated with rectangular groove arrays of period a. When a metallic surface is periodically corrugated, surface electromagnetic modes can be excited by incident terahertz waves. These modes are surface plasmon polaritons with an effective plasma frequency controlled entirely by the surface geometry. Because of the corrugated grooves and the geometry-induced, highly confined surface plasmon polaritons, the fundamental mode of the parallel metallic plates splits into two modes with a band gap between them. The band-stop filtering functionality is realized by rejecting all frequencies in the gap. Simulation results calculated by the finite element method show that the proposed structure has a band gap ranging from 0.315c/a to 0.350c/a for groove depth d = 0.5a, groove width l = 0.5a, and gap width w = 2a between the two parallel metallic surfaces (a unit-conversion sketch follows the citation below). Transmission spectra also show zero transmission within the band-gap region, where no guided modes are supported. By varying the gap width w and groove depth d, different filtering bandwidths with different center frequencies can be achieved.
{"title":"A terahertz stop band filter based on two parallel metallic surfaces textured with groove arrays","authors":"Tao Li, Dongxiao Yang, Lei Rao, Song Xia","doi":"10.1117/12.901519","DOIUrl":"https://doi.org/10.1117/12.901519","url":null,"abstract":"In this paper, a compact terahertz stop band filter is presented, which consists of two parallel metallic surfaces corrugated with rectangular groove arrays of period a. When a metallic surface is periodically corrugated, surface electromagnetic modes can be excited by incident terahertz waves. These modes are surface plasmon polaritons with an effective plasma frequency controlled entirely by the surface geometry. Because of the corrugated grooves and geometry induced highly confined surface plasmon polaritons, the fundamental mode of the parallel metallic plates splits into two modes and there is a band gap between the two modes. The band stop filtering functionality is realized by rejecting all frequencies in the gap. Simulation results calculated by finite element method show that the proposed structure has a band gap ranging from 0.315c/a to 0.350c/a for groove depth d = 0.5a, groove width l = 0.5a and gap width of the two parallel metallic surfaces w = 2a. Transmission spectra also show zero transmission within the band gap region where no guided modes are supported. By varying the gap width w and groove depth d, different filtering bandwidths with different center frequencies can be achieved.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124458073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an experimental terahertz (THz) spectroscopic investigation of amino acids using an air-breakdown coherent detection (ABCD) system. Strong, ultra-broadband (0.1 to 10 THz) terahertz radiation is generated by two-color laser-induced air plasma and measured by coherent heterodyne detection. Broadband THz reflection spectra of L-Lysine (C6H14N2O2) and L-Arginine (C6H14N4O2) are obtained. To solve the phase-retrieval problem in reflective THz time-domain spectroscopy (RTDS), the absorption signatures of the materials are extracted directly from the first derivative of the relative reflectance with respect to frequency (a sketch of this step follows the citation below). The absorption features of the two amino acids are characterized in the 0.5 to 6 THz region; both amino acids show an absorption peak at 1.10 THz.
{"title":"Terahertz broadband spectroscopic investigations of amino acid","authors":"Dechong Zhu, Liangliang Zhang, Hua Zhong, Cunlin Zhang","doi":"10.1117/12.900765","DOIUrl":"https://doi.org/10.1117/12.900765","url":null,"abstract":"We present an experimental terahertz (THz) spectroscopic investigation of amino acid using an air-breakdown-coherent detection (ABCD) system. The strong and ultra-broadband (0.1 to 10THz) terahertz radiations generated by two-color laser induced air plasma and measured by coherent heterodyne detection. The broadband THz reflection spectra of L-Lysine (C6H14N2O2) and L-Arginine (C6H14N2O2) are obtained. To solve the phase-retrieval problem in RTDS, the absorption signatures of the materials are extracted directly from the first derivative of the relative reflectance with respect to frequency. The absorption features of the two amino acids are characterized in the 0.5~6 THz region. It is found that both the two amino acids have an absorption peak at 1.10 THz.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115866673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The signal-to-noise ratio (SNR) is an important quantitative parameter for evaluating the capability of spectrometers. The noise of the CMOS image sensor, stray light, and radiometric distortion all play important roles in a spectrometer's SNR performance. An Offner imaging spectrometer is designed and tested. From measurements of the spectrometer's spectral response, its SNR is calculated both by the traditional statistical method and by wavelet analysis (a sketch of both estimates follows the citation below). The two methods give similar results and provide useful information during spectrometer commissioning as well as for performance evaluation.
{"title":"Spectral response and SNR analysis of an Offner imaging spectrometer","authors":"Zhen-zhou Wu, Zhi-hong Ma","doi":"10.1117/12.900932","DOIUrl":"https://doi.org/10.1117/12.900932","url":null,"abstract":"The Signal-to-Noise Ratio (SNR) is an important quantitative parameter for evaluating the capability of spectrometers. The noises of CMOS image sensor, stray light and radiometric distortion play important roles in the spectrometer's SNR performance. An Offner imaging spectrometer is designed and tested. By measuring the spectrometer's spectral response, its SNR is calculated by the traditional statistical method and the wavelet analysis. Both methods give similar result and can provide useful information during the spectrometer commissioning as well as performance evaluation.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132522639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Methods for pose estimation of flying objects are introduced, among them the model-based optical method. We focus on the feature-description aspect of model-based methods. Chain codes, moments, and Fourier descriptors are used to describe 2D silhouettes or regions. Common issues and techniques for these three kinds of descriptors in model-based pose estimation, particularly representation and normalization, are analyzed (a normalization sketch follows the citation below). We build a Matlab pose-estimation framework to compare pose-estimation procedures using the different feature descriptors. A missile model in MilkShape 3D file format serves as the simulation object. Experiments on the descriptors' capabilities are carried out to show their differences.
{"title":"A comparative study on model-based pose estimation of flying objects with different feature descriptors","authors":"Hui-jun Tang, Jia Wen, Cai-wen Ma, Ren-kui Zhou","doi":"10.1117/12.900949","DOIUrl":"https://doi.org/10.1117/12.900949","url":null,"abstract":"Methods for pose estimation of flying objects are introduced. Among them is the model-based optical method. We focus on the feature description aspect in model-based method. Feature descriptors of chain codes, moments, Fourier descriptors are used for 2D silhouette or region description. Common issues and techniques, particularly representation and normalization, of such three kinds of descriptors in the application of model-based pose estimation are analyzed. We build a Matlab pose estimation framework to compare pose estimation procedures using different feature descriptors. A missile model of MilkShape 3D file format is created as the simulation object. Experiments concerning with the abilities of descriptors are proceeded to show the difference of these descriptors.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"8196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130472121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper studies specific applications of the rolling shutter of CMOS image sensors. First, it introduces the principles and characteristics of the global and rolling shutters of CMOS imagers and analyzes the impact of the rolling shutter on the measurement precision of CMOS-based imaging systems; an imaging experiment is carried out to test this analysis. Then, an original method is presented for computing the instantaneous 3D pose and velocity of fast-moving objects from a single view, exploiting the image deformations induced by the rolling shutter in CMOS image sensors. Finally, a general perspective projection model of a moving 3D point is presented (a sketch follows the citation below), and a solution to the pose- and velocity-recovery problem is described. The results indicate that distortions do appear, and that their degree is closely related to CMOS imager parameters such as integration time. By estimating the pose and velocity parameters, the error for moving objects can be minimized, with a calculation error under 2.5 percent. Experimental results with real data confirm the relevance of the approach. The resulting algorithm can turn a low-cost, low-power CMOS camera into an original velocity sensor.
{"title":"Research on shutter mode of CMOS imager and its application","authors":"Dan Liu, Zhi Liu, Wei Sun, Jing Zhang","doi":"10.1117/12.896734","DOIUrl":"https://doi.org/10.1117/12.896734","url":null,"abstract":"Research the specific applications of rolling shutter of CMOS image sensor with CMOS image sensor. First, this paper introduces the principle and characteristics of global shutter and rolling shutter of the CMOS imager, it analyzes the impact of rolling shutter on measurement precision of the imaging system based on CMOS imager. Imaging experiment is taken to test the analyses of the rolling shutter. Then, an original method for computing instantaneous 3D pose and velocity of fast moving objects using a single view is presented. It exploits image deformations induced by rolling shutter in CMOS image sensors. Finally, a general perspective projection model of a moving 3D point is presented. A solution for the pose and velocity recovery problem is then described. Results indicate that some aberrations appear in faith, and the aberration degree has close relations with some parameters of CMOS imager like integration. After experiments can minimize error in the case of moving objects by the pose and speed parameters, the calculation error is under 2.5 percent. Experimental results with real data confirm the relevance of the approach. The resulting algorithm enables to transform a CMOS low cost and low power camera into an original velocity sensor.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131543714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pose estimation relating two-dimensional (2D) images to a three-dimensional (3D) rigid object needs some known features to track. In practice, many algorithms perform this task with high accuracy, but all of them suffer when features are lost. This paper investigates pose estimation when some, or even all, of the known features are invisible. First, known features are tracked to compute the pose in the current and the next image. Second, unknown but good features to track are automatically detected in the current and the next image. Third, those unknown features that lie on the rigid object and can be matched between the two images are retained. Because of the rigid object's motion characteristics, the 3D positions of those unknown features can be solved from the object's pose at the two instants and the features' 2D positions in the two images, except in two cases: first, when camera and object have no relative motion and camera parameters such as focal length and principal point do not change between the two instants; second, when the two images share no common scene or contain no matched features. Finally, because the previously unknown features are now known, pose estimation can continue in subsequent images despite the loss of the original known features, by repeating the process above. The robustness of pose estimation with different feature-detection algorithms, namely Kanade-Lucas-Tomasi (KLT) features, the Scale-Invariant Feature Transform (SIFT), and Speeded-Up Robust Features (SURF), is compared, and the impact of different relative motions between the camera and the rigid object is discussed. Graphics Processing Unit (GPU) parallel computing is also used to extract and match hundreds of features for real-time pose estimation, which is hard to achieve on a Central Processing Unit (CPU). Compared with other pose-estimation methods, this new method can estimate the pose between camera and object even when some or all known features are lost, and it responds quickly thanks to GPU parallel computing. The method can be used widely in vision-guidance techniques to strengthen their intelligence and generality, and can also play an important role in autonomous navigation and positioning and in robotics in unknown environments. Simulation and experimental results demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and meets real-time requirements. Theoretical analysis and experiments show the method is reasonable and efficient. A pose-and-triangulation sketch follows the citation below.
{"title":"An anti-disturbing real time pose estimation method and system","authors":"Jian Zhou, Xiao-hu Zhang","doi":"10.1117/12.900564","DOIUrl":"https://doi.org/10.1117/12.900564","url":null,"abstract":"Pose estimation relating two-dimensional (2D) images to three-dimensional (3D) rigid object need some known features to track. In practice, there are many algorithms which perform this task in high accuracy, but all of these algorithms suffer from features lost. This paper investigated the pose estimation when numbers of known features or even all of them were invisible. Firstly, known features were tracked to calculate pose in the current and the next image. Secondly, some unknown but good features to track were automatically detected in the current and the next image. Thirdly, those unknown features which were on the rigid and could match each other in the two images were retained. Because of the motion characteristic of the rigid object, the 3D information of those unknown features on the rigid could be solved by the rigid object's pose at the two moment and their 2D information in the two images except only two case: the first one was that both camera and object have no relative motion and camera parameter such as focus length, principle point, and etc. have no change at the two moment; the second one was that there was no shared scene or no matched feature in the two image. Finally, because those unknown features at the first time were known now, pose estimation could go on in the followed images in spite of the missing of known features in the beginning by repeating the process mentioned above. The robustness of pose estimation by different features detection algorithms such as Kanade-Lucas-Tomasi (KLT) feature, Scale Invariant Feature Transform (SIFT) and Speed Up Robust Feature (SURF) were compared and the compact of the different relative motion between camera and the rigid object were discussed in this paper. Graphic Processing Unit (GPU) parallel computing was also used to extract and to match hundreds of features for real time pose estimation which was hard to work on Central Processing Unit (CPU). Compared with other pose estimation methods, this new method can estimate pose between camera and object when part even all known features are lost, and has a quick response time benefit from GPU parallel computing. The method present here can be used widely in vision-guide techniques to strengthen its intelligence and generalization, which can also play an important role in autonomous navigation and positioning, robots fields at unknown environment. The results of simulation and experiments demonstrate that proposed method could suppress noise effectively, extracted features robustly, and achieve the real time need. Theory analysis and experiment shows the method is reasonable and efficient.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131607621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A nonuniformity correction and radiometric calibration algorithm for infrared focal plane arrays is presented, combining two-point correction along the rim of a U-shaped blackbody. Infrared focal plane array (IRFPA) formats are growing ever larger; however, due to technical limitations and material defects in production, drift of the IRFPA response during operation is unavoidable. This drift leads to nonuniformity in thermal imaging systems, which has become an important factor limiting the practical effectiveness of thermal imaging equipment. To address the problems of traditional radiometric calibration and correction methods, we propose a dynamic infrared calibration and correction technique using a U-shaped blackbody. With the blackbody at low and high temperatures, two-point correction is first applied to the perimeter detectors (a sketch follows the citation below). Then, based on scene information and the shift between adjacent frames, a dedicated algebraic algorithm transports the correction parameters from the perimeter detectors to the interior, uncorrected ones. In this way, correction parameters for the whole field of view (FOV) are calculated. The temperature of the U-shaped blackbody is controllable, so dynamic infrared calibration can be performed after nonuniformity correction to compensate for drift of the original calibration table. A U-shaped blackbody is designed and an experimental platform is built to evaluate the algorithm. The U-shaped perimeter blackbody is designed to extend into place periodically so as to continuously update the correction parameters. The system proves able to achieve accurate two-point correction without covering the central FOV.
{"title":"Infrared nonuniformity correction and radiometric calibration technology using U-shaped blackbody","authors":"Weiqi Jin, Chongliang Liu, J. Xiu","doi":"10.1117/12.900122","DOIUrl":"https://doi.org/10.1117/12.900122","url":null,"abstract":"A nonuniformity correction and radiometric calibration algorithm for infrared focal plane array is presented, combined with two-point correction along the U-shaped blackbody rim. The format of Infrared Focal-Plane Array (IRFPA) is larger and larger now; however, due to technical limitations and material defects in production, the drift of the IRFPA response during their working is unavailable. It will leads to non-uniformity of the thermal imaging systems which has become an important affect element of the efficiency for the practical use of the thermal imaging equipments. Point to the problems of traditional radiation calibration and correction methods, we proposed a dynamic infrared calibration and correction technology using U-shaped blackbody. With the help of blackbody in low and high temperature, two-point correction is executed initially to perimeter detectors. Then based on the scene information and shift between adjacent frames, a special algebraic algorithm is proposed to transport correction parameters from perimeter detectors to those interior un-corrected ones. In this way, the correction parameters of the whole field of view (FOV) are calculated. The temperature of the U-shaped blackbody is controllable, so dynamic infrared calibration can be done after nonuniformity correction to modification the drift of the original calibration table. A U-shaped blackbody is designed and an experimental platform is built to evaluate the algorithm. The U-shaped perimeter blackbody is designed to be able to scale out periodically so as to continuously update the correction parameters. It proves to be able to achieve two-point correction for accuracy, without covering the central FOV.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132938917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-light-level target detection has received increasing attention in a variety of domains in recent years. In this paper we use a hybrid optoelectronic joint transform correlator (HOJTC) to detect and recognize low-light-level targets; it is considered one of the most effective methods for target detection. However, because of cluttered backgrounds and strong noise, low-light-level targets often cannot be detected successfully. To solve this problem efficiently, we first apply wavelet de-noising with the sym4 wavelet function. Edge extraction with the Sobel operator is then used to distinguish the useful target from the cluttered background (a sketch of this chain follows the citation below). Finally, the processed targets are fed into the HOJTC, which yields a clear pair of correlation peaks. To validate the method, many experiments on low-light-level targets were carried out, both by computer simulation and by optical experiment; a low-light-level image of a deer is presented as an example. The results show that, with wavelet de-noising and the Sobel operator, low-light-level targets can be detected successfully against cluttered backgrounds and strong noise.
{"title":"Detection research on low light level target with joint transform correlator","authors":"Su Zhang, Jiyang Shang, Chi Chen, Wensheng Wang","doi":"10.1117/12.899886","DOIUrl":"https://doi.org/10.1117/12.899886","url":null,"abstract":"Low light level target detection has received more attentions in varieties of domains in recent years. In this paper we use hybrid optoelectronic joint transform correlator(HOJTC) for detecting and recognizing low light level target. It is thought to be one of the most effective methods in target detection. But because of the cluttered backgrounds and strong noises of the low light level target, it always can not be detected successfully. In order to solve this problem efficiently, firstly we choose sym4 wavelet function to achieve the purpose of wavelet de-noising. After that edge extraction processing is used to distinguish the useful target from the cluttered backgrounds with Sobel operator. At last processed targets can be put into HOJTC to obtain a pair of correlation peaks clearly. To prove this method, many experiments of low light level targets have been implemented with computer simulation method and optical experiment method. As an example a low light level image \"deer\" is presented. The results show that the low light level target can be detected from the cluttered backgrounds and strong noises with wavelet de-noising and Sobel operator successfully.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133494336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}