{"title":"A CMOS imager with negative feedback pixel circuits and its applications","authors":"M. Ikebe, J. Motohisa","doi":"10.1117/12.900558","DOIUrl":"https://doi.org/10.1117/12.900558","url":null,"abstract":"We investigated a negative feedback method for adding functionality to a CMOS image sensor. Our sensor uses the method to set an arbitrary intermediate voltage on the photodiode capacitance while the pixel circuit is operating. The negative feedback reset functions as a noise cancellation technique and can obtain intermediate image data during charge accumulation. As one application, dynamic range compression is achieved by individually selecting pixels and either setting an intermediate voltage or performing quasi-holding for each pixel. Additionally, we achieved duplicated interlaced processing and were able to output frame-difference images without frame buffers. The experimental results obtained with a chip fabricated in a 0.25-μm CMOS process demonstrate that dynamic range compression and intra-frame motion detection are effective applications of negative feedback resetting.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"8194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130266912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An imaging geometry model of space camera","authors":"Zhi Zhang, Zhao-jun Liu","doi":"10.1117/12.900187","DOIUrl":"https://doi.org/10.1117/12.900187","url":null,"abstract":"The resolution of remote sensing images is sensitive to variations in the pitch angle of the satellite platform on orbit. Such variations distort the geometric quality of the image, warp it, and change its spatial distribution. Traditional methods for simulating geometric distortion are complex: they are based on an accurate physical model, and the pixel positions of the warped image are calculated pixel by pixel. In this paper, the topological mapping between the earth coordinate system and the optical remote sensor coordinate system is analyzed, and a method based on active points is proposed. The positions of the active points are computed through the transform between the two coordinate systems, and the remaining pixels are obtained by polynomial interpolation of the active points, so the simulated geometric distortion reaches sub-pixel precision. Finally, a frame of the image is generated. This transform greatly reduces the amount of computation. The geometry model contains the interior and exterior orientation elements of the imaging system on the satellite platform. The simulation experiment covers rotations about all three axes, and various angles of the three axes are included in the proposed model. As a result, the boundary condition under which motion error affects imaging quality is analyzed. The proposed geometry model not only preserves the physical information of the active points but also reduces the computational complexity of the transform between the earth and sensor coordinate systems. The result is beneficial for designing and optimizing the parameters of the satellite platform.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129760425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
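The active-point scheme summarized in the record above can be illustrated with a short sketch: the expensive rigorous transform is evaluated only at a sparse grid of active points, and a polynomial fitted to those points interpolates the warp for every remaining pixel. The `rigorous_warp` distortion below is an invented stand-in for the paper's earth-to-sensor transform, and the biquadratic basis is an assumption; only the sparse-evaluation-plus-interpolation structure follows the abstract.

```python
import numpy as np

# Hypothetical stand-in for the rigorous earth-to-sensor transform; in the
# paper this would use the interior/exterior orientation elements of the
# imaging system. The distortion terms are invented, purely for illustration.
def rigorous_warp(x, y):
    u = x + 0.002 * x * y / 100.0 + 0.5 * np.sin(y / 200.0)
    v = y + 0.003 * x
    return u, v

H, W = 256, 256

# 1) Evaluate the expensive transform only at a 9x9 grid of "active points".
gx, gy = np.meshgrid(np.linspace(0, W - 1, 9), np.linspace(0, H - 1, 9))
gx, gy = gx.ravel(), gy.ravel()
u, v = rigorous_warp(gx, gy)

# 2) Fit a biquadratic polynomial mapping (x, y) -> (u, v) to the active points.
def basis(x, y):
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

cu, *_ = np.linalg.lstsq(basis(gx, gy), u, rcond=None)
cv, *_ = np.linalg.lstsq(basis(gx, gy), v, rcond=None)

# 3) Interpolate the warp for every pixel with the cheap polynomial instead of
#    calling the rigorous transform pixel by pixel.
X, Y = np.meshgrid(np.arange(W, dtype=float), np.arange(H, dtype=float))
U = basis(X.ravel(), Y.ravel()) @ cu
V = basis(X.ravel(), Y.ravel()) @ cv

# Check the interpolated warp against the rigorous transform (sub-pixel goal).
u_true, v_true = rigorous_warp(X.ravel(), Y.ravel())
err = np.max(np.hypot(U - u_true, V - v_true))
print(f"max interpolation error: {err:.4f} px")
```

For this smooth toy distortion the polynomial stays well inside one pixel of the rigorous transform while evaluating it at only 81 of the 65,536 pixel positions, which is the computational saving the abstract claims.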
{"title":"An adaptive weak signal extraction algorithm based on four-element infrared detecting system","authors":"Ende Wang, F. Zhu, Yanghui Xiao, X. Tong, Dan Zhu","doi":"10.1117/12.899991","DOIUrl":"https://doi.org/10.1117/12.899991","url":null,"abstract":"The four-element infrared detecting system is a commonly used system for detecting a target against a sky background. However, one of its main application limits is the short detection distance. This paper therefore presents an adaptive weak-signal extraction algorithm with which the weak signal of a distant target can be extracted, so that the detection distance can be significantly extended.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123862363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Design and application of TEC controller Using in CCD camera","authors":"Yu-quan Gan, Wei Ge, Wei-dong Qiao, Di Lu, Juan Lv","doi":"10.1117/12.900697","DOIUrl":"https://doi.org/10.1117/12.900697","url":null,"abstract":"A thermoelectric cooler (TEC) is a solid-state heat pump based on the Peltier effect; it is small, light, and noiseless. The cooling capacity is proportional to the TEC working current when the temperature difference between the hot side and the cold side is stable, and the heating and cooling capacities can be controlled by changing the magnitude and direction of the current through the TEC. Thermoelectric cooling is therefore well suited to cooling CCD devices. E2V's scientific image sensor CCD47-20 integrates the TEC and the CCD in one package, which simplifies the electrical design. The software and hardware of a TEC controller are designed around the CCD47-20, which is packaged with an integral solid-state Peltier cooler. In the hardware, an 80C51 MCU is used as the CPU, and an 8-bit ADC and an 8-bit DAC form a closed-loop control system. The control quantity is computed by sampling the temperature from a thermistor inside the CCD, and the TEC is driven by a MOSFET constant-current driving circuit. In the software, improved control precision and convergence speed are obtained by using a PID control algorithm and tuning the proportional, integral, and differential coefficients. The results show that if the heat dissipation on the hot side of the TEC is good enough to keep its temperature stable, then with a sampling period of 2 seconds the temperature control rate is 5°C/min, the temperature difference can reach -40°C, and the control precision can achieve 0.3°C. When the hot-side temperature is stable at °C, the CCD temperature can reach -°C, and the thermal noise of the CCD is less than 1e-/pix/s. The control system restricts the dark-current noise of the CCD and increases the SNR of the camera system.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126213045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
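As a rough illustration of the closed-loop PID control described in the record above, the sketch below drives a toy first-order thermal model toward a -40°C setpoint with a 2-second sampling period. The plant constants, gains, and saturation limits are invented for the example (the paper's 80C51/ADC/DAC hardware is not modeled), and the anti-windup choice of integrating only while the driver is unsaturated is also an assumption.

```python
class PID:
    """Discrete PID controller with output limits and simple anti-windup."""
    def __init__(self, kp, ki, kd, dt, out_min, out_max):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.last_error = None

    def step(self, error):
        d = 0.0 if self.last_error is None else (error - self.last_error) / self.dt
        self.last_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * d
        if self.out_min < u < self.out_max:   # integrate only when unsaturated
            self.integral += error * self.dt
        return max(self.out_min, min(u, self.out_max))

def simulate(setpoint=-40.0, t_ambient=20.0, dt=2.0, steps=600):
    """Toy plant: drive current cools the CCD, heat leaks in from ambient."""
    pid = PID(kp=0.8, ki=0.05, kd=0.1, dt=dt, out_min=-5.0, out_max=5.0)
    temp = t_ambient
    for _ in range(steps):
        u = pid.step(setpoint - temp)          # error = setpoint - measured temp
        # cooling proportional to drive, plus leakage toward ambient
        temp += dt * (0.5 * u + 0.02 * (t_ambient - temp))
    return temp

final = simulate()
print(f"CCD temperature after 20 minutes of control: {final:.2f} °C")
```

The integral term is what lets the loop hold the setpoint despite the constant heat leak from the hot side, which is why the abstract's tuning of all three coefficients matters for the reported 0.3°C precision.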
{"title":"Modelling and simulation of virtual Mars scene","authors":"Siliang Sun, Ren Chen, L. Sun, Jie Yan","doi":"10.1117/12.900163","DOIUrl":"https://doi.org/10.1117/12.900163","url":null,"abstract":"Human cognition of the universe is still limited. Aiming at the impending needs of Mars exploration in the near future, and starting from a three-dimensional (3D) model of Mars, a Mars texture based on several real pictures was drawn, and the bump mapping technique was used to enhance the realism of the rendering. To improve the simulation fidelity, the composition of the Martian atmosphere was discussed, the cause of atmospheric scattering was investigated, and the scattering algorithm was studied and calculated. The reasons for the \"Red storm\" phenomena that frequently appear on Mars were detailed; these factors inevitably change the appearance of the celestial body. To address this problem, two methods depending on the position of the viewpoint (a point in space or a point on the ground) were proposed. In the first, the 3D model was divided into meshes to simulate the storm effect, and a formula allowing a mesh to rotate about an arbitrary axis was derived; to a certain extent this model guarantees the rendering result when Mars (with a \"Red storm\") is viewed from space. In the second, a 3D Martian terrain scene was built from pictures downloaded from \"Google Mars\", a particle system was used to simulate the storm effect, and the billboard technique was used for color correction and rendering compensation. Finally, a star-field simulation based on multi-texture blending was given. The experimental results show that these methods not only substantially increase fidelity but also guarantee real-time rendering. They can be widely used in the simulation of space battlefields and exploration tasks.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"64 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133071137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A quaternion pose determination solution based on monocular vision model","authors":"Jun Chen, Qiuzhi Zhang, Baoshang Zhang","doi":"10.1117/12.895926","DOIUrl":"https://doi.org/10.1117/12.895926","url":null,"abstract":"The relative three-dimensional position and orientation between two reference frames can be determined by pose-measurement methods based on a monocular vision model. Owing to the special T-shaped configuration of the target, defining the object's rotation matrix in terms of quaternion elements allows the problem to be represented by six nonlinear equations from which a closed-form solution can be obtained for all unknown parameters. The formulas for the elements of the rotation matrix were deduced from the coordinates of the feature points in the camera frame together with a converting vector, which was also introduced into the process as a correction term. An approximate pose can be found under the assumption of zero depth difference among all points in the camera frame; the converting vector is then initialized with the third row of the current rotation matrix. A rule of giving computational priority to the largest-magnitude element of the quaternion was proposed to ensure the convergence of the iteration loop, through which the final pose is reached in a few iterations. Simulation experiments show the validity of the solution, and the calculation precision is analyzed in detail. The orientation error shrinks as the distance from the camera to the target object decreases, and the algorithm performs well at short range, while the error grows with increasing inaccuracy in the feature correspondences.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133935003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
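For readers unfamiliar with the quaternion parameterization used in the record above, the sketch below builds the rotation matrix from a (scalar-first) unit quaternion via the standard formula and shows the third row that, per the abstract, initializes the converting vector. The example quaternion is arbitrary; the paper's six-equation closed-form solution itself is not reproduced.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix from a quaternion q = (q0, q1, q2, q3), scalar first.
    The quaternion is normalized before use, so any nonzero q is accepted."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])

q = np.array([0.9, 0.1, -0.2, 0.3])   # arbitrary example quaternion
R = quat_to_rot(q)
# The third row of R is what the abstract uses to initialize the converting vector.
print("third row:", R[2])
print("orthonormal:", np.allclose(R @ R.T, np.eye(3)))
```

Working in quaternion elements keeps the rotation parameterization minimal (four numbers, one constraint) while the resulting matrix is guaranteed orthonormal, which is why the six nonlinear equations admit a closed-form solution.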
{"title":"Small and dim point target detection in special feature space","authors":"Jinqiu Sun, Jun Zhou, Weijun Hu","doi":"10.1117/12.900302","DOIUrl":"https://doi.org/10.1117/12.900302","url":null,"abstract":"Deep space exploration is currently a hot research area, and the detection of small, dim point targets is one of the key technologies for space surveillance. In order to detect small and dim point targets without background compensation, we propose a new method that performs the detection in a specially designed feature space. The method uses the centroid location to represent each star and candidate target. Once reference stars are chosen, each image is mapped into a feature space built from the distances of all stars and candidate targets to the reference stars; stars and candidate targets can then be separated by a similarity measurement function according to their different motion characteristics, and the targets are finally confirmed by trajectory conjunction. The method can be widely used in visible-light and infrared space surveillance systems, both ground-based and space-based, and can also play an important role in space debris surveillance. The experimental results show that the algorithm fully accounts for the characteristics of fixed stars, dim moving point targets, and noise, and can effectively detect small, dim moving targets against a moving background at low SNR.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128732958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
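The feature space sketched in the record above, distances from every detected centroid to a few chosen reference stars, can be illustrated with a minimal simulation. Because inter-point distances are invariant under a global image shift, star features stay constant between frames while a target with its own motion stands out, without any explicit background compensation. All coordinates, the shift, and the target motion below are invented for illustration; the paper's similarity function and trajectory conjunction step are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
stars = rng.uniform(0, 512, size=(20, 2))        # fixed-star centroids, frame 1
target0 = np.array([100.0, 200.0])               # moving target, frame 1

# Frame 2: the whole field shifts (platform motion); the target also moves itself.
shift = np.array([3.0, -2.0])
stars2 = stars + shift
target1 = target0 + shift + np.array([4.0, 5.0]) # extra proper motion

refs1 = stars[:3]                                # reference stars, frame 1
refs2 = stars2[:3]                               # same stars, frame 2

def features(points, refs):
    """Distance of each point to every reference star: shift-invariant."""
    return np.linalg.norm(points[:, None, :] - refs[None, :, :], axis=2)

f1 = features(np.vstack([stars, target0[None]]), refs1)
f2 = features(np.vstack([stars2, target1[None]]), refs2)
change = np.abs(f1 - f2).max(axis=1)             # per-object feature change

# Stars barely move in feature space; the target stands out.
print("max star feature change:", change[:-1].max())
print("target feature change:  ", change[-1])
```

Thresholding the per-object feature change between frames is then enough to flag the target candidates that the trajectory-conjunction stage would confirm.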
{"title":"The influence of the space cryogenic environment on space payload","authors":"Fan-jiao Tan, Fu-nian Long","doi":"10.1117/12.900156","DOIUrl":"https://doi.org/10.1117/12.900156","url":null,"abstract":"A space payload is directly affected by the cryogenic space environment, high radiation, and other conditions when working on orbit; this is especially true for large space payloads, most of which face the harsh space environment directly. How to ensure that a space payload works effectively on orbit is a key problem in space payload application technology. Starting from the optical system of a payload, this paper discusses the influence of the cryogenic space environment on the optical system. Applying the laws of optical imaging and cryogenic thermodynamics, Zernike polynomial coefficients are used to describe the change of optical aberrations on orbit, and the point spread function is used to evaluate the change in the performance of the optical system. The optical system of the space payload is then optimized by means of temperature topology. The final computation shows that, compared with the general optical system of the same payload, the topologically optimized system tolerates a more severe temperature condition, with an axial temperature gradient on the optical surface of 5 to 10 K, while keeping the point spread function above 0.2~0.3. It can therefore be concluded that the temperature topological optimization method can improve the optical quality of a space payload by tolerating a larger temperature gradient, making it better suited to the deep cryogenic space environment. The results provide a model and data reference for extending the application conditions of space payloads.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121195054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pose estimation of non-cooperative spacecraft based on collaboration of space-ground and rectangle feature","authors":"Xi-kui Miao, F. Zhu, Yingming Hao","doi":"10.1117/12.900008","DOIUrl":"https://doi.org/10.1117/12.900008","url":null,"abstract":"In space attack and defense and in on-orbit servicing, pose estimation of unknown (non-cooperative) spacecraft is one of the most important prerequisites for attack, defense, and servicing operations. However, for non-cooperative spacecraft, the imaging characteristics of the features and the geometric constraints among them are unknown, so it is almost impossible to achieve target extraction, recognition, tracking, and pose solving automatically. To solve this technical problem, this paper proposes a method for determining the pose of non-cooperative spacecraft based on space-ground collaboration and a rectangle feature. It employs a single camera and rectangular features to perform the operations mentioned above automatically. Experimental results indicate that both the position errors and the attitude errors satisfy the requirements of pose estimation during tracking, approaching, and flying around the non-cooperative spacecraft. The method provides a new solution for pose estimation of non-cooperative targets and has potential significance for space-based attack and defense and on-orbit servicing.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116836054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on experimental measurement of regular objects bistatic LRCS and scaling relations","authors":"Xiang’e Han, Yan-jie Zhao, Xiangzhen Li","doi":"10.1117/12.900937","DOIUrl":"https://doi.org/10.1117/12.900937","url":null,"abstract":"In this paper, the bistatic laser radar cross section (LRCS) of Teflon spheres of different sizes is measured with a laser scattering automatic measurement system in the laboratory and with a bistatic LRCS measurement system outdoors, and the bistatic LRCS of standard Lambertian spheres of the same sizes is calculated. The experimental results show that the Teflon sphere exhibits obvious coherent scattering when the bistatic angle is smaller than 10°, while the experimental results coincide with the theoretical calculation when the bistatic angle is larger than 10°. Scaling relations of the bistatic LRCS are also studied experimentally; the relative error between the measured scaling relations and those calculated for a Lambertian sphere of the same size is smaller than 10 percent, which satisfies engineering requirements. It is demonstrated that the outdoor measurement method is valuable in engineering practice for measuring the bistatic LRCS of large objects.","PeriodicalId":355017,"journal":{"name":"Photoelectronic Detection and Imaging","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125859130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
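For context, the theoretical comparison in the record above rests on the standard diffuse-sphere result: when normalized to the monostatic value, the bistatic return of a Lambertian sphere falls off with the bistatic angle β as (sin β + (π − β) cos β)/π. The sketch below evaluates this pattern; whether the paper uses exactly this normalization is an assumption, but the shape shows why measurements within about 10° of monostatic sit so close to the β = 0 value.

```python
import math

def lambert_sphere_phase(beta):
    """Normalized bistatic pattern of a diffusely reflecting (Lambertian)
    sphere. beta is the bistatic angle in radians; the value is 1 at beta = 0
    (monostatic) and falls to 0 at beta = pi."""
    return (math.sin(beta) + (math.pi - beta) * math.cos(beta)) / math.pi

# Relative LRCS at a few bistatic angles, normalized to the monostatic value:
for deg in (0, 10, 30, 90, 180):
    b = math.radians(deg)
    print(f"beta = {deg:3d} deg -> relative LRCS = {lambert_sphere_phase(b):.3f}")
```

At β = 10° the pattern is still within roughly 1.5% of the monostatic value, so deviations observed below that angle in the experiments are attributable to coherent scattering rather than to the diffuse-sphere geometry.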