Early warning depends on reliable image exploitation: only if the applied detection and tracking algorithms work efficiently can the threat-approach alert be issued fast enough to ensure automatic initiation of the countermeasure. In order to evaluate the performance of those algorithms for a given electro-optical (EO) sensor system, test sequences need to be created as realistically and comprehensively as possible. Since both background and target signature depend on the environmental conditions, detailed knowledge of the meteorology and climatology is necessary. Trials for measuring these environmental characteristics serve as a solid basis, but may only represent the conditions during a rather short period of time. To represent the entire variation of meteorology and climatology that the future system will be exposed to, the application of comprehensive atmospheric modelling tools is essential. This paper introduces the atmospheric modelling tools currently used at Fraunhofer IOSB to simulate spectral background signatures in the infrared (IR) range. It is also demonstrated how those signatures are affected by changing atmospheric and climatic conditions. In conclusion, and with a special focus on the modelling of different cloud types, sources of error and limits are discussed.
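As a rough, self-contained illustration of why a background signature depends on the atmospheric state (a toy sketch, not the Fraunhofer IOSB modelling chain), the following Python snippet computes the Planck spectral radiance of a terrain background in the mid-wave IR and scales it with an assumed, wavelength-independent atmospheric transmittance. The temperatures and transmittances are placeholder assumptions; a real simulation would use a radiative-transfer code.

```python
import numpy as np

# Physical constants (SI units)
H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temperature_k):
    """Spectral radiance of a blackbody [W / (m^2 sr m)]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = np.expm1(H * C / (wavelength_m * KB * temperature_k))
    return a / b

# Mid-wave IR band, 3-5 um
wavelengths = np.linspace(3e-6, 5e-6, 200)

# Two hypothetical climatic conditions (temperature and path transmittance assumed)
for label, t_background, transmittance in [("temperate, clear", 288.0, 0.7),
                                           ("hot, hazy", 308.0, 0.4)]:
    at_sensor = transmittance * planck_radiance(wavelengths, t_background)
    band_integral = np.sum(at_sensor) * (wavelengths[1] - wavelengths[0])  # [W / (m^2 sr)]
    print(f"{label}: in-band background radiance {band_integral:.2f} W m^-2 sr^-1")
```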
{"title":"Simulation of atmospheric and terrestrial background signatures for detection and tracking scenarios","authors":"C. Schweitzer, K. Stein","doi":"10.1117/12.2196382","DOIUrl":"https://doi.org/10.1117/12.2196382","url":null,"abstract":"In the fields of early warning, one is depending on reliable image exploitation: Only if the applied detection and tracking algorithms work efficiently, the threat approach alert can be given fast enough to ensure an automatic initiation of the countermeasure. In order to evaluate the performance of those algorithms for a certain electro-optical (EO) sensor system, test sequences need to be created as realistic and comprehensive as possible. Since both, background and target signature, depend on the environmental conditions, a detailed knowledge of the meteorology and climatology is necessary. Trials for measuring these environmental characteristics serve as a solid basis, but might only constitute the conditions during a rather short period of time. To represent the entire variation of meteorology and climatology that the future system will be exposed to, the application of comprehensive atmospheric modelling tools is essential. This paper gives an introduction of the atmospheric modelling tools that are currently used at Fraunhofer IOSB to simulate spectral background signatures in the infrared (IR) range. It is also demonstrated, how those signatures are affected by changing atmospheric and climatic conditions. In conclusion – and with a special focus on the modelling of different cloud types - sources of error and limits are discussed.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127298756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. Bouma, P. Eendebak, K. Schutte, G. Azzopardi, G. Burghouts
Object recognition and localization are important for automatically interpreting video and allowing better querying of its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible with only a few training samples. Secondly, we show that novel objects can be added incrementally without retraining existing objects, which is important for fast interaction. Thirdly, we show that an unbalanced number of positive training samples leads to biased classifier scores that can be corrected by modifying weights. Fourthly, we show that detector performance can deteriorate due to hard-negative mining for similar or closely related classes (e.g., for Barbie and dress, because the doll is wearing a dress); this can be solved by our hierarchical classification. We introduce a new dataset, which we call TOSO, and use it to demonstrate the effectiveness of the proposed method for the localization and recognition of multiple objects in images.
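The abstract does not spell out the weight correction for unbalanced positives. Purely as an illustration of the general idea (invented data, not the authors' method), the sketch below trains a linear classifier on a set with very few positives and compares the raw decision scores with those obtained after rebalancing the per-class weights.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy 2-class problem: a new concept with few positives vs. many negatives
X_pos = rng.normal(loc=+1.0, scale=1.0, size=(10, 20))    # few positive samples
X_neg = rng.normal(loc=-1.0, scale=1.0, size=(500, 20))   # many negatives
X = np.vstack([X_pos, X_neg])
y = np.array([1] * len(X_pos) + [0] * len(X_neg))

# Unweighted training: decision scores are biased towards the majority class
plain = LogisticRegression(max_iter=1000).fit(X, y)

# Re-weighting the classes ("modifying weights") compensates for the imbalance
balanced = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)

X_test = rng.normal(loc=+1.0, scale=1.0, size=(5, 20))    # unseen positives
print("raw scores :", plain.decision_function(X_test).round(2))
print("rebalanced :", balanced.decision_function(X_test).round(2))
```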
{"title":"Incremental concept learning with few training examples and hierarchical classification","authors":"H. Bouma, P. Eendebak, K. Schutte, G. Azzopardi, G. Burghouts","doi":"10.1117/12.2194438","DOIUrl":"https://doi.org/10.1117/12.2194438","url":null,"abstract":"Object recognition and localization are important to automatically interpret video and allow better querying on its content. We propose a method for object localization that learns incrementally and addresses four key aspects. Firstly, we show that for certain applications, recognition is feasible with only a few training samples. Secondly, we show that novel objects can be added incrementally without retraining existing objects, which is important for fast interaction. Thirdly, we show that an unbalanced number of positive training samples leads to biased classifier scores that can be corrected by modifying weights. Fourthly, we show that the detector performance can deteriorate due to hard-negative mining for similar or closely related classes (e.g., for Barbie and dress, because the doll is wearing a dress). This can be solved by our hierarchical classification. We introduce a new dataset, which we call TOSO, and use it to demonstrate the effectiveness of the proposed method for the localization and recognition of multiple objects in images.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128689237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with a Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities, while it also captures singularities along lines and edges. The proposed system uses the Finite Ridgelet Transform to represent the multispectral palmprint image, which is then modeled by kernel associative memories; a Bayesian classifier is used for recognition. Finally, the recognition scheme is thoroughly tested on the CASIA multispectral palmprint benchmark database. The experimental results demonstrate the robustness of the proposed system across the different wavelengths of the palm images.
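The paper's exact H-KAM formulation is not given here. As a hedged illustration of one common way to build a kernel associative memory (a kernel-ridge auto-associative recall per class, classified by reconstruction error), the sketch below uses invented feature vectors in place of FRIT coefficients; all names and parameters are assumptions for the example.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelAssociativeMemory:
    """Auto-associative memory: stores class prototypes and recalls a query
    as a ridge-regularised, kernel-weighted combination of them."""

    def __init__(self, prototypes, gamma=0.5, ridge=1e-3):
        self.X = np.asarray(prototypes, dtype=float)
        self.gamma = gamma
        K = gaussian_kernel(self.X, self.X, gamma)
        self.Kinv = np.linalg.inv(K + ridge * np.eye(len(self.X)))

    def recall(self, q):
        k = gaussian_kernel(self.X, q[None, :], self.gamma)[:, 0]
        return self.X.T @ (self.Kinv @ k)

# One memory per class; classify a query by the smallest reconstruction error,
# which could then feed a (here omitted) Bayesian decision rule.
rng = np.random.default_rng(1)
class_a = rng.normal(0.0, 0.3, size=(15, 8))   # toy stand-ins for FRIT feature vectors
class_b = rng.normal(1.0, 0.3, size=(15, 8))
memories = {"A": KernelAssociativeMemory(class_a), "B": KernelAssociativeMemory(class_b)}

query = rng.normal(1.0, 0.3, size=8)           # drawn from class B
errors = {c: np.linalg.norm(query - m.recall(query)) for c, m in memories.items()}
print("best match:", min(errors, key=errors.get), errors)
```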
{"title":"FRIT characterized hierarchical kernel memory arrangement for multiband palmprint recognition","authors":"D. Kisku, Phalguni Gupta, J. Sing","doi":"10.1117/12.2190204","DOIUrl":"https://doi.org/10.1117/12.2190204","url":null,"abstract":"In this paper, we present a hierarchical kernel associative memory (H-KAM) based computational model with Finite Ridgelet Transform (FRIT) representation for multispectral palmprint recognition. To characterize a multispectral palmprint image, the Finite Ridgelet Transform is used to achieve a very compact and distinctive representation of linear singularities while it also captures the singularities along lines and edges. The proposed system makes use of Finite Ridgelet Transform to represent multispectral palmprint image and it is then modeled by Kernel Associative Memories. Finally, the recognition scheme is thoroughly tested with a benchmarking multispectral palmprint database CASIA. For recognition purpose a Bayesian classifier is used. The experimental results exhibit robustness of the proposed system under different wavelengths of palm image.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127153067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Jonsson, Julia Hedborg, M. Henriksson, L. Sjöqvist
Time-correlated single-photon counting (TCSPC) is a laser radar technique that can provide range profiling with sub-centimetre range resolution. The method relies on accurate time measurements between a laser pulse sync signal and the registration of single-photon detections of photons reflected from an object. The measurement is repeated many times, and a histogram of arrival times is computed to gain information about surfaces at different distances within the field of view of the laser radar. TCSPC is a statistical method that requires an integration time, and therefore the range profile of a non-stationary object (target) will be corrupted. However, by dividing the measurement into time intervals much shorter than the total acquisition time and cross-correlating the histogram from each time interval with that of the first interval, it is possible to calculate how the target has moved. The distance as a function of time was fitted to a polynomial function. This result was used to calculate a distance correction for every single detection event, and the equivalent stationary histogram was reconstructed. Series of measurements on objects with constant or non-linear velocities up to 0.5 m/s were performed and compared with stationary measurements. The results show that it is possible to reconstruct range profiles of moving objects with this technique. Reconstruction of the signal requires no prior information about the original range profile, and the instantaneous and average velocities of the object can be calculated.
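The following is a minimal numerical sketch of the correction procedure described above, with invented parameters (not the authors' implementation): detections from a single receding surface are simulated, the drift is estimated by cross-correlating per-interval histograms against the first interval, a polynomial is fitted to drift versus time, every detection is corrected, and the motion-compensated histogram is rebuilt.

```python
import numpy as np

rng = np.random.default_rng(0)

BIN = 0.005        # range bin size [m] (sub-centimetre)
T_TOTAL = 10.0     # total acquisition time [s]
N_INT = 50         # number of short time intervals
V = 0.3            # target velocity [m/s] (unknown to the algorithm)

# Simulated single-photon detections: (time stamp, measured range) pairs
t = rng.uniform(0.0, T_TOTAL, size=20000)
r = 15.0 + V * t + rng.normal(0.0, 0.01, size=t.size)   # moving surface + timing jitter

edges = np.arange(14.0, 22.0, BIN)
centres = 0.5 * (edges[:-1] + edges[1:])

def hist_of(interval):
    sel = (t >= interval[0]) & (t < interval[1])
    return np.histogram(r[sel], bins=edges)[0]

# Cross-correlate each interval's histogram with the first interval's histogram
bounds = np.linspace(0.0, T_TOTAL, N_INT + 1)
h0 = hist_of((bounds[0], bounds[1]))
t_mid, shift = [], []
for i in range(N_INT):
    hi = hist_of((bounds[i], bounds[i + 1]))
    xc = np.correlate(hi, h0, mode="full")
    lag = np.argmax(xc) - (len(h0) - 1)          # lag in bins relative to interval 0
    t_mid.append(0.5 * (bounds[i] + bounds[i + 1]))
    shift.append(lag * BIN)                      # lag in metres

# Fit drift vs. time with a polynomial and correct every detection event
coeffs = np.polyfit(t_mid, shift, deg=2)
r_corrected = r - np.polyval(coeffs, t)
h_static = np.histogram(r_corrected, bins=edges)[0]

print("estimated velocity ~", round(np.polyval(np.polyder(coeffs), T_TOTAL / 2), 3), "m/s")
print("corrected peak at  ~", round(centres[np.argmax(h_static)], 3), "m")
```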
{"title":"Reconstruction of time-correlated single-photon counting range profiles of moving objects","authors":"P. Jonsson, Julia Hedborg, M. Henriksson, L. Sjöqvist","doi":"10.1117/12.2194859","DOIUrl":"https://doi.org/10.1117/12.2194859","url":null,"abstract":"Time-correlated single-photon counting (TCSPC) is a laser radar technique that can provide range profiling with subcentimetre range resolution. The method relies on accurate time measurements between a laser pulse sync signal and the registration of a single-photon detection of photons reflected from an object. The measurement is performed multiple times and a histogram of arrival times is computed to gain information about surfaces at different distances within the field of view of the laser radar. TCSPC is a statistic method that requires an integration time and therefore the range profile of a non-stationary object (target) will be corrupted. However, by dividing the measurement into time intervals much shorter than the total acquisition time and cross correlating the histogram from each time interval it is possible calculate how the target has moved relative to the first time interval. The distance as a function of time was fitted to a polynomic function. This result was used to calculate a distance correction of every single detection event and the equivalent stationary histogram was reconstructed. Series of measurements on the objects with constant or non-linear velocities up to 0.5 m/s were performed and compared with stationary measurements. The results show that it is possible to reconstruct range profiles of moving objects with this technique. Reconstruction of the signal requires no prior information of the original range profile and the instantaneous and average velocities of the object can be calculated.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128868359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Grönwall, G. Tolt, Patrik Lif, H. Larsson, Fredrik Bissmarck, M. Tulldahl, M. Henriksson, P. Wikberg, Mirko Thorstensson
This paper summarizes ongoing work on 3D sensing and imaging with laser sensors carried by unmanned aerial vehicles (UAVs). We study sensor concepts, UAVs suitable for carrying the sensors, and signal processing for mapping and target detection applications. We also perform user studies together with the Swedish Armed Forces to evaluate usage in their mission cycle, and interviews to clarify how to present data. Two ladar sensor concepts for mounting on UAVs are studied. The discussion is based on the known performance of commercial ladar systems today and the predicted performance in future UAV applications. The small UAV is equipped with a short-range scanning ladar; the system is aimed at quick situational analysis of small areas and at documentation of a situation. The large UAV is equipped with a high-performing photon-counting ladar with a matrix detector; its purpose is to support large-area surveillance, intelligence, and mapping operations. Based on these sensors and their performance, signal and image processing support for data analysis is analyzed. Generated data amounts are estimated, and demands on data storage capacity and data transfer are analyzed. We have tested the use of 3D mapping together with military rangers, both in the planning phase and as a last-minute intelligence update on the target; feedback from these tests is presented. We are also conducting interviews with various military professions to gain a better understanding of how 3D data are used and interpreted, and we discuss approaches for presenting data from a 3D imaging sensor to a user.
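The abstract mentions estimating generated data amounts and link demands. As a back-of-the-envelope illustration only (all figures below are assumptions, not values from the paper), the snippet estimates raw point-cloud volume and the downlink rate for a given point rate and mission time.

```python
# Rough point-cloud data-volume estimate (all figures are illustrative assumptions)
points_per_second = 700_000      # assumed ladar point rate
bytes_per_point = 16             # x, y, z as floats plus intensity / time stamp
flight_time_s = 30 * 60          # assumed 30-minute mission

total_bytes = points_per_second * bytes_per_point * flight_time_s
print(f"raw data per mission : {total_bytes / 1e9:.1f} GB")
print(f"required link rate   : {points_per_second * bytes_per_point * 8 / 1e6:.0f} Mbit/s")
```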
{"title":"3D sensing and imaging for UAVs","authors":"C. Grönwall, G. Tolt, Patrik Lif, H. Larsson, Fredrik Bissmarck, M. Tulldahl, M. Henriksson, P. Wikberg, Mirko Thorstensson","doi":"10.1117/12.2192834","DOIUrl":"https://doi.org/10.1117/12.2192834","url":null,"abstract":"This paper summarizes on-going work on 3D sensing and imaging for unmanned aerial vehicles UAV carried laser sensors. We study sensor concepts, UAVs suitable for carrying the sensors, and signal processing for mapping and target detection applications. We also perform user studies together with the Swedish armed forces, to evaluate usage in their mission cycle and interviews to clarify how to present data.\u0000Two ladar sensor concepts for mounting in UAV are studied. The discussion is based on known performance in commercial ladar systems today and predicted performance in future UAV applications. The small UAV is equipped with a short-range scanning ladar. The system is aimed for quick situational analysis of small areas and for documentation of a situation. The large UAV is equipped with a high-performing photon counting ladar with matrix detector. Its purpose is to support large-area surveillance, intelligence and mapping operations. Based on these sensors and their performance, signal and image processing support for data analysis is analyzed. Generated data amounts are estimated and demands on data storage capacity and data transfer is analyzed.\u0000We have tested the usage of 3D mapping together with military rangers. We tested to use 3D mapping in the planning phase and as last-minute intelligence update of the target. Feedback from these tests will be presented. We are performing interviews with various military professions, to get better understanding of how 3D data are used and interpreted. We discuss approaches of how to present data from 3D imaging sensor for a user.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131320220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Gustafsson, R. Persson, F. Gustafsson, Folke Berglund, Julia Hedborg, Jonas Malmquist
The reduction of the laser hazard distance using atmospheric attenuation has been tested with a series of lidar measurements performed at the Vidsel Test Range, Vidsel, Sweden. The objective was to find situations with a low level of aerosol backscatter, and hence a low extinction coefficient, during this campaign, since the lowest atmospheric attenuation gives the highest ocular hazard. The work included building a ground-based backscatter lidar, performing a series of measurements, and analyzing the results. The measurements were performed during the period June to November 2014. The lidar measurements showed, on several occasions, very low atmospheric attenuation as a function of height up to an altitude of at least 10 km. The lowest aerosol backscatter coefficient that can be measured with this instrument is less than 0.3·10⁻⁷ m⁻¹ sr⁻¹. Assuming an aerosol lidar ratio between 30 and 100 sr, this leads to an aerosol extinction coefficient of about 0.9 to 3·10⁻⁶ m⁻¹. Using a designator laser as an example, with wavelength 1064 nm, power 0.180 W, pulse length 15 ns, PRF 11.5 Hz, exposure time of 10 s, and beam divergence of 0.08 mrad, the nominal ocular hazard distance (NOHD) is 48 km. With the measured aerosol attenuation, and assuming a molecular extinction coefficient of 5·10⁻⁶ m⁻¹ (calculated using MODTRAN (Ontar Corp.) with no aerosol), the laser hazard distance is reduced by 51 to 58 %, depending on the lidar ratio assumption. The conclusion from this work is that reducing the laser hazard distance by including atmospheric attenuation in the NOHD calculations is possible, but it should be combined with measurements of the attenuation.
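The 48 km and 51 to 58 % figures above come from a full laser-safety evaluation that is not reproduced here. Purely to illustrate the mechanism, the sketch below numerically solves the simplified hazard-distance condition, irradiance 4·Φ·e^(−μR) / (π·(a + θ·R)²) = MPE, for R with and without extinction. The MPE, aperture, and power values are placeholder assumptions (not the IEC 60825 evaluation), so the resulting distances and reductions will not match the abstract's numbers.

```python
import math

# Illustrative (assumed) parameters
PHI = 0.180          # laser power [W]
MPE = 1.0e-2         # assumed maximum permissible exposure [W/m^2] for a 10 s exposure
A0 = 0.005           # initial beam diameter [m] (assumption)
THETA = 0.08e-3      # full-angle beam divergence [rad]

def excess_irradiance(r, mu):
    """Irradiance at range r (with extinction coefficient mu) minus the MPE."""
    beam_diameter = A0 + THETA * r
    irradiance = 4.0 * PHI * math.exp(-mu * r) / (math.pi * beam_diameter ** 2)
    return irradiance - MPE

def hazard_distance(mu, r_max=200e3):
    """Range beyond which the irradiance stays below the MPE (bisection search)."""
    lo, hi = 1.0, r_max
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if excess_irradiance(mid, mu) > 0.0:
            lo = mid
        else:
            hi = mid
    return hi

# No attenuation, molecular only, molecular + low / high aerosol (abstract's coefficients)
for mu in (0.0, 5e-6, 5e-6 + 0.9e-6, 5e-6 + 3e-6):
    print(f"mu = {mu:.1e} 1/m  ->  hazard distance ~ {hazard_distance(mu) / 1000:.1f} km")
```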
{"title":"Lidar measurement as support to the ocular hazard distance calculation using atmospheric attenuation","authors":"K. Gustafsson, R. Persson, F. Gustafsson, Folke Berglund, Julia Hedborg, Jonas Malmquist","doi":"10.1117/12.2194259","DOIUrl":"https://doi.org/10.1117/12.2194259","url":null,"abstract":"The reduction of the laser hazard distance range using atmospheric attenuation has been tested with series of lidar measurements accomplished at the Vidsel Test Range, Vidsel, Sweden. The objective was to find situations with low level of aerosol backscatter during this campaign, with the implications of low extinction coefficient, since the lowest atmospheric attenuation gives the highest ocular hazards. The work included building a ground based backscatter lidar, performing a series of measurements and analyzing the results. The measurements were performed during the period June to November, 2014. The results of lidar measurements showed at several occasions’ very low atmospheric attenuation as a function of height to an altitude of at least 10 km. The lowest limit of aerosol backscatter coefficient possible to measure with this instrument is less than 0.3•10-7 m-1 sr-1. Assuming an aerosol lidar ratio between 30 – 100 sr this leads to an aerosol extinction coefficient of about 0.9 - 3•10-6 m-1. Using a designator laser as an example with wavelength 1064 nm, power 0.180 W, pulse length 15 ns, PRF 11.5 Hz, exposure time of 10 sec and beam divergence of 0.08 mrad, it will have a NOHD of 48 km. With the measured aerosol attenuation and by assuming a molecule extinction coefficient to be 5•10-6 m-1 (calculated using MODTRAN (Ontar Corp.) assuming no aerosol) the laser hazard distance will be reduced with 51 - 58 %, depending on the lidar ratio assumption. The conclusion from the work is; reducing of the laser hazard distance using atmospheric attenuation within the NOHD calculations is possible but should be combined with measurements of the attenuation.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125165140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development of camera technology in recent years has made high-speed imaging a reliable method for vibration and dynamic measurements. The passive recovery of vibration information from high-speed video recordings has been reported in several recent papers. A highly developed technique, involving decomposition of the input video into spatial sub-frames to compute local motion signals, allowed accurate sound reconstruction. A simpler technique based on image matching for vibration measurement was also reported to be efficient in extracting audio information from a silent high-speed video. In this paper we investigate and discuss the sensitivity and limitations of the high-speed imaging technique for vibration detection in comparison to the well-established Doppler vibrometry technique. Experiments on the extension of the high-speed imaging method to longer-range applications are presented.
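As an illustration of the image-matching idea (a generic sketch with synthetic data, not the authors' implementation), the code below recovers a vibration signal from a simulated high-speed image sequence by estimating the sub-pixel shift of each frame against a reference frame with cross-correlation and a parabolic peak fit, then reads the dominant frequency from its spectrum.

```python
import numpy as np

FPS = 2000                      # assumed high-speed camera frame rate [Hz]
F_VIB = 180.0                   # assumed vibration frequency of the object [Hz]
N_FRAMES = 1024

x = np.arange(256, dtype=float)
template = np.exp(-0.5 * ((x - 128) / 6.0) ** 2)     # bright feature on the object

def make_frame(shift):
    """Synthetic 1-D image row of the object displaced by 'shift' pixels."""
    return np.interp(x - shift, x, template) + 0.01 * rng.normal(size=x.size)

def estimate_shift(frame, reference):
    """Sub-pixel shift between frame and reference via cross-correlation
    with a three-point parabolic peak interpolation."""
    c = np.correlate(frame - frame.mean(), reference - reference.mean(), mode="full")
    k = np.argmax(c)
    num = c[k - 1] - c[k + 1]
    den = c[k - 1] - 2.0 * c[k] + c[k + 1]
    return (k + 0.5 * num / den) - (len(reference) - 1)

rng = np.random.default_rng(2)
t = np.arange(N_FRAMES) / FPS
true_shift = 0.4 * np.sin(2 * np.pi * F_VIB * t)      # sub-pixel vibration amplitude
reference = make_frame(0.0)
recovered = np.array([estimate_shift(make_frame(s), reference) for s in true_shift])

spectrum = np.abs(np.fft.rfft(recovered - recovered.mean()))
freqs = np.fft.rfftfreq(N_FRAMES, d=1.0 / FPS)
print("dominant recovered frequency:", freqs[np.argmax(spectrum)], "Hz")
```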
{"title":"Comparison of high speed imaging technique to laser vibrometry for detection of vibration information from objects","authors":"G. Paunescu, P. Lutzmann, B. Göhler, D. Wegner","doi":"10.1117/12.2194753","DOIUrl":"https://doi.org/10.1117/12.2194753","url":null,"abstract":"The development of camera technology in recent years has made high speed imaging a reliable method in vibration and dynamic measurements. The passive recovery of vibration information from high speed video recordings was reported in several recent papers. A highly developed technique, involving decomposition of the input video into spatial subframes to compute local motion signals, allowed an accurate sound reconstruction. A simpler technique based on image matching for vibration measurement was also reported as efficient in extracting audio information from a silent high speed video. In this paper we investigate and discuss the sensitivity and the limitations of the high speed imaging technique for vibration detection in comparison to the well-established Doppler vibrometry technique. Experiments on the extension of the high speed imaging method to longer range applications are presented.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123094004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
H. M. Tulldahl, Fredrik Bissmarck, H. Larsson, C. Grönwall, G. Tolt
A UAV (unmanned aerial vehicle) with an integrated lidar can be an efficient system for the collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental prerequisite for the detection and recognition of objects in a single-flight dataset, as well as for change detection using two or more data collections over the same scene. The work presented here has two purposes: first, to relate the point cloud accuracy to data processing parameters, and second, to examine the influence of the UAV platform parameters on accuracy. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor are based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and lightweight MEMS-based (microelectromechanical systems) INS equipment combined with a dynamic calibration process can achieve significantly improved accuracy compared to processing based solely on INS data.
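The accuracy metric "local surface smoothness on planar surfaces" is commonly computed as the RMS of point-to-plane residuals after a least-squares plane fit to a local neighbourhood. The sketch below shows that computation on a synthetic lidar patch; it is an assumed illustration, not necessarily the authors' exact definition.

```python
import numpy as np

def plane_rms_residual(points):
    """RMS distance of 3D points to their least-squares plane (fitted via SVD)."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    # The plane normal is the right singular vector with the smallest singular value
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    residuals = centred @ normal
    return float(np.sqrt(np.mean(residuals ** 2)))

# Toy patch: a 1 m x 1 m tilted planar surface sampled with 2 cm range noise
rng = np.random.default_rng(3)
xy = rng.uniform(0.0, 1.0, size=(500, 2))
z = 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + rng.normal(0.0, 0.02, size=500)
patch = np.column_stack([xy, z])

print(f"surface smoothness (RMS residual): {plane_rms_residual(patch) * 100:.2f} cm")
```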
{"title":"Accuracy evaluation of 3D lidar data from small UAV","authors":"H. M. Tulldahl, Fredrik Bissmarck, H. Larsson, C. Grönwall, G. Tolt","doi":"10.1117/12.2194508","DOIUrl":"https://doi.org/10.1117/12.2194508","url":null,"abstract":"A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor on a small UAV. High geometric accuracy in the produced point cloud is a fundamental qualification for detection and recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over the same scene. Our work presented here has two purposes: first to relate the point cloud accuracy to data processing parameters and second, to examine the influence on accuracy from the UAV platform parameters. In our work, the accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch, roll, yaw) measured by GPS/INS. Our results show that low-cost and light-weight MEMS based (microelectromechanical systems) INS equipment with a dynamic calibration process can obtain significantly improved accuracy compared to processing based solely on INS data.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133995632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Monnin, Gwenaél Schmitt, C. Fischer, Martin Laurenzis, F. Christnacher
Global navigation satellite systems (GNSS) are widely used for the localization and navigation of unmanned and remotely operated vehicles (ROVs). In contrast to ground or aerial vehicles, autonomous underwater vehicles (AUVs) cannot employ GNSS without a communication link to the water surface, since satellite signals cannot be received underwater. However, underwater autonomous navigation is still possible using self-localization methods, which determine the relative location of an AUV with respect to a reference location using inertial measurement units (IMUs), depth sensors, and sometimes even radar or sonar imaging. As an alternative or complementary solution to common underwater dead-reckoning techniques, we present the first results of a feasibility study of an active-imaging-based localization method which uses a range-gated active-imaging system and can yield radiometric and odometric information even in turbid water.
{"title":"Active-imaging-based underwater navigation","authors":"D. Monnin, Gwenaél Schmitt, C. Fischer, Martin Laurenzis, F. Christnacher","doi":"10.1117/12.2199912","DOIUrl":"https://doi.org/10.1117/12.2199912","url":null,"abstract":"Global navigation satellite systems (GNSS) are widely used for the localization and the navigation of unmanned and remotely operated vehicles (ROV). In contrast to ground or aerial vehicles, GNSS cannot be employed for autonomous underwater vehicles (AUV) without the use of a communication link to the water surface, since satellite signals cannot be received underwater. However, underwater autonomous navigation is still possible using self-localization methods which determines the relative location of an AUV with respect to a reference location using inertial measurement units (IMU), depth sensors and even sometimes radar or sonar imaging. As an alternative or a complementary solution to common underwater reckoning techniques, we present the first results of a feasibility study of an active-imaging-based localization method which uses a range-gated active-imaging system and can yield radiometric and odometric information even in turbid water.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115999102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Armande Pola Fossi, Y. Ferrec, R. Domel, C. Coudrain, N. Guerineau, N. Roux, Oscar D'almeida, Marc Bousquet, E. Kling, H. Sauer
Recent developments in unmanned aerial vehicles have increased the demand for ever more compact optical systems. In order to meet this demand, several infrared systems are being developed at ONERA, such as spectrometers, imaging devices, and multispectral and hyperspectral imaging systems. In the field of compact infrared hyperspectral imaging devices, ONERA and Sagem Défense et Sécurité have collaborated to develop a prototype called SIBI, which stands for "Spectro-Imageur Birefringent Infrarouge". It is a static Fourier-transform imaging spectrometer that operates in the mid-wavelength infrared spectral range and uses a birefringent lateral shearing interferometer. Up to now, birefringent interferometers have rarely been used for hyperspectral imaging in the mid-infrared because of the lack of crystal manufacturers, in contrast to the visible spectral domain, where the production of uniaxial crystals such as calcite is mastered for various optical applications. In the following, we present the design and realization of SIBI as well as the first experimental results.
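To illustrate the principle of a static Fourier-transform spectrometer in general (a generic sketch, not the SIBI design), the code below builds an interferogram over an optical path difference (OPD) axis, as a lateral shearing interferometer would produce spatially, and recovers the input spectrum with an FFT. The OPD range and line positions are assumed values.

```python
import numpy as np

# OPD axis of the static interferometer (assumed values, in cm)
N = 512
opd_max = 0.05                              # maximum optical path difference [cm]
opd = np.linspace(0.0, opd_max, N)

# Assumed input spectrum: two emission lines in the MWIR, in wavenumbers [cm^-1]
lines_cm1 = [2300.0, 2800.0]                # roughly 4.35 um and 3.57 um
weights = [1.0, 0.6]

# Interferogram: each spectral component contributes 1 + cos(2*pi*sigma*OPD)
interferogram = sum(w * (1.0 + np.cos(2.0 * np.pi * s * opd))
                    for w, s in zip(weights, lines_cm1))

# Spectrum recovery: FFT of the mean-subtracted interferogram
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
sigma = np.fft.rfftfreq(N, d=opd[1] - opd[0])   # wavenumber axis [cm^-1]

for peak in np.argsort(spectrum)[-2:]:
    print(f"recovered line near {sigma[peak]:.0f} cm^-1")
```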
{"title":"SIBI: A compact hyperspectral camera in the mid-infrared","authors":"Armande Pola Fossi, Y. Ferrec, R. Domel, C. Coudrain, N. Guerineau, N. Roux, Oscar D'almeida, Marc Bousquet, E. Kling, H. Sauer","doi":"10.1117/12.2195241","DOIUrl":"https://doi.org/10.1117/12.2195241","url":null,"abstract":"Recent developments in unmanned aerial vehicles have increased the demand for more and more compact optical systems. In order to bring solutions to this demand, several infrared systems are being developed at ONERA such as spectrometers, imaging devices, multispectral and hyperspectral imaging systems. In the field of compact infrared hyperspectral imaging devices, ONERA and Sagem Défense et Sécurité have collaborated to develop a prototype called SIBI, which stands for \"Spectro-Imageur Birefringent Infrarouge\". It is a static Fourier transform imaging spectrometer which operates in the mid-wavelength infrared spectral range and uses a birefringent lateral shearing interferometer. Up to now, birefringent interferometers have not been often used for hyperspectral imaging in the mid-infrared because of the lack of crystal manufacturers, contrary to the visible spectral domain where the production of uniaxial crystals like calcite are mastered for various optical applications. In the following, we will present the design and the realization of SIBI as well as the first experimental results.","PeriodicalId":348143,"journal":{"name":"SPIE Security + Defence","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128261522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}