E. Semenishchev, A. Zelensky, A. Alepko, M. Zhdanova, V. Voronin, Y. Ilyukhin
The article proposes a fusion technique and an algorithm for combining images recorded in the IR and visible spectra, addressing the problem of product handling by robotic complexes in dust and fog. Primary data processing is based on multi-criteria processing with joint data analysis and cross-adjustment of the filtering coefficient for the different data types. Base points are found by reducing the range of clusters (image simplification) and by locating transition boundaries from the slope of the intensity function in local areas. To evaluate effectiveness, pairs of test images are used, obtained by sensors with resolutions of 1024×768 (8-bit color, visible range) and 640×480 (8-bit color, IR). Images of simple shapes are used as the analyzed objects.
{"title":"Development of a fusion technique and an algorithm for merging images recorded in the IR and visible spectrum in dust and fog","authors":"E. Semenishchev, A. Zelensky, A. Alepko, M. Zhdanova, V. Voronin, Y. Ilyukhin","doi":"10.1117/12.2641155","DOIUrl":"https://doi.org/10.1117/12.2641155","url":null,"abstract":"The article proposes a fusion technique and an algorithm for combining images recorded in the IR and visible spectrum in relation to the problem of processing products by robotic complexes in dust and fog. Primary data processing is based on the use of a multi-criteria processing with complex data analysis and cross-change of the filtration coefficient for different types of data. The search for base points is based on the application of the technique of reducing the range of clusters (image simplification) and searching for transition boundaries using the approach of determining the slope of the function in local areas. As test data used to evaluate the effectiveness, pairs of test images obtained by sensors with a resolution of 1024x768 (8 bit, color image, visible range) and 640x480 (8 bit, color, IR image) are used. Images of simple shapes are used as analyzed objects.","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"57 5","pages":"122710O - 122710O-9"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72495997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Smith, B. Ndagano, G. Redonnet-Brown, A. Weaver, A. Astill, H. White, C. Gawith, L. McKnight
We report a nonlinear optical upconversion 3D imaging system for infrared radiation enabled by zinc-indiffused MgO:PPLN waveguides. While raster-scanning a scene with an 1800 nm pulsed-laser source, we record time-of-flight information, thus probing the 3D structure of various objects in the scene of interest. Through upconversion, the 3D information is transferred from 1800 nm to 795 nm, a wavelength accessible to single-photon avalanche diodes (SPADs).
{"title":"Single-photon infrared waveguide-based upconversion imaging","authors":"R. Smith, B. Ndagano, G. Redonnet-Brown, A. Weaver, A. Astill, H. White, C. Gawith, L. McKnight","doi":"10.1117/12.2636260","DOIUrl":"https://doi.org/10.1117/12.2636260","url":null,"abstract":"We report a nonlinear optical upconversion 3D imaging system for infrared radiation enabled by zinc indiffused MgO:PPLN waveguides. While raster-scanning a scene with an 1800 nm pulsed-laser source, we record time-of-flight information, thus probing the 3D structure of various objects in the scene of interest. Through upconversion, the 3D information is transferred from 1800 nm to 795 nm, a wavelength accessible to single-photon avalanche diode (SPAD).","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"47 1","pages":"1227103 - 1227103-5"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80856740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The threat from unmanned aerial vehicles (UAVs) has been well documented in recent conflicts, making it increasingly important to investigate means of countering it. One potential means is a laser. A laser may support other sensors such as radar or IR to detect, recognise, and track the UAV, and it can dazzle or destroy the UAV's optical sensors. A laser can also be used to sense atmospheric attenuation and turbulence along slant paths, which are critical to the performance of a high-power laser weapon intended to destroy the UAV. This paper investigates how the atmosphere and the beam jitter caused by tracking and platform pointing errors affect the performance of the laser, whether used as a sensor, a countermeasure, or a weapon.
{"title":"Beam tracking and atmospheric influence on laser performance in defeating UAV:s","authors":"O. Steinvall","doi":"10.1117/12.2634422","DOIUrl":"https://doi.org/10.1117/12.2634422","url":null,"abstract":"The threat of unmanned aerial vehicles (UAV:s) is well documented during recent conflicts. It has therefore been more important to investigate different means for countering this threat. One of the potential means is to use a laser. The laser may be used as a support sensor to others like radar or IR to detect end recognise and track the UAV and it can dazzle and destroy its optical sensors. A laser can also be used to sense the atmospheric attenuation and turbulence in slant paths, which are critical to the performance of a high power laser weapon aimed to destroy the UAV. This paper will investigate how the atmosphere and beam jitter due to tracking and platform pointing errors will affect the performance of the laser either used as a sensor, countermeasure or as a weapon.","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"79 1","pages":"122720C - 122720C-17"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81311863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Jargus, Michal Kostelansky, Michael Fridrich, M. Fajkus, J. Nedoma
This article describes research aimed at an optimized solution for measuring compressive force by detecting the intensity of the optical power coupled into an optical fiber. In the experimental part of the research, a 3D-printed assembly was used: its outer case was made of FLEXFILL 98A material, its inner part was formed by a three-part PETG layer, and the middle sensory part was interchangeable. This model was used to test deformation elements of different shapes in the variable part in order to find suitable configurations of the deformation plate. A standard 50/125 μm graded-index multimode optical fiber was placed in the sensory part. The results of this research can be expected to support the design of sensors based on detecting changes in optical power intensity.
{"title":"Measuring the pressure force by detecting the change in optical power intensity","authors":"J. Jargus, Michal Kostelansky, Michael Fridrich, M. Fajkus, J. Nedoma","doi":"10.1117/12.2636226","DOIUrl":"https://doi.org/10.1117/12.2636226","url":null,"abstract":"This article describes the research work in search of an optimized solution for the measurement of compressive force using the detection of the intensity of the optical power coupled into the optical fiber. In the experimental part of the research a product realized by 3D printing was used the outer case of which was made of FLEXFILL 98A material and the inner part was formed by a three-part PETG layer while the middle sensory part was changeable. This model was used to test different shapes of deformation elements in the variable part to find suitable configurations of the deformation plate. A standard 50/125 μm multimode graded index optical fiber was placed in the sensory part. It can be assumed that the results of this research can be used for the design of sensors based on the detection of changes in optical power intensity","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"23 1","pages":"122720M - 122720M-7"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87464802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Frommholz, F. Kuijper, D. Bulatov, Desmond Cheung
This paper discusses a rapid workflow for the automated generation of geospecific terrain databases for military simulation environments. Starting from photogrammetric data products of an oblique aerial camera, the process comprises deterministic terrain extraction from digital surface models and semantic building reconstruction from 3D point clouds. Further, an efficient supervised technique using little training data is applied to recover land classes from the true-orthophoto of the scene, and visual artifacts from parked vehicles to be separately modeled are suppressed through inpainting based on generative adversarial networks. As a proof of concept for the proposed pipeline, a dataset of the Altmark/Schnoeggersburg training area in Germany was prepared and transformed into a ready-to-use environment for the commercial Virtual Battlespace Simulator (VBS). The obtained result was compared with another automatically derived database and with a semi-manually crafted scene in terms of visual accuracy, functionality, and the time effort required.
{"title":"Geospecific terrain databases for military simulation environments","authors":"D. Frommholz, F. Kuijper, D. Bulatov, Desmond Cheung","doi":"10.1117/12.2636138","DOIUrl":"https://doi.org/10.1117/12.2636138","url":null,"abstract":"This paper discusses a rapid workflow for the automated generation of geospecific terrain databases for military simulation environments. Starting from photogrammetric data products of an oblique aerial camera, the process comprises deterministic terrain extraction from digital surface models and semantic building reconstruction from 3D point clouds. Further, an efficient supervised technique using little training data is applied to recover land classes from the true-orthophoto of the scene, and visual artifacts from parked vehicles to be separately modeled are suppressed through inpainting based on generative adversarial networks. As a proof-of-concept for the proposed pipeline, a dataset of the Altmark/Schnoeggersburg training area in Germany was prepared and transformed into a ready-to-use environment for the commercial Virtual Battlespace Simulator (VBS). The obtained result got compared to another automatedly derived database and a semi-manually crafted scene regarding visual accuracy, functionality and necessary time effort.","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"17 1","pages":"1227207 - 1227207-14"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89777407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Matthias Mischung, Jendrik Schmidt, E. Peters, Marco W. Berger, M. Anders, Maurice Stephan
A portable short-wave infrared (SWIR) sensor system was developed, aiming at vision enhancement through fog and smoke to support emergency forces such as fire fighters or the police. In these environments, wavelengths in the SWIR regime have superior transmission and less backscatter compared to the visible spectral range received by the human eye or RGB cameras. On the emitter side, the active SWIR sensor system features a light-emitting diode (LED) array consisting of 55 SWIR LEDs with a total optical power output of 280 mW, emitting at wavelengths around λ = 1568 nm with a full width at half maximum (FWHM) of 137 nm, a band that is more eye-safe than the visible range. The receiver consists of an InGaAs camera equipped with a lens whose field of view slightly exceeds the angle of radiation of the LED array. For convenient use as a portable device, a display for live video from the SWIR camera is embedded within the system. The dimensions of the system are 270 × 190 × 110 mm and the overall weight is 3470 g. The superior potential of SWIR over visible wavelengths in scattering environments is first estimated theoretically using Mie scattering theory; the SWIR sensor system is then introduced, including a detailed description of its assembly and a characterisation of the illuminator in terms of optical power, spatial emission profile, heat dissipation, and spectral emission. The performance of the system is then estimated by design calculations based on the lidar equation. First field experiments using a fog machine show improved performance compared to a camera in the visible range (VIS): less backscatter from the illumination and lower extinction produce a clearer image.
{"title":"Development and characterisation of a portable, active short-wave infrared camera system for vision enhancement through smoke and fog","authors":"Matthias Mischung, Jendrik Schmidt, E. Peters, Marco W. Berger, M. Anders, Maurice Stephan","doi":"10.1117/12.2636216","DOIUrl":"https://doi.org/10.1117/12.2636216","url":null,"abstract":"A portable short-wave infrared (SWIR) sensor system was developed aiming at vision enhancement through fog and smoke for support of emergency forces such as fire fighters or the police. In these environments, wavelengths in the SWIR regime have superior transmission and less backscatter in comparison to the visible spectral range received by the human eye or RGB cameras. On the emitter side, the active SWIR sensor system features a light-emitting diode (LED) array consisting of 55 SWIR-LEDs with a total optical power output of 280 mW emitting at wavelengths around λ = 1568 nm with a Full Width at Half Maximum (FWHM) of 137 nm, which are more eye-safe compared to the visible range. The receiver consists of an InGaAs camera equipped with a lens with a field of view slightly exceeding the angle of radiation of the LED array. For convenient use as a portable device, a display for live video from the SWIR camera is embedded within the system. The dimensions of the system are 270 x 190 x 110 mm and the overall weight is 3470 g. The superior potential of SWIR in contrast to visible wavelengths in scattering environments is first theoretically estimated using the Mie scattering theory, followed by an introduction of the SWIR sensor system including a detailed description of its assembly and a characterisation of the illuminator regarding optical power, spatial emission profile, heat dissipation, and spectral emission. The performance of the system is then estimated by design calculations based on the lidar equation. First field experiments using a fog machine show an improved performance compared to a camera in the visible range (VIS), as a result of less backscattering from illumination, lower extinction and thus producing a clearer image.","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"58 1","pages":"122710M - 122710M-13"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79077360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Key issues in the design of any passively modelocked laser system are determining the parameter ranges within which it can operate stably, determining its noise performance, and then optimizing the design to achieve the best possible output pulse parameters. Here, we review work within our research group to use computational methods based on dynamical systems theory to accurately and efficiently address these issues. These methods are typically many orders of magnitude faster than widely used evolutionary methods. We then review our application of these methods to the analysis and design of passively modelocked fiber lasers that use a semiconductor saturable absorbing mirror (SESAM). These lasers are subject to a wake instability in which modes can grow in the wake of the modelocked pulse and destroy it. Even when stable, the wake modes can lead to undesirable radio-frequency sidebands. We demonstrate that the dynamical methods have an advantage of more than three orders of magnitude over standard evolutionary methods for this laser system. After identifying the stable operating range, we take advantage of the computational speed of these methods to optimize the laser performance over a three-dimensional parameter space.
{"title":"Stability and noise in frequency combs: efficient and accurate computation using dynamical methods","authors":"C. Menyuk, Shaokang Wang","doi":"10.1117/12.2644162","DOIUrl":"https://doi.org/10.1117/12.2644162","url":null,"abstract":"Key issues in the design of any passively modelocked laser system are determining the parameter ranges within which it can operate stably, determining its noise performance, and then optimizing the design to achieve the best possible output pulse parameters. Here, we review work within our research group to use computational methods based on dynamical systems theory to accurately and efficiently address these issues. These methods are typically many orders of magnitude faster than widely used evolutionary methods. We then review our application of these methods to the analysis and design of passively modelocked fiber lasers that use a semiconductor saturable absorbing mirror (SESAM). These lasers are subject to a wake instability in which modes can grow in the wake of the modelocked pulse and destroy it. Even when stable, the wake modes can lead to undesirable radio-frequency sidebands. We demonstrate that the dynamical methods have an advantage of more than three orders of magnitude over standard evolutionary methods for this laser system. After identifying the stable operating range, we take advantage of the computational speed of these methods to optimize the laser performance over a three-dimensional parameter space.","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"52 1","pages":"1227304 - 1227304-8"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85169361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marceau Bamond, N. Hueber, G. Strub, S. Changey, Jonathan Weber
A new and challenging vision paradigm has recently gained prominence and proven its capabilities compared to traditional imagers: event-based vision. Instead of capturing the whole sensor area at a fixed frame rate as a frame-based camera does, spike sensors, or event cameras, report the location and sign of brightness changes in the image. Although the currently available spatial resolutions of these event cameras are quite low (640×480 pixels), the real interest lies in their very high temporal resolution (on the order of microseconds) and very high dynamic range (up to 140 dB). Thanks to the event-driven approach, their power consumption and processing requirements are quite low compared to conventional cameras, a characteristic of particular interest for embedded applications, especially situational awareness. The main goal of this project is to detect and track activity zones from the event stream and to notify the standard imager where the activity takes place. Automated situational awareness is thus enabled by analysing the sparse information of event-based vision and waking up the standard camera at the right moments and at the right positions, i.e. the detected regions of interest. We demonstrate the capacity of this bimodal vision approach to take advantage of both cameras: the spatial resolution of the standard camera and the temporal resolution of the event-based camera. An opto-mechanical demonstrator has been designed to integrate both cameras into a compact vision system with embedded software processing, opening the prospect of autonomous remote sensing. Several field experiments demonstrate the performance and value of such an autonomous vision system, with emphasis on the ability to detect and track fast-moving objects such as fast drones. Results and performance on these realistic scenarios are evaluated and discussed.
{"title":"Application of an event-sensor to situational awareness","authors":"Marceau Bamond, N. Hueber, G. Strub, S. Changey, Jonathan Weber","doi":"10.1117/12.2638545","DOIUrl":"https://doi.org/10.1117/12.2638545","url":null,"abstract":"A new challenging vision system has recently gained prominence and proven its capacities compared to traditional imagers: the paradigm of event-based vision. Instead of capturing the whole sensor area in a fixed frame rate as in a frame-based camera, Spike sensors or event cameras report the location and the sign of brightness changes in the image. Despite the fact that the currently available spatial resolutions are quite low (640x480 pixels) for these event cameras, the real interest is in their very high temporal resolution (in the range of microseconds) and very high dynamic range (up to 140 dB). Thanks to the event-driven approach, their power consumption and processing power requirements are quite low compared to conventional cameras. This latter characteristic is of particular interest for embedded applications especially for situational awareness. The main goal for this project is to detect and to track activity zones from the spike event stream, and to notify the standard imager where the activity takes place. By doing so, automated situational awareness is enabled by analyzing the sparse information of event-based vision, and waking up the standard camera at the right moments, and at the right positions i.e. the detected regions of interest. We demonstrate the capacity of this bimodal vision approach to take advantage of both cameras: spatial resolution for standard camera and temporal resolution for event-based cameras. An opto-mechanical demonstrator has been designed to integrate both cameras in a compact visual system, with embedded Software processing, enabling the perspective of autonomous remote sensing. Several field experiments demonstrate the performances and the interest of such an autonomous vision system. The emphasis is placed on the ability to detect and track fast moving objects, such as fast drones. Results and performances are evaluated and discussed on these realistic scenarios.","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"8 1","pages":"122720G - 122720G-6"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81290207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Q. Shao, Noel Richards, R. Messina, Neal Winter, Joanne B. Culpepper
Evaluating the visible signature of operational platforms has long been a focus of military research. Human observations of targets in the field are perceived to be the most accurate way to assess a target's visible signature, although the results are limited to the conditions observed in the field. Synthetic imagery could potentially enhance visible signature analysis by providing a wider range of target images in differing environmental conditions than is feasible to collect in field trials. For synthetic images to be effective, the virtual scenes need to replicate reality as closely as possible. Simulating a maritime environment presents many challenges in precisely replicating the lighting effects of oceanic scenes in a virtual setting. Using the colour checker charts widely used in photography, we present a detailed methodology for creating a virtual colour checker chart in synthetic scenes developed in the commercially available Autodesk Maya software. Our initial investigation shows a significant difference between the theoretical sRGB values calculated under the CIE D65 illuminant and those simulated in Autodesk Maya under the same illuminant. These differences are somewhat expected and must be accounted for if synthetic scenes are to be useful in visible signature analysis. The sRGB values measured from a digital photograph taken at a field trial also differed, but this is expected due to possible variations in lighting conditions between the synthetic and real images, the camera's sRGB output, and the camera's spatial resolution, which is currently not modelled in the synthetic scenes.
{"title":"Validating colour representation in synthetic scenes using a virtual colour checker chart","authors":"Q. Shao, Noel Richards, R. Messina, Neal Winter, Joanne B. Culpepper","doi":"10.1117/12.2638442","DOIUrl":"https://doi.org/10.1117/12.2638442","url":null,"abstract":"Evaluating the visible signature of operational platforms has long been a focus of military research. Human observations of targets in the field are perceived to be the most accurate way to assess a target’s visible signature, although the results are limited to conditions observed in the field. Synthetic imagery could potentially enhance visible signature analysis by providing a wider range of target images in differing environmental conditions than is feasible to collect in field trials. In order for synthetic images to be effective, the virtual scenes need to replicate reality as much as possible. Simulating a maritime environment presents many difficult challenges in trying to replicate the lighting effects of the oceanic scenes precisely in a virtual setting. Using the colour checker charts widely used in photography we present a detailed methodology on how to create a virtual colour checker chart in synthetic scenes developed in the commercially available Autodesk Maya software. Our initial investigation shows a significant difference between the theoretical sRGB values calculated under the CIE D65 illuminant and those simulated in Autodesk Maya under the same illuminant. These differences are somewhat expected, and must be accounted for in order for synthetic scenes to be useful in visible signature analysis. The sRGB values measured from a digital photograph taken at a field trial also differed, but this is expected due to possible variations in lighting conditions between the synthetic and real images, the camera’s sRGB output and the spatial resolution of the camera which is currently not modelled in the synthetic scenes.","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"9 1","pages":"1227007 - 1227007-19"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86443522","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, ground-based image sequences of the sky are evaluated to analyse cloud coverage. These images are taken in the visible and infrared spectra. The main ambition is to determine the cloud coverage without additional measurements (such as temperature or precipitable water vapor); the coverage is deduced from camera images only. In the visible spectrum, methods from the literature are adapted to this application; for example, the ratio of the red and blue color channels is formed. In the infrared spectral range, a method is developed that distinguishes cloud-covered from cloudless image areas using the maximum and minimum values occurring in the image. The grey values are parameterised using statistical boundary values such that an unambiguous relation to temperature is possible and, consequently, the algorithm can make a statement about the degree of coverage. Cloud-coverage determination achieves higher accuracy and reliability in the infrared spectral range.
{"title":"Determination of the cloud coverage using ground based camera images in the visible and infrared spectral range","authors":"Jeanette Mostafa, T. Kociok, E. Sucher, K. Stein","doi":"10.1117/12.2636706","DOIUrl":"https://doi.org/10.1117/12.2636706","url":null,"abstract":"In this study, ground based image sequences of the sky will be evaluated to analyse the cloud coverage. These images are taken in the visual and infrared spectrum. The main ambition is to determine the cloud coverage without the knowledge of additional measurements (like temperature or precipitable water vapor). The determination of the cloud coverage is deduced from camera images only. In the visual spectrum, methods from literature are extended according to this application. For example, the ratio of the color channels red and blue is formed. In the infrared spectral range a method is developed that can distinguish the cloud-covered from the cloudless image areas by using the maximum and minimum occurring values in the image. The grey values are parameterised using statistical boundary values in such a way that a temperature relationship is unambiguously possible and consequently a statement about the degree of coverage can be made by the algorithm. The determination of the cloud coverage reaches a higher accuracy and reliability in the infrared spectral range.","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"1 1","pages":"122700A - 122700A-9"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72689041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}