
Security and Defence Quarterly: Latest Publications

Development of a fusion technique and an algorithm for merging images recorded in the IR and visible spectrum in dust and fog
Pub Date : 2022-11-02 DOI: 10.1117/12.2641155
E. Semenishchev, A. Zelensky, A. Alepko, M. Zhdanova, V. Voronin, Y. Ilyukhin
The article proposes a fusion technique and an algorithm for combining images recorded in the IR and visible spectrum, addressing the problem of product processing by robotic complexes in dust and fog. Primary data processing is based on multi-criteria processing with complex data analysis and cross-adjustment of the filtering coefficient for different data types. The search for base points relies on reducing the range of clusters (image simplification) and locating transition boundaries by estimating the slope of the intensity function in local areas. To evaluate effectiveness, pairs of test images are used, obtained by sensors with resolutions of 1024x768 (8-bit colour, visible range) and 640x480 (8-bit colour, IR). Images of simple shapes serve as the analysed objects.
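The abstract does not spell out the multi-criteria fusion rule. As a rough illustration only (not the authors' algorithm), the sketch below fuses two registered single-channel images by weighting each pixel toward the source with the stronger local gradient; the weighting scheme and all names are assumptions.

```python
import numpy as np

def fuse_vis_ir(vis: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Fuse two registered single-channel images by weighting each pixel
    toward the source with the stronger local gradient (a crude stand-in
    for a multi-criteria fusion rule)."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)
    gv, gi = grad_mag(vis), grad_mag(ir)
    w = gv / (gv + gi + 1e-9)          # weight toward the sharper source
    return w * vis + (1.0 - w) * ir    # convex combination per pixel

vis = np.random.rand(8, 8)
ir = np.random.rand(8, 8)
fused = fuse_vis_ir(vis, ir)
print(fused.shape)  # (8, 8)
```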
Citations: 0
Single-photon infrared waveguide-based upconversion imaging
Pub Date : 2022-11-02 DOI: 10.1117/12.2636260
R. Smith, B. Ndagano, G. Redonnet-Brown, A. Weaver, A. Astill, H. White, C. Gawith, L. McKnight
We report a nonlinear optical upconversion 3D imaging system for infrared radiation enabled by zinc-indiffused MgO:PPLN waveguides. While raster-scanning a scene with an 1800 nm pulsed-laser source, we record time-of-flight information, thus probing the 3D structure of various objects in the scene of interest. Through upconversion, the 3D information is transferred from 1800 nm to 795 nm, a wavelength accessible to a single-photon avalanche diode (SPAD).
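Time-of-flight imaging recovers range from the round-trip delay of each pulse. A minimal conversion, assuming only the standard relation d = c·Δt/2 (generic, not specific to this paper):

```python
# Time-of-flight to range: an illustrative conversion.
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_to_range(delta_t_s: float) -> float:
    """Round-trip time of flight -> one-way distance in metres."""
    return C * delta_t_s / 2.0

# A 10 ns round trip corresponds to ~1.5 m of range.
print(tof_to_range(10e-9))
```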
Citations: 0
Beam tracking and atmospheric influence on laser performance in defeating UAV:s
Pub Date : 2022-11-02 DOI: 10.1117/12.2634422
O. Steinvall
The threat of unmanned aerial vehicles (UAVs) has been well documented during recent conflicts. It has therefore become increasingly important to investigate different means of countering this threat. One potential means is a laser. The laser may be used as a supporting sensor to others, such as radar or IR, to detect, recognise, and track the UAV, and it can dazzle or destroy the UAV's optical sensors. A laser can also be used to sense atmospheric attenuation and turbulence along slant paths, which are critical to the performance of a high-power laser weapon intended to destroy the UAV. This paper investigates how the atmosphere and beam jitter due to tracking and platform pointing errors affect the performance of the laser, whether used as a sensor, a countermeasure, or a weapon.
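Atmospheric attenuation along the slant path can be illustrated with the Beer-Lambert law. A minimal sketch; the extinction coefficient, power, and range are invented values, and beam spread, jitter, and turbulence are ignored:

```python
import math

def slant_transmission(alpha_per_km: float, range_km: float) -> float:
    """One-way atmospheric transmission via the Beer-Lambert law."""
    return math.exp(-alpha_per_km * range_km)

def power_on_target(p_tx_w: float, alpha_per_km: float, range_km: float) -> float:
    """Laser power reaching the target, ignoring beam spread and jitter."""
    return p_tx_w * slant_transmission(alpha_per_km, range_km)

print(power_on_target(1000.0, 0.2, 2.0))  # ~670 W at 2 km with alpha = 0.2 /km
```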
Citations: 1
Measuring the pressure force by detecting the change in optical power intensity
Pub Date : 2022-11-02 DOI: 10.1117/12.2636226
J. Jargus, Michal Kostelansky, Michael Fridrich, M. Fajkus, J. Nedoma
This article describes research seeking an optimised solution for measuring compressive force by detecting the intensity of the optical power coupled into an optical fibre. In the experimental part of the research, a 3D-printed product was used, whose outer case was made of FLEXFILL 98A material and whose inner part was formed by a three-part PETG layer, while the middle sensory part was interchangeable. This model was used to test different shapes of deformation elements in the variable part to find suitable configurations of the deformation plate. A standard 50/125 μm multimode graded-index optical fibre was placed in the sensory part. The results of this research can be expected to inform the design of sensors based on detecting changes in optical power intensity.
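A sensor of this kind ultimately needs a calibration mapping measured optical power to applied force. A hedged sketch of a linear least-squares calibration; the data points are invented synthetic values, not the authors' measurements:

```python
import numpy as np

# Hypothetical calibration pairs: measured optical power (uW) vs applied force (N).
power_uw = np.array([100.0, 90.0, 80.0, 70.0, 60.0])
force_n  = np.array([0.0, 5.0, 10.0, 15.0, 20.0])

# Least-squares linear fit: force = a * power + b
a, b = np.polyfit(power_uw, force_n, 1)

def force_from_power(p_uw: float) -> float:
    """Estimate applied force from a measured optical power reading."""
    return a * p_uw + b

print(round(force_from_power(75.0), 2))  # 12.5 on this synthetic data
```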
Citations: 0
Geospecific terrain databases for military simulation environments
Pub Date : 2022-11-02 DOI: 10.1117/12.2636138
D. Frommholz, F. Kuijper, D. Bulatov, Desmond Cheung
This paper discusses a rapid workflow for the automated generation of geospecific terrain databases for military simulation environments. Starting from photogrammetric data products of an oblique aerial camera, the process comprises deterministic terrain extraction from digital surface models and semantic building reconstruction from 3D point clouds. Further, an efficient supervised technique using little training data is applied to recover land classes from the true-orthophoto of the scene, and visual artifacts from parked vehicles, which are to be modeled separately, are suppressed through inpainting based on generative adversarial networks. As a proof-of-concept for the proposed pipeline, a dataset of the Altmark/Schnoeggersburg training area in Germany was prepared and transformed into a ready-to-use environment for the commercial Virtual Battlespace Simulator (VBS). The obtained result was compared with another automatically derived database and a semi-manually crafted scene with regard to visual accuracy, functionality, and the required time effort.
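The land-class recovery step uses a supervised technique trained on little data. As a loose illustration only (not the paper's classifier), a nearest-centroid sketch over pixel colours, with invented class names and sample values:

```python
import numpy as np

# Tiny nearest-centroid land-class sketch: classify orthophoto pixels by
# distance to per-class mean colours learned from a handful of labelled samples.
train = {
    "grass": np.array([[60, 120, 50], [70, 130, 60]], float),
    "road":  np.array([[120, 120, 120], [100, 100, 105]], float),
}
centroids = {cls: samples.mean(axis=0) for cls, samples in train.items()}

def classify(pixel: np.ndarray) -> str:
    """Assign the class whose centroid is closest in RGB space."""
    return min(centroids, key=lambda c: np.linalg.norm(pixel - centroids[c]))

print(classify(np.array([65.0, 125.0, 55.0])))  # grass
```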
Citations: 0
Development and characterisation of a portable, active short-wave infrared camera system for vision enhancement through smoke and fog
Pub Date : 2022-11-02 DOI: 10.1117/12.2636216
Matthias Mischung, Jendrik Schmidt, E. Peters, Marco W. Berger, M. Anders, Maurice Stephan
A portable short-wave infrared (SWIR) sensor system was developed, aiming at vision enhancement through fog and smoke in support of emergency forces such as firefighters or the police. In these environments, wavelengths in the SWIR regime have superior transmission and less backscatter compared to the visible spectral range received by the human eye or RGB cameras. On the emitter side, the active SWIR sensor system features a light-emitting diode (LED) array consisting of 55 SWIR-LEDs with a total optical power output of 280 mW, emitting at wavelengths around λ = 1568 nm with a Full Width at Half Maximum (FWHM) of 137 nm, which is more eye-safe than the visible range. The receiver consists of an InGaAs camera equipped with a lens whose field of view slightly exceeds the angle of radiation of the LED array. For convenient use as a portable device, a display for live video from the SWIR camera is embedded within the system. The dimensions of the system are 270 x 190 x 110 mm and the overall weight is 3470 g. The superior potential of SWIR in contrast to visible wavelengths in scattering environments is first theoretically estimated using Mie scattering theory, followed by an introduction of the SWIR sensor system, including a detailed description of its assembly and a characterisation of the illuminator regarding optical power, spatial emission profile, heat dissipation, and spectral emission. The performance of the system is then estimated by design calculations based on the lidar equation. First field experiments using a fog machine show improved performance compared to a camera in the visible range (VIS), as a result of less backscattering from the illumination and lower extinction, thus producing a clearer image.
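The abstract mentions design calculations based on the lidar equation. A minimal sketch for a diffuse (Lambertian) target; the reflectivity, aperture, range, and extinction values are invented, and the geometry is simplified:

```python
import math

def received_power(p_tx_w: float, rho: float, aperture_m2: float,
                   range_m: float, alpha_per_m: float) -> float:
    """Simple lidar-equation estimate for a diffuse (Lambertian) target:
    P_r = P_t * rho * A / (pi * R^2) * exp(-2 * alpha * R)."""
    return (p_tx_w * rho * aperture_m2 / (math.pi * range_m ** 2)
            * math.exp(-2.0 * alpha_per_m * range_m))

# 280 mW illuminator, 20% reflective target at 50 m, 5 cm^2 receive aperture,
# extinction 1e-3 /m: received power is in the nanowatt range.
p = received_power(0.28, 0.2, 5e-4, 50.0, 1e-3)
print(p)
```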
Citations: 2
Stability and noise in frequency combs: efficient and accurate computation using dynamical methods
Pub Date : 2022-11-02 DOI: 10.1117/12.2644162
C. Menyuk, Shaokang Wang
Key issues in the design of any passively modelocked laser system are determining the parameter ranges within which it can operate stably, determining its noise performance, and then optimizing the design to achieve the best possible output pulse parameters. Here, we review work within our research group that uses computational methods based on dynamical systems theory to address these issues accurately and efficiently. These methods are typically many orders of magnitude faster than widely used evolutionary methods. We then review our application of these methods to the analysis and design of passively modelocked fiber lasers that use a semiconductor saturable absorbing mirror (SESAM). These lasers are subject to a wake instability in which modes can grow in the wake of the modelocked pulse and destroy it. Even when stable, the wake modes can lead to undesirable radio-frequency sidebands. We demonstrate that the dynamical methods have an advantage of more than three orders of magnitude over standard evolutionary methods for this laser system. After identifying the stable operating range, we take advantage of the computational speed of these methods to optimize the laser performance over a three-dimensional parameter space.
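The core idea of such dynamical methods, judging stability from the spectrum of a linearisation rather than from long time-domain evolutions, can be illustrated on a toy system; the matrix below is an invented example (a damped oscillator), not a laser model:

```python
import numpy as np

# Dynamical-methods sketch: decide the stability of a fixed point from the
# eigenvalues of the Jacobian instead of evolving the system for a long time.
A = np.array([[-0.1,  1.0],
              [-1.0, -0.1]])   # Jacobian of a damped oscillator at its fixed point

eigvals = np.linalg.eigvals(A)
stable = bool(np.all(eigvals.real < 0))   # stable iff all eigenvalues lie in
print(stable)  # True                     # the left half of the complex plane
```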
Citations: 0
Application of an event-sensor to situational awareness
Pub Date : 2022-11-02 DOI: 10.1117/12.2638545
Marceau Bamond, N. Hueber, G. Strub, S. Changey, Jonathan Weber
A new and challenging vision system has recently gained prominence and proven its capabilities compared to traditional imagers: the paradigm of event-based vision. Instead of capturing the whole sensor area at a fixed frame rate as in a frame-based camera, spike sensors or event cameras report the location and sign of brightness changes in the image. Although the currently available spatial resolutions of these event cameras are quite low (640x480 pixels), the real interest lies in their very high temporal resolution (in the range of microseconds) and very high dynamic range (up to 140 dB). Thanks to the event-driven approach, their power consumption and processing requirements are quite low compared to conventional cameras. The latter characteristic is of particular interest for embedded applications, especially situational awareness. The main goal of this project is to detect and track activity zones from the spike event stream and to notify the standard imager where the activity takes place. In this way, automated situational awareness is enabled by analysing the sparse information of event-based vision and waking up the standard camera at the right moments and at the right positions, i.e. the detected regions of interest. We demonstrate the capacity of this bimodal vision approach to take advantage of both cameras: spatial resolution for the standard camera and temporal resolution for the event-based camera. An opto-mechanical demonstrator has been designed to integrate both cameras in a compact visual system with embedded software processing, enabling the prospect of autonomous remote sensing. Several field experiments demonstrate the performance and interest of such an autonomous vision system. The emphasis is placed on the ability to detect and track fast-moving objects, such as fast drones. Results and performances are evaluated and discussed on these realistic scenarios.
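Activity-zone detection of this kind can be sketched as accumulating events into a tile map and flagging tiles above a threshold, which could then wake the frame camera. All names, sizes, and thresholds below are illustrative, not the authors' implementation:

```python
import numpy as np

# Accumulate (x, y) brightness-change events into a heatmap, then flag
# active tiles that would trigger the standard camera.
W, H, TILE = 64, 48, 16
events = [(5, 6), (6, 6), (5, 7), (50, 40)]  # illustrative event stream

heat = np.zeros((H, W))
for x, y in events:
    heat[y, x] += 1

# Sum events per TILE x TILE block and keep tiles above an activity threshold.
tiles = heat.reshape(H // TILE, TILE, W // TILE, TILE).sum(axis=(1, 3))
active = np.argwhere(tiles >= 3)   # (tile_row, tile_col) of regions of interest
print(active.tolist())  # [[0, 0]]
```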
Citations: 1
Validating colour representation in synthetic scenes using a virtual colour checker chart
Pub Date : 2022-11-02 DOI: 10.1117/12.2638442
Q. Shao, Noel Richards, R. Messina, Neal Winter, Joanne B. Culpepper
Evaluating the visible signature of operational platforms has long been a focus of military research. Human observations of targets in the field are perceived to be the most accurate way to assess a target’s visible signature, although the results are limited to conditions observed in the field. Synthetic imagery could potentially enhance visible signature analysis by providing a wider range of target images in differing environmental conditions than is feasible to collect in field trials. In order for synthetic images to be effective, the virtual scenes need to replicate reality as much as possible. Simulating a maritime environment presents many difficult challenges in trying to replicate the lighting effects of the oceanic scenes precisely in a virtual setting. Using the colour checker charts widely used in photography we present a detailed methodology on how to create a virtual colour checker chart in synthetic scenes developed in the commercially available Autodesk Maya software. Our initial investigation shows a significant difference between the theoretical sRGB values calculated under the CIE D65 illuminant and those simulated in Autodesk Maya under the same illuminant. These differences are somewhat expected, and must be accounted for in order for synthetic scenes to be useful in visible signature analysis. The sRGB values measured from a digital photograph taken at a field trial also differed, but this is expected due to possible variations in lighting conditions between the synthetic and real images, the camera’s sRGB output and the spatial resolution of the camera which is currently not modelled in the synthetic scenes.
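Comparing rendered and reference chart patches reduces to a per-patch colour difference. A minimal sketch using Euclidean distance in sRGB; a rigorous comparison would use a ΔE metric in CIELAB, and the patch values below are illustrative, not measured data:

```python
import numpy as np

# Reference vs rendered sRGB values for two illustrative chart patches.
reference = np.array([[115, 82, 68], [194, 150, 130]], float)
rendered  = np.array([[110, 85, 70], [190, 148, 135]], float)

# Euclidean distance in sRGB per patch: a crude proxy for colour error.
per_patch = np.linalg.norm(reference - rendered, axis=1)
print(per_patch.round(2).tolist())  # [6.16, 6.71]
```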
Citations: 0
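The abstract above compares theoretical sRGB values computed under the CIE D65 illuminant with values simulated in Autodesk Maya. As a minimal sketch (not the authors' code) of how such theoretical values are typically obtained, the standard IEC 61966-2-1 conversion from CIE XYZ under a D65 white point to 8-bit sRGB is:

```python
def xyz_to_srgb(x, y, z):
    """Convert CIE XYZ (D65 white point, Y normalised to 1) to 8-bit sRGB.

    Uses the linear matrix and transfer curve from IEC 61966-2-1.
    """
    # XYZ -> linear RGB (sRGB primaries, D65 white)
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z

    def encode(c):
        # clip out-of-gamut values, then apply the sRGB gamma curve
        c = min(max(c, 0.0), 1.0)
        c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        return round(c * 255)

    return encode(r), encode(g), encode(b)


# The D65 white point (X, Y, Z) maps to pure white
print(xyz_to_srgb(0.9505, 1.0, 1.089))  # -> (255, 255, 255)
```

Differences between these theoretical values and a renderer's output can then be quantified patch by patch on the colour checker chart, e.g. as per-channel deviations.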
Determination of the cloud coverage using ground based camera images in the visible and infrared spectral range
Pub Date : 2022-11-02 DOI: 10.1117/12.2636706
Jeanette Mostafa, T. Kociok, E. Sucher, K. Stein
In this study, ground-based image sequences of the sky are evaluated to analyse the cloud coverage. These images are taken in the visual and infrared spectrum. The main aim is to determine the cloud coverage without additional measurements (such as temperature or precipitable water vapor); it is deduced from camera images alone. In the visual spectrum, methods from the literature are extended for this application; for example, the ratio of the red and blue colour channels is formed. In the infrared spectral range, a method is developed that distinguishes cloud-covered from cloudless image areas using the maximum and minimum values occurring in the image. The grey values are parameterised using statistical boundary values in such a way that a temperature relationship is unambiguous, and consequently the algorithm can make a statement about the degree of coverage. The determination of the cloud coverage reaches higher accuracy and reliability in the infrared spectral range.
{"title":"Determination of the cloud coverage using ground based camera images in the visible and infrared spectral range","authors":"Jeanette Mostafa, T. Kociok, E. Sucher, K. Stein","doi":"10.1117/12.2636706","DOIUrl":"https://doi.org/10.1117/12.2636706","url":null,"abstract":"In this study, ground based image sequences of the sky will be evaluated to analyse the cloud coverage. These images are taken in the visual and infrared spectrum. The main ambition is to determine the cloud coverage without the knowledge of additional measurements (like temperature or precipitable water vapor). The determination of the cloud coverage is deduced from camera images only. In the visual spectrum, methods from literature are extended according to this application. For example, the ratio of the color channels red and blue is formed. In the infrared spectral range a method is developed that can distinguish the cloud-covered from the cloudless image areas by using the maximum and minimum occurring values in the image. The grey values are parameterised using statistical boundary values in such a way that a temperature relationship is unambiguously possible and consequently a statement about the degree of coverage can be made by the algorithm. The determination of the cloud coverage reaches a higher accuracy and reliability in the infrared spectral range.","PeriodicalId":52940,"journal":{"name":"Security and Defence Quarterly","volume":"1 1","pages":"122700A - 122700A-9"},"PeriodicalIF":0.0,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72689041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
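The abstract above mentions forming the ratio of the red and blue channels to separate cloud from clear sky in the visible range. A minimal NumPy sketch of that idea follows; the threshold value 0.8 and the function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cloud_mask_rb(rgb, threshold=0.8):
    """Mark pixels as cloud where the red/blue ratio is high.

    Clear sky scatters blue strongly (low R/B), while clouds are
    nearly white, so their R/B ratio approaches 1.  The threshold
    is an illustrative value, not taken from the paper.
    """
    rgb = rgb.astype(np.float64)
    ratio = rgb[..., 0] / np.maximum(rgb[..., 2], 1.0)  # avoid division by zero
    return ratio > threshold

def cloud_cover_fraction(rgb, threshold=0.8):
    """Cloud coverage as the fraction of cloud-classified pixels."""
    return cloud_mask_rb(rgb, threshold).mean()

# toy 2x2 image: top row blue sky, bottom row white cloud
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0] = (60, 120, 255)   # sky:   R/B ~ 0.24
img[1] = (230, 230, 235)  # cloud: R/B ~ 0.98
print(cloud_cover_fraction(img))  # -> 0.5
```

The infrared variant described in the abstract replaces the colour ratio with a normalisation between the minimum and maximum grey values of the frame, but the coverage statistic is computed from the resulting mask in the same way.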