
Latest publications in Defense, Security, and Sensing

Correlation of partial frames in video matching
Pub Date: 2013-06-17 DOI: 10.1117/12.2016645
Boris Kovalerchuk, Sergei Kovalerchuk
Correlating and fusing video frames from distributed and moving sensors is an important area of video matching. It is especially difficult for frames containing objects at long distances that are visible only as single pixels, where algorithms cannot exploit the structure of each object. The proposed algorithm correlates partial frames containing such small objects using an algebraic structural approach that exploits structural relations between objects, including ratios of areas. The algorithm is fully affine invariant, covering any rotation, shift, and scaling.
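The property the abstract relies on can be checked directly: an affine map x ↦ Ax + b scales every area by |det A|, so ratios of areas between objects are unchanged. The Python sketch below is only an illustration of that invariance, not the authors' algorithm; the function names and point sets are hypothetical.

```python
import numpy as np

def polygon_area(pts):
    """Shoelace formula for the area of a 2D polygon given as an (N, 2) array."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def affine(pts, A, b):
    """Apply the affine map x -> A x + b to every point."""
    return pts @ A.T + b

rng = np.random.default_rng(0)
# Two small "objects" (triangles standing in for small image regions)
obj1 = rng.random((3, 2))
obj2 = rng.random((3, 2))

# Arbitrary affine transform: rotation, anisotropic scaling, shear and shift
A = np.array([[1.7, 0.4],
              [-0.3, 0.9]])
b = np.array([5.0, -2.0])

ratio_before = polygon_area(obj1) / polygon_area(obj2)
ratio_after = polygon_area(affine(obj1, A, b)) / polygon_area(affine(obj2, A, b))

# Both areas scale by |det A|, so their ratio is unchanged
assert np.isclose(ratio_before, ratio_after)
print(ratio_before, ratio_after)
```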
Citations: 2
Vehicle detection and orientation estimation using the Radon transform
Pub Date: 2013-06-14 DOI: 10.1117/12.2016407
R. Pelapur, F. Bunyak, K. Palaniappan, G. Seetharaman
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task given the density of cars and other vehicles and the complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation, followed by a Radon-transform-based profile variance peak analysis. The same algorithm was applied to both high-resolution satellite imagery and wide-area aerial imagery, and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects have an estimated error within ±1.0° of the ground truth.
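As a rough illustration of the Radon-transform profile variance idea, the sketch below computes a sinogram of a binary vehicle mask with scikit-image and picks the angle whose projection profile has the largest variance. This is a simplified reading of the paper's approach; the mask, angle grid and peak criterion are assumptions, and the reported angle is only defined up to skimage's sinogram angle convention.

```python
import numpy as np
from skimage.transform import radon

def orientation_from_mask(mask, angles=np.arange(0.0, 180.0, 1.0)):
    """Estimate the dominant axis of a binary vehicle mask via Radon profile variance.

    An elongated object produces a sharply peaked (high-variance) projection
    profile when projected along its long axis, so the arg-max of the per-angle
    profile variance indicates orientation (modulo skimage's angle convention).
    """
    sinogram = radon(mask.astype(float), theta=angles, circle=False)
    profile_variance = sinogram.var(axis=0)   # one variance value per angle
    return angles[np.argmax(profile_variance)]

# Toy example: an elongated 10x40 "vehicle" inside a 101x101 image
mask = np.zeros((101, 101))
mask[45:55, 30:70] = 1.0
print(orientation_from_mask(mask))
```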
Citations: 10
Guidance in feature extraction to resolve uncertainty
Pub Date: 2013-06-13 DOI: 10.1117/12.2016509
Boris Kovalerchuk, Michael Kovalerchuk, S. Streltsov, M. Best
Automated Feature Extraction (AFE) plays a critical role in image understanding. Imagery analysts often extract features better than AFE algorithms do because they use additional information. The extraction and processing of this information can be more complex than the original AFE task, which leads to the "complexity trap". This can happen, for example, when shadows cast by buildings guide the extraction of buildings and roads. This work proposes an AFE algorithm that extracts roads and trails by using GMTI/GPS tracking information and older, inaccurate maps of roads and trails as AFE guides.
Citations: 2
Feature selection for appearance-based vehicle tracking in geospatial video
Pub Date: 2013-06-13 DOI: 10.1117/12.2015672
M. Poostchi, F. Bunyak, K. Palaniappan, G. Seetharaman
Current video tracking systems often employ a rich set of intensity, edge, texture, shape and object-level features combined with descriptors for appearance modeling. This approach increases tracker robustness but is computationally expensive for real-time applications, and localization accuracy can be adversely affected by including distracting features in the feature fusion or object classification processes. This paper explores offline feature subset selection using a filter-based evaluation approach for video tracking to reduce the dimensionality of the feature space and to discover relevant, representative lower-dimensional subspaces for online tracking. We compare the performance of the exhaustive FOCUS algorithm to the sequential heuristic SFFS, SFS and RELIEF feature selection methods. Experiments show that using offline feature selection reduces computational complexity, improves feature fusion and is expected to translate into better online tracking performance. Overall, SFFS and SFS perform very well, close to the optimum determined by FOCUS, but RELIEF does not work as well for feature selection in the context of appearance-based object tracking.
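The abstract compares FOCUS, SFFS, SFS and RELIEF; as a minimal sketch of just one of these, sequential forward selection driven by a filter criterion, the code below greedily adds the feature that most improves a Fisher-style scatter ratio. It is not the paper's implementation; the scoring function and toy data are assumptions chosen for illustration.

```python
import numpy as np

def fisher_score(X, y):
    """Multivariate filter criterion: trace(S_w^{-1} S_b) on the selected features."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    return np.trace(np.linalg.pinv(Sw) @ Sb)

def sequential_forward_selection(X, y, k, score=fisher_score):
    """Greedy SFS: repeatedly add the feature that most improves the filter score."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        best_f, best_s = None, -np.inf
        for f in remaining:
            s = score(X[:, selected + [f]], y)
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Toy usage: 6 features, only features 0 and 3 carry class information
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 6))
X[:, 0] += 2.0 * y
X[:, 3] -= 1.5 * y
print(sequential_forward_selection(X, y, k=2))
```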
Citations: 15
Geometric exploration of virtual planes in a fusion-based 3D data registration framework
Pub Date: 2013-06-13 DOI: 10.1117/12.2015933
H. Aliakbarpour, K. Palaniappan, J. Dias
Three-dimensional reconstruction of objects, particularly buildings, within an aerial scene is still a challenging computer vision task and an important component of Geospatial Information Systems. In this paper we present a new homography-based approach for 3D urban reconstruction based on virtual planes. A hybrid sensor consisting of three elements, a camera, an inertial (orientation) sensor (IS) and a GPS (Global Positioning System) location device, mounted on an airborne platform can be used for wide-area scene reconstruction. The heterogeneous data coming from these three sensors are fused using projective transformations, or homographies. Due to inaccuracies in the sensor observations, the estimated homography transforms between inertial and virtual 3D planes have measurement uncertainties. The modeling of such uncertainties for the virtual plane reconstruction method is described in this paper. A preliminary set of results using simulation data is used to demonstrate the feasibility of the proposed approach.
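A minimal sketch of the kind of pose-driven homography such a camera/IS/GPS pipeline builds on: with intrinsics K and a pose (R, t) assembled from the inertial orientation and the GPS position, points on the world plane Z = 0 map into the image through the textbook homography H = K [r1 r2 t]. The numbers, Euler-angle convention and helper names below are illustrative assumptions, not the paper's virtual-plane formulation or its uncertainty model.

```python
import numpy as np

def rotation_from_euler(roll, pitch, yaw):
    """World-to-camera rotation built from IS-style roll/pitch/yaw angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def ground_plane_homography(K, R, t):
    """Homography mapping points (X, Y, 1) on the world plane Z = 0 into the image.

    With projection x ~ K [R | t] X, restricting X to the plane Z = 0 gives
    x ~ K [r1 r2 t] (X, Y, 1)^T, the textbook plane-induced homography.
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return H / H[2, 2]

# Example with assumed (illustrative) intrinsics and an IS/GPS-derived pose
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])
R = rotation_from_euler(0.02, -0.35, 1.10)   # orientation from the inertial sensor
C = np.array([150.0, -80.0, 900.0])          # camera centre from GPS (world frame)
t = -R @ C                                   # translation of the world-to-camera transform
H = ground_plane_homography(K, R, t)

ground_point = np.array([200.0, -50.0, 1.0]) # (X, Y, 1) on the plane Z = 0
pixel = H @ ground_point
print(pixel[:2] / pixel[2])
```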
Citations: 8
KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery
Pub Date: 2013-06-13 DOI: 10.1117/12.2018162
Joshua Fraser, Anoop Haridas, G. Seetharaman, R. Rao, K. Palaniappan
KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multiscale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide-format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform-, operating-system- and (graphics) hardware-independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very-large-format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools to the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.
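To make the tiled-pyramid caching idea concrete, here is a minimal sketch of an LRU cache keyed by a spatiotemporal tile address (frame, pyramid level, column, row). It only illustrates the data-structure concept; KOLAM's actual dual-cache architecture is more elaborate, and the class and field names are hypothetical.

```python
from collections import OrderedDict
from dataclasses import dataclass

@dataclass(frozen=True)
class TileKey:
    """Address of one tile in a spatiotemporal pyramid: frame, pyramid level, column, row."""
    frame: int
    level: int
    col: int
    row: int

class TileCache:
    """Minimal LRU cache for decoded tiles, keyed by spatiotemporal pyramid address."""

    def __init__(self, capacity, loader):
        self.capacity = capacity          # max tiles kept in memory
        self.loader = loader              # callable TileKey -> tile payload (e.g. ndarray)
        self._store = OrderedDict()

    def get(self, key):
        if key in self._store:
            self._store.move_to_end(key)  # mark as most recently used
            return self._store[key]
        tile = self.loader(key)           # cache miss: decode/fetch the tile
        self._store[key] = tile
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)   # evict least recently used tile
        return tile

# Toy usage: the "loader" just reports which tile would be decoded from storage
cache = TileCache(capacity=256, loader=lambda k: f"tile@{k}")
print(cache.get(TileKey(frame=0, level=3, col=10, row=7)))
print(cache.get(TileKey(frame=0, level=3, col=10, row=7)))  # served from cache
```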
Citations: 7
Multisource information fusion for enhanced simultaneous tracking and recognition
Pub Date: 2013-06-12 DOI: 10.1117/12.2016616
B. Kahler
A layered sensing approach helps to mitigate sensor, target, and environmental operating conditions affecting target tracking and recognition performance. Radar sensors provide standoff sensing capabilities over a range of weather conditions; however, operating conditions such as obscuration can hinder radar target tracking. By using other sensing modalities such as electro-optical (EO) building cameras or eyewitness reports, continuous target tracking and recognition may be achieved when radar data is unavailable. Information fusion is necessary to associate independent multisource data and ensure that accurate target track and identification are maintained. Exploiting the unique information obtained from multiple sensor modalities together with non-sensor sources will enhance vehicle track and recognition performance and increase confidence in the reported results by providing confirmation of target tracks when multiple sources have overlapping coverage of the vehicle of interest. The author uses a fusion performance model in conjunction with a tracking and recognition performance model to assess which combination of information sources produces the greatest gains in both urban and rural environments for a typically sized ground vehicle.
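The abstract does not spell out a fusion rule, so purely as an illustration of multisource identity fusion, the sketch below combines independent per-source class likelihoods (e.g., radar and EO recognition outputs) in a naive-Bayes fashion. The numbers and the independence assumption are illustrative only and are not taken from the paper's fusion performance model.

```python
import numpy as np

def fuse_likelihoods(prior, likelihoods):
    """Naive-Bayes fusion of independent per-source class likelihoods.

    prior       : (n_classes,) prior probability over vehicle types
    likelihoods : list of (n_classes,) arrays, one per source (radar, EO, report, ...)
    returns the fused posterior over vehicle types
    """
    log_post = np.log(prior)
    for lik in likelihoods:
        log_post += np.log(lik)
    log_post -= log_post.max()            # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Illustrative numbers only: 3 candidate vehicle types
prior = np.array([1 / 3, 1 / 3, 1 / 3])
radar = np.array([0.5, 0.3, 0.2])         # radar recognition is ambiguous
eo = np.array([0.7, 0.2, 0.1])            # EO camera favours type 0
print(fuse_likelihoods(prior, [radar, eo]))
```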
Citations: 2
Aural stealth of portable HOT infrared imager
Pub Date: 2013-06-11 DOI: 10.1117/12.2017125
A. Veprik
Further reduction of the size, weight and power consumption of High Operating Temperature (HOT) infrared (IR) Integrated Detector-Dewar-Cooler Assemblies (IDDCA) eventually calls for the development of high-speed cryocoolers. In the case of an integral rotary design, the immediate penalty is more intensive slapping of the compression and expansion pistons along with intensification of the micro-collisions inherent in the operation of crank-slide linkages featuring ball bearings. The result is the generation of impulsive exported vibration, whose spectrum features the driving frequency along with numerous multiples covering the entire range of audible frequencies. In a typical infrared imager design, the lightweight metal enclosure accommodates a directly mounted IDDCA and an optical train, thus serving as an optical bench and heat sink. This usually results in the excitation of structural resonances in the enclosure and, therefore, in excessive noise generation compromising aural stealth. The author presents a comprehensive approach to the design of aurally undetectable infrared imagers in which the IDDCA is mounted on the imager enclosure through a silent pad. Special attention is paid to resolving line-of-sight stability and heat-sinking issues. The demonstration imager, relying on a Ricor K562S based IDDCA, meets the most stringent requirement of a 10-meter aural non-detectability distance (per MIL-STD-1474D, Level II) even during the boost cooldown phase of operation.
Citations: 3
80×80 VPD PbSe: the first uncooled MWIR FPA monolithically integrated with a Si-CMOS ROIC
Pub Date: 2013-06-11 DOI: 10.1117/12.2015290
G. Vergara, R. Linares Herrero, R. Gutíerrez Álvarez, C. Fernández-Montojo, L. J. Gomez, V. Villamayor, A. Baldasano Ramírez, M. Montojo
In this work a breakthrough in the field of low-cost uncooled infrared detectors is presented: an 80x80 MWIR VPD PbSe detector monolithically integrated with the corresponding Si-CMOS circuitry. Fast response and high frame rates have, to date, been unavailable in the domain of low-cost uncooled IR imagers. The new detector presented fills this gap. The device is capable of providing full-frame MWIR images at rates as high as 2 kHz in truly uncooled operation, which makes it an excellent solution for applications where short events and fast transients dominate the system dynamics to be studied or detected. VPD PbSe technology is unique because it combines all the main requirements of a volume-ready technology: (1) simple processing, (2) good reproducibility and homogeneity, (3) processing compatible with large-area substrates, (4) Si-CMOS compatibility (no hybridization needed), and (5) low-cost optics and packaging. The new FPA represents a milestone on the road towards affordable uncooled MWIR imagers and demonstrates that VPD PbSe technology has reached industrial maturity. The device presented in this work was processed on 8-inch Si wafers with excellent results in terms of manufacturing yield and repeatability. The technology opens the MWIR band to the SWaP concept.
Citations: 12
Miniaturized day/night sight in Soldato Futuro program
Pub Date: 2013-06-11 DOI: 10.1117/12.2015814
A. Landini, A. Cocchi, R. Bardazzi, Mauro Sardelli, Stefano Puntri
The market for 5.56 mm assault rifle sights is dominated mainly by three types of systems: the TWS (Thermal Weapon Sight), the Pocket Scope with Weapon Mount, and the Clip-on. The latter are designed primarily for special forces and sniper use, while the TWS design is driven mainly by DRI (Detection, Recognition, Identification) requirements. The Pocket Scope design is focused on respecting SWaP (Size, Weight and Power dissipation) requirements. Compared to TWS systems, the last two years have seen significant technological growth of Pocket Scope/Weapon Mount solutions, concentrated on reducing overall dimensions. The trend for assault rifles is the use of small-size/light-weight (SWaP) IR sights, suitable mainly for close-combat operations but also usable as pocket scopes, handheld or helmet mounted. The latest developments made by Selex ES S.p.A. respond precisely to this trend, through a miniaturized Day/Night sight embedding state-of-the-art sensors and using standard protocols (USB 2.0, Bluetooth 4.0) for interfacing with PDAs, wearable computers, etc., while maintaining the "shoot around the corner" capability. Inside the miniaturized Day/Night sight architecture, a wireless link using Bluetooth technology has been implemented to transmit the video stream of the rifle sight to a helmet-mounted display. The video of the rifle sight is transmitted only to the eyepiece of the soldier shouldering the rifle.
Citations: 0