Correlating and fusing video frames from distributed and moving sensors is an important area of video matching. It is especially difficult for frames containing objects at long distances that are visible as single pixels, where algorithms cannot exploit the structure of each object. The proposed algorithm correlates partial frames containing such small objects using an algebraic structural approach that exploits structural relations between objects, including ratios of areas. The algorithm is fully affine invariant, covering any rotation, shift, and scaling.
{"title":"Correlation of partial frames in video matching","authors":"Boris Kovalerchuk, Sergei Kovalerchuk","doi":"10.1117/12.2016645","DOIUrl":"https://doi.org/10.1117/12.2016645","url":null,"abstract":"Correlating and fusing video frames from distributed and moving sensors is important area of video matching. It is especially difficult for frames with objects at long distances that are visible as single pixels where the algorithms cannot exploit the structure of each object. The proposed algorithm correlates partial frames with such small objects using the algebraic structural approach that exploits structural relations between objects including ratios of areas. The algorithm is fully affine invariant, which includes any rotation, shift, and scaling.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134090698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Pelapur, F. Bunyak, K. Palaniappan, G. Seetharaman
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task, given the density of cars and other vehicles and the complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation, followed by a Radon transform based profile variance peak analysis approach. The same algorithm was applied to both high-resolution satellite imagery and wide-area aerial imagery, and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects have an estimated error within ±1.0° of the ground truth.
{"title":"Vehicle detection and orientation estimation using the radon transform","authors":"R. Pelapur, F. Bunyak, K. Palaniappan, G. Seetharaman","doi":"10.1117/12.2016407","DOIUrl":"https://doi.org/10.1117/12.2016407","url":null,"abstract":"Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task given the density of cars and other vehicles and complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using a template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation, followed by a Radon transform based profile variance peak analysis approach. The same algorithm was applied to both high resolution satellite imagery and wide area aerial imagery and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15◦ of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects have an estimated error within ±1.0° of the ground truth.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122756978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Boris Kovalerchuk, Michael Kovalerchuk, S. Streltsov, M. Best
Automated Feature Extraction (AFE) plays a critical role in image understanding. Imagery analysts often extract features better than AFE algorithms do, because analysts use additional information. The extraction and processing of this information can be more complex than the original AFE task, which leads to the "complexity trap". This can happen, for example, when the shadows of buildings guide the extraction of buildings and roads. This work proposes an AFE algorithm that extracts roads and trails by using GMTI/GPS tracking information and older, inaccurate maps of roads and trails as AFE guides.
{"title":"Guidance in feature extraction to resolve uncertainty","authors":"Boris Kovalerchuk, Michael Kovalerchuk, S. Streltsov, M. Best","doi":"10.1117/12.2016509","DOIUrl":"https://doi.org/10.1117/12.2016509","url":null,"abstract":"Automated Feature Extraction (AFE) plays a critical role in image understanding. Often the imagery analysts extract features better than AFE algorithms do, because analysts use additional information. The extraction and processing of this information can be more complex than the original AFE task, and that leads to the “complexity trap”. This can happen when the shadow from the buildings guides the extraction of buildings and roads. This work proposes an AFE algorithm to extract roads and trails by using the GMTI/GPS tracking information and older inaccurate maps of roads and trails as AFE guides.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117132471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Poostchi, F. Bunyak, K. Palaniappan, G. Seetharaman
Current video tracking systems often employ a rich set of intensity, edge, texture, shape and object-level features combined with descriptors for appearance modeling. This approach increases tracker robustness but is computationally expensive for real-time applications, and localization accuracy can be adversely affected by including distracting features in the feature fusion or object classification processes. This paper explores offline feature subset selection using a filter-based evaluation approach for video tracking to reduce the dimensionality of the feature space and to discover relevant, representative lower-dimensional subspaces for online tracking. We compare the performance of the exhaustive FOCUS algorithm to the sequential heuristic SFFS, SFS and RELIEF feature selection methods. Experiments show that using offline feature selection reduces computational complexity, improves feature fusion and is expected to translate into better online tracking performance. Overall, SFFS and SFS perform very well, close to the optimum determined by FOCUS, but RELIEF does not work as well for feature selection in the context of appearance-based object tracking.
{"title":"Feature selection for appearance-based vehicle tracking in geospatial video","authors":"M. Poostchi, F. Bunyak, K. Palaniappan, G. Seetharaman","doi":"10.1117/12.2015672","DOIUrl":"https://doi.org/10.1117/12.2015672","url":null,"abstract":"Current video tracking systems often employ a rich set of intensity, edge, texture, shape and object level features combined with descriptors for appearance modeling. This approach increases tracker robustness but is compu- tationally expensive for realtime applications and localization accuracy can be adversely affected by including distracting features in the feature fusion or object classification processes. This paper explores offline feature subset selection using a filter-based evaluation approach for video tracking to reduce the dimensionality of the feature space and to discover relevant representative lower dimensional subspaces for online tracking. We com- pare the performance of the exhaustive FOCUS algorithm to the sequential heuristic SFFS, SFS and RELIEF feature selection methods. Experiments show that using offline feature selection reduces computational complex- ity, improves feature fusion and is expected to translate into better online tracking performance. Overall SFFS and SFS perform very well, close to the optimum determined by FOCUS, but RELIEF does not work as well for feature selection in the context of appearance-based object tracking.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132926519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three-dimensional reconstruction of objects, particularly buildings, within an aerial scene is still a challenging computer vision task and an important component of Geospatial Information Systems. In this paper we present a new homography-based approach for 3D urban reconstruction based on virtual planes. A hybrid sensor consisting of three sensor elements, a camera, an inertial (orientation) sensor (IS) and a GPS (Global Positioning System) location device, mounted on an airborne platform can be used for wide-area scene reconstruction. The heterogeneous data coming from these three sensors are fused using projective transformations, or homographies. Due to inaccuracies in the sensor observations, the estimated homography transforms between inertial and virtual 3D planes have measurement uncertainties. The modeling of such uncertainties for the virtual-plane reconstruction method is described in this paper. A preliminary set of results using simulated data demonstrates the feasibility of the proposed approach.
{"title":"Geometric exploration of virtual planes in a fusion-based 3D data registration framework","authors":"H. Aliakbarpour, K. Palaniappan, J. Dias","doi":"10.1117/12.2015933","DOIUrl":"https://doi.org/10.1117/12.2015933","url":null,"abstract":"Three-dimensional reconstruction of objects, particularly buildings, within an aerial scene is still a challenging computer vision task and an importance component of Geospatial Information Systems. In this paper we present a new homography-based approach for 3D urban reconstruction based on virtual planes. A hybrid sensor consisting of three sensor elements including camera, inertial (orientation) sensor (IS) and GPS (Global Positioning System) location device mounted on an airborne platform can be used for wide area scene reconstruction. The heterogeneous data coming from each of these three sensors are fused using projective transformations or homographies. Due to inaccuracies in the sensor observations, the estimated homography transforms between inertial and virtual 3D planes have measurement uncertainties. The modeling of such uncertainties for the virtual plane reconstruction method is described in this paper. A preliminary set of results using simulation data is used to demonstrate the feasibility of the proposed approach.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125013966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joshua Fraser, Anoop Haridas, G. Seetharaman, R. Rao, K. Palaniappan
KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multiscale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high-throughput wide-format video, also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to potentially petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper-jump spatial operations, it supports a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles of very large format video frames using a temporal cache of tiled-pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging), the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results, and apply geospatial visual analytic tools to the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms is available to assist the analyst and increase human effectiveness.
{"title":"KOLAM: a cross-platform architecture for scalable visualization and tracking in wide-area imagery","authors":"Joshua Fraser, Anoop Haridas, G. Seetharaman, R. Rao, K. Palaniappan","doi":"10.1117/12.2018162","DOIUrl":"https://doi.org/10.1117/12.2018162","url":null,"abstract":"KOLAM is an open, cross-platform, interoperable, scalable and extensible framework supporting a novel multi- scale spatiotemporal dual-cache data structure for big data visualization and visual analytics. This paper focuses on the use of KOLAM for target tracking in high-resolution, high throughput wide format video also known as wide-area motion imagery (WAMI). It was originally developed for the interactive visualization of extremely large geospatial imagery of high spatial and spectral resolution. KOLAM is platform, operating system and (graphics) hardware independent, and supports embedded datasets scalable from hundreds of gigabytes to feasibly petabytes in size on clusters, workstations, desktops and mobile computers. In addition to rapid roam, zoom and hyper- jump spatial operations, a large number of simultaneously viewable embedded pyramid layers (also referred to as multiscale or sparse imagery), interactive colormap and histogram enhancement, spherical projection and terrain maps are supported. The KOLAM software architecture was extended to support airborne wide-area motion imagery by organizing spatiotemporal tiles in very large format video frames using a temporal cache of tiled pyramid cached data structures. The current version supports WAMI animation, fast intelligent inspection, trajectory visualization and target tracking (digital tagging); the latter by interfacing with external automatic tracking software. One of the critical needs for working with WAMI is a supervised tracking and visualization tool that allows analysts to digitally tag multiple targets, quickly review and correct tracking results and apply geospatial visual analytic tools on the generated trajectories. One-click manual tracking combined with multiple automated tracking algorithms are available to assist the analyst and increase human effectiveness.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116222680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A layered sensing approach helps to mitigate the sensor, target, and environmental operating conditions affecting target tracking and recognition performance. Radar sensors provide standoff sensing capabilities over a range of weather conditions; however, operating conditions such as obscuration can hinder radar target tracking. By using other sensing modalities such as electro-optical (EO) building cameras or eyewitness reports, continuous target tracking and recognition may be achieved when radar data is unavailable. Information fusion is necessary to associate independent multisource data and ensure that accurate target tracks and identifications are maintained. Exploiting the unique information obtained from multiple sensor modalities together with non-sensor sources will enhance vehicle track and recognition performance and increase confidence in the reported results by providing confirmation of target tracks when multiple sources have overlapping coverage of the vehicle of interest. The author uses a fusion performance model in conjunction with a tracking and recognition performance model to assess which combination of information sources produces the greatest gains, for both urban and rural environments, for a typically sized ground vehicle.
{"title":"Multisource information fusion for enhanced simultaneous tracking and recognition","authors":"B. Kahler","doi":"10.1117/12.2016616","DOIUrl":"https://doi.org/10.1117/12.2016616","url":null,"abstract":"A layered sensing approach helps to mitigate sensor, target, and environmental operating conditions affecting target tracking and recognition performance. Radar sensors provide standoff sensing capabilities over a range of weather conditions; however, operating conditions such as obscuration can hinder radar target tracking. By using other sensing modalities such as electro-optical (EO) building cameras or eye witness reports, continuous target tracking and recognition may be achieved when radar data is unavailable. Information fusion is necessary to associate independent multisource data to ensure accurate target track and identification is maintained. Exploiting the unique information obtained from multiple sensor modalities with non-sensor sources will enhance vehicle track and recognition performance and increase confidence in the reported results by providing confirmation of target tracks when multiple sources have overlapping coverage of the vehicle of interest. The author uses a fusion performance model in conjunction with a tracking and recognition performance model to assess which combination of information sources produce the greatest gains for both urban and rural environments for a typical sized ground vehicle.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121774943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Further reduction of the size, weight and power consumption of High Operating Temperature (HOT) infrared (IR) Integrated Detector-Dewar-Cooler Assemblies (IDDCA) eventually calls for the development of high-speed cryocoolers. In the case of an integral rotary design, the immediate penalty is more intensive slapping of the compression and expansion pistons, along with intensification of the micro-collisions inherent in the operation of crank-slide linkages featuring ball bearings. The result is an impulsive vibration export whose spectrum features the driving frequency along with numerous multiples covering the entire range of audible frequencies. In a typical infrared imager design, a lightweight metal enclosure accommodates a directly mounted IDDCA and an optical train, thus serving as an optical bench and heat sink. This usually results in the excitation of structural resonances in the enclosure and, therefore, in excessive noise generation compromising aural stealth. The author presents a comprehensive approach to the design of aurally undetectable infrared imagers in which the IDDCA is mounted on the imager enclosure through a silent pad. Special attention is paid to resolving line-of-sight stability and heat-sinking issues. The demonstration imager, relying on a Ricor K562S based IDDCA, meets the most stringent requirement of a 10-meter aural non-detectability distance (per MIL-STD-1474D, Level II) even during the boost cooldown phase of operation.
{"title":"Aural stealth of portable HOT infrared imager","authors":"A. Veprik","doi":"10.1117/12.2017125","DOIUrl":"https://doi.org/10.1117/12.2017125","url":null,"abstract":"Further reduction of size, weight and power consumption of the High Operating Temperature (HOT) infrared (IR) Integrated Detector-Dewar-Cooler Assemblies (IDDCA) eventually calls for development of high-speed cryocoolers. In case of integral rotary design, the immediate penalty is the more intensive slapping of compression and expansion pistons along with intensification of micro collisions inherent for the operation of crank-slide linkages featuring ball bearings. Resulting from this is the generation of impulsive vibration export, the spectrum of which features the driving frequency along with numerous multiples covering the entire range of audible frequencies. In a typical design of an infrared imager, the metal light-weight enclosure accommodates a directly mounted IDDCA and an optical train, thus serving as an optical bench and heat sink. This usually results in excitation of structural resonances in the said enclosure and, therefore, in excessive noise generation compromising the aural stealth. The author presents the complex approach to a design of aural undetectable infrared imagers in which the IDDCA is mounted upon the imager enclosure through a silent pad. Special attention is paid to resolving the line of sight stability and heat sinking issues. The demonstration imager relying on Ricor K562S based IDDCA meets the most stringent requirement to 10 meters aural non-detectability distance (per MIL-STD 1474D, Level II) even during boost cooldown phase of operation.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114645685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Vergara, R. Linares Herrero, R. Gutíerrez Álvarez, C. Fernández-Montojo, L. J. Gomez, V. Villamayor, A. Baldasano Ramírez, M. Montojo
In this work a breakthrough in the field of low-cost uncooled infrared detectors is presented: an 80×80 MWIR VPD PbSe detector monolithically integrated with the corresponding Si-CMOS circuitry. Fast response and high frame rates are, to date, unavailable in the domain of low-cost uncooled IR imagers; the new detector presented here fills that gap. The device can provide MWIR images at rates as high as 2 kHz, full frame, in truly uncooled operation, which makes it an excellent solution for applications where short events and fast transients dominate the system dynamics to be studied or detected. VPD PbSe technology is unique because it combines all the main requirements demanded of a volume-ready technology: (1) simple processing; (2) good reproducibility and homogeneity; (3) processing compatible with large-area substrates; (4) Si-CMOS compatibility (no hybridization needed); (5) low-cost optics and packaging. The new FPA represents a milestone on the road towards affordable uncooled MWIR imagers and demonstrates that VPD PbSe technology has reached industrial maturity. The device presented in this work was processed on 8-inch Si wafers with excellent results in terms of manufacturing yield and repeatability. The technology opens the MWIR band to the SWaP concept.
{"title":"80×80 VPD PbSe: the first uncooled MWIR FPA monolithically integrated with a Si-CMOS ROIC","authors":"G. Vergara, R. Linares Herrero, R. Gutíerrez Álvarez, C. Fernández-Montojo, L. J. Gomez, V. Villamayor, A. Baldasano Ramírez, M. Montojo","doi":"10.1117/12.2015290","DOIUrl":"https://doi.org/10.1117/12.2015290","url":null,"abstract":"In this work a breakthrough in the field of low cost uncooled infrared detectors is presented: an 80x80 MWIR VPD PbSe detector monolithically integrated with the corresponding Si-CMOS circuitry. Fast speed of response and high frame rates are, until date, non existing performances in the domain of low cost uncooled IR imagers. The new detector presented fills the gap. The device is capable to provide MWIR images to rates as high as 2 KHz, full frame, in real uncooled operation which converts it in an excellent solution for being used in applications where short events and fast transients dominate the system dynamics to be studied or detected. VPD PbSe technology is unique because combines all the main requirements demanded for a volume ready technology: 1. Simple processing 2. Good reproducibility and homogeneity 3. Processing compatible with big areas substrates 4. Si-CMOS compatible (no hybridation needed) 5. Low cost optics and packagin The new FPA represents a milestone in the road towards affordable uncooled MWIR imagers and it is the demonstration of VPD PbSe technology has reached industrial maturity. The device presented in the work was processed on 8-inch Si wafers with excellent results in terms of manufacturing yield and repeatability. The technology opens the MWIR band to SWaP concept.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124909335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Landini, A. Cocchi, R. Bardazzi, Mauro Sardelli, Stefano Puntri
The market for sights for 5.56 mm assault rifles is dominated by mainly three types of systems: the TWS (Thermal Weapon Sight), the Pocket Scope with Weapon Mount, and the Clip-on. The latter are designed primarily for special forces and sniper use, while the TWS design is driven mainly by DRI (Detection, Recognition, Identification) requirements. The Pocket Scope design is focused on meeting SWaP (Size, Weight and Power dissipation) requirements. Compared to TWS systems, over the last two years there has been significant technological growth in Pocket Scope/Weapon Mount solutions, concentrated on the compression of overall dimensions. The trend for assault rifles is the use of small-size/light-weight (SWaP) IR sights, suitable mainly for close combat operations but also for extraordinary use as pocket scopes, handheld or helmet mounted. The latest developments made by Selex ES S.p.A. respond precisely to the above-mentioned trend, through a miniaturized Day/Night sight embedding state-of-the-art sensors and using standard protocols (USB 2.0, Bluetooth 4.0) for interfacing with PDAs, wearable computers, etc., while maintaining the "shoot around the corner" capability. Indeed, inside the miniaturized Day/Night sight architecture, a wireless link using Bluetooth technology has been implemented to transmit the video stream of the rifle sight to a helmet-mounted display. The video of the rifle sight is transmitted only to the eyepiece of the soldier shouldering the rifle.
{"title":"Miniaturized day/night sight in Soldato Futuro program","authors":"A. Landini, A. Cocchi, R. Bardazzi, Mauro Sardelli, Stefano Puntri","doi":"10.1117/12.2015814","DOIUrl":"https://doi.org/10.1117/12.2015814","url":null,"abstract":"The market of the sights for the 5.56 mm assault rifles is dominated by mainly three types of systems: TWS (Thermal Weapon Sight), the Pocket Scope with Weapon Mount and the Clip-on. The latter are designed primarily for special forces and snipers use, while the TWS design is triggered mainly by the DRI (Detection, Recognition, Identification) requirements. The Pocket Scope design is focused on respecting the SWaP (Size, Weight and Power dissipation) requirements. Compared to the TWS systems, for the last two years there was a significant technological growth of the Pocket Scope/Weapon Mount solutions, concentrated on the compression of the overall dimensions. The trend for the assault rifles is the use of small size/light weight (SWaP) IR sights, suitable mainly for close combat operations but also for extraordinary use as pocket scopes – handheld or helmet mounted. The latest developments made by Selex ES S.p.A. are responding precisely to the above-mentioned trend, through a miniaturized Day/Night sight embedding state-of-the art sensors and using standard protocols (USB 2.0, Bluetooth 4.0) for interfacing with PDAs, Wearable computers, etc., while maintaining the “shoot around the corner” capability. Indeed, inside the miniaturized Day/Night sight architecture, a wireless link using Bluetooth technology has been implemented to transmit the video streaming of the rifle sight to an helmet mounted display. The video of the rifle sight is transmitted only to the eye-piece of the soldier shouldering the rifle.","PeriodicalId":338283,"journal":{"name":"Defense, Security, and Sensing","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117257105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}