Quasi-invariants for recognition of articulated and non-standard objects in SAR images
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781098
G. Jones, B. Bhanu
Using SAR scattering center locations and magnitudes as features, invariances with articulation (i.e., turret rotation for the ZSU 23/4 gun and T72 tank), with configuration variants (e.g., fuel barrels, searchlights), and with a change in depression angle are shown for real SAR images from the MSTAR public data. This location and magnitude quasi-invariance forms the basis for an innovative SAR recognition engine that successfully identifies real articulated and non-standard-configuration vehicles using non-articulated, standard recognition models. Identification performance is reported as confusion matrices and ROC curves for articulated objects, for configuration variants, and for a small change in depression angle.
{"title":"Quasi-invariants for recognition of articulated and non-standard objects in SAR images","authors":"G. Jones, B. Bhanu","doi":"10.1109/CVBVS.1999.781098","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781098","url":null,"abstract":"Using SAR scattering center locations and magnitudes as features, invariances with articulation (i.e., turret rotation for the ZSU 23/4 gun and T72 tank), with configuration variants (e.g. fuel barrels, searchlights, etc.) and with a depression angle change are shown for real SAR images obtained from the MSTAR public data. This location and magnitude quasi-invariance forms a basis for an innovative SAR recognition engine that successfully identifies real articulated and non-standard configuration vehicles based on non-articulated, standard recognition models. Identification performance results are given as confusion matrices and ROC curves for articulated objects, for configuration variants, and for a small change in depression angle.","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124140336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moving target detection in infrared imagery using a regularized CDWT optical flow
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781090
G. Castellano, J. Boyce, M. Sandler
A modified version of the complex discrete wavelet transform (CDWT) optical flow algorithm developed by Magarey et al. is applied to the problem of moving target detection in noisy infrared image sequences. The optical flow algorithm is a hierarchical, phase-based approach. The modified version adds an explicit regularization of the motion field, which is of fundamental importance for this application. The data consist of infrared imagery in which pixel-size targets move against strongly cluttered backgrounds. To detect the targets, frames from the sequence are compared by subtracting one from another. However, the motion of the sensor generates an apparent motion of the background across frames, so differences between background regions dominate the residue images. To avoid this effect, the estimated motion field between frames is used to register the background spatially, so that only objects corresponding to potential targets appear in the residue images. Results on three infrared image sequences show that the target SNR is higher when the estimated motion field for the whole scene is explicitly regularized.
{"title":"Moving target detection in infrared imagery using a regularized CDWT optical flow","authors":"G. Castellano, J. Boyce, M. Sandler","doi":"10.1109/CVBVS.1999.781090","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781090","url":null,"abstract":"A modified version of the CDWT optical flow algorithm developed by Magarey et al. is applied to the problem of moving target detection in noisy infrared image sequences. The optical flow algorithm is a hierarchical, phase-based approach. The modified version includes an explicit regularization of the motion field, which is of fundamental importance for the application in question. The data used consists of infrared imagery where pixel-size targets move in strongly cluttered backgrounds. To detect the targets different frames from the sequence are compared by subtraction of one from another. However, the motion of the sensor generates an apparent motion of the background across frames, and, as a consequence, the differences between background regions dominate the residue images. To avoid this effect, the estimated motion field between the frames is used to register the background spatially, so that only objects corresponding to potential targets appear in the residue images. Results of applying the method on 3 infrared image sequences are presented, which show that the target SNR is higher when the estimated motion field for the whole scene is explicitly regularized.","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114582429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A ground truth tool for Synthetic Aperture Radar (SAR) imagery
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781097
I. Pavlidis, Douglas P. Perrin, N. Papanikolopoulos, W. Au, S. Sawtelle
Computer vision algorithms have made great strides and are now good enough to be useful in a number of civilian and military applications. Algorithm advancement in Automatic Target Recognition (ATR), in particular, has reached a critical point: state-of-the-art ATRs can deliver robust performance for certain operational scenarios. As computer vision technology matures and algorithms enter the civilian and military marketplace as products, the lack of a formal testing theory and of testing tools becomes obvious. In this paper we present the design and implementation of a Ground Truth Tool (GTT) for Synthetic Aperture Radar (SAR) imagery. The tool serves as part of an evaluation system for SAR ATRs. It features a semi-automatic method for delineating image objects that draws upon the theory of deformable models. In comparison with other deformable-model implementations, our version is stable and is supported by an extensive Graphical User Interface (GUI). Preliminary experiments show that the system can substantially increase the productivity and accuracy of the image analyst (IA).
{"title":"A ground truth tool for Synthetic Aperture Radar (SAR) imagery","authors":"I. Pavlidis, Douglas P. Perrin, N. Papanikolopoulos, W. Au, S. Sawtelle","doi":"10.1109/CVBVS.1999.781097","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781097","url":null,"abstract":"The performance of computer vision algorithms has made great strides and it is good enough to be useful in a number of civilian and military applications. Algorithm advancement in Automatic Target Recognition (ATR) in particular; has reached a critical point. State-of-the-art ATRs are capable of delivering robust performance for certain operational scenarios. As Computer Vision technology matures and algorithms enter the civilian and military marketplace as products, the lack of a formal testing theory and tools become obvious. In this paper we present the design and implementation of a Ground Truth Tool (GTT) for Synthetic Aperture Radar (SAR) imagery. The tool serves as part of an evaluation system for SAR ATRs. It features a semi-automatic method for delineating image objects that draws upon the theory of deformable models. In comparison with other deformable model implementations, our version is stable and is supported by an extensive Graphical User Interface (GUI). Preliminary experimental tests show that the system can substantially increase the productivity and accuracy of the image analyst (IA).","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125713812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hyper-spectral image processing applications on the SIMD Pixel Processor for the digital battlefield
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781102
S. Chai, Antonio Gentile, W. Lugo-Beauchamp, J. Cruz-Rivera, D. S. Wills
Future military scenarios will rely on advanced imaging sensor technology beyond the visible spectrum to gain total battlefield awareness. Real-time processing of these data streams demands tremendous computational throughput and I/O bandwidth. This paper presents three applications for hyper-spectral data streams: vector quantization, region autofocus, and K-means clustering, all on the SIMD Pixel Processor (SIMPil). In SIMPil, an image sensor array (focal plane) is integrated on top of a SIMD computing layer, directly coupling sensors to processors; this alleviates I/O bandwidth bottlenecks while maintaining low power consumption and portability. Simulation results showing sustained throughputs of 500-1500 Gops/sec support real-time performance and make the case for focal-plane processing on SIMPil.
{"title":"Hyper-spectral image processing applications on the SIMD Pixel Processor for the digital battlefield","authors":"S. Chai, Antonio Gentile, W. Lugo-Beauchamp, J. Cruz-Rivera, D. S. Wills","doi":"10.1109/CVBVS.1999.781102","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781102","url":null,"abstract":"Future military scenarios will rely on advanced imaging sensor technology beyond the visible spectrum to gain total battlefield awareness. Real-time processing of these data streams requires tremendous computational workloads and I/O throughputs. This paper presents three applications for hyper-spectral data streams, vector quantization, region autofocus, and K-means clustering, on the SIMD Pixel Processor (SIMPil). In SIMPil, an image sensor array (focal plane) is integrated on top of a SIMD computing layer to provide direct coupling between sensors and processors, alleviating I/O bandwidth bottlenecks while maintaining low power consumption and portability. Simulation results with sustained operation throughputs of 500-1500 Gops/sec support real-time performance and promote focal plane processing on SIMPil.","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128431424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A near-infrared fusion scheme for automatic detection of vehicle passengers
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781093
I. Pavlidis, P. Symosek, B. Fritz, N. Papanikolopoulos
We undertook a study to determine whether the automatic detection and counting of vehicle passengers is feasible. An automated passenger counting system would greatly facilitate the operation of freeway lanes reserved for carpools (high-occupancy vehicle, or HOV, lanes). In this paper we report our findings regarding the sensor phenomenology and arrangement appropriate for the task. We propose a novel system based on the fusion of near-infrared imaging signals and demonstrate its adequacy with theoretical and experimental arguments.
{"title":"A near-infrared fusion scheme for automatic detection of vehicle passengers","authors":"I. Pavlidis, P. Symosek, B. Fritz, N. Papanikolopoulos","doi":"10.1109/CVBVS.1999.781093","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781093","url":null,"abstract":"We undertook a study to determine if the automatic detection and counting of vehicle passengers is feasible. An automated passenger counting system would greatly facilitate the operation of freeway lanes reserved for car-pools (HOV lanes). In the present paper we report our findings regarding the appropriate sensor phenomenology and arrangement for the task. We propose a novel system based on fusion of near-infrared imaging signals and we demonstrate its adequacy with theoretical and experimental arguments.","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132852150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LADAR scene description using fuzzy morphology and rules
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781101
J. Keller, P. Gader, Xiaomei Wang
This paper presents a method for automatically generating descriptions of scenes represented by digital images acquired using laser radar (LADAR). A method for matching the scenes to linguistic descriptions is also presented. Both methods rely on fuzzy spatial relations. Primitive spatial relations between objects are computed using fuzzy mathematical morphology and compared to a previous method based on training a neural network to learn human preferences. For each pair of objects in a scene, the primitive spatial relations are combined into complex spatial relations using a fuzzy rule base. A scene description is generated using the highest confidence rule outputs. Scene matching is performed using the outputs of the rules that correspond to the linguistic description. The results show that seemingly significant differences in spatial relationship definitions have little impact on system performance and that reasonable match scores and descriptions can be generated from the fuzzy system.
{"title":"LADAR scene description using fuzzy morphology and rules","authors":"J. Keller, P. Gader, Xiaomei Wang","doi":"10.1109/CVBVS.1999.781101","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781101","url":null,"abstract":"This paper presents a method for automatically generating descriptions of scenes represented by digital images acquired using laser radar (LADAR). A method for matching the scenes to linguistic descriptions is also presented. Both methods rely on fuzzy spatial relations. Primitive spatial relations between objects are computed using fuzzy mathematical morphology and compared to a previous method based on training a neural network to learn human preferences. For each pair of objects in a scene, the primitive spatial relations are combined into complex spatial relations using a fuzzy rule base. A scene description is generated using the highest confidence rule outputs. Scene matching is performed using the outputs of the rules that correspond to the linguistic description. The results show that seemingly significant differences in spatial relationship definitions have little impact on system performance and that reasonable match scores and descriptions can be generated from the fuzzy system.","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"16 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120857628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A machine vision system using a laser radar applied to robotic fruit harvesting
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781100
A. Jiménez, R. Ceres, J. Pons
This paper describes a new laser-based computer vision system for automatic fruit recognition. The most relevant vision studies for fruit harvesting are reviewed. Our system is based on an infrared laser range-finder sensor that generates range and reflectance images, and it is designed to detect spherical objects in unstructured environments. A dedicated image restoration technique is defined and applied to improve image quality. The image analysis algorithms integrate both range and reflectance information to generate four characteristic primitives that evidence the presence of spherical objects. This machine vision system, which outputs the 3-D location, radius, and surface reflectivity of each spherical object, has been applied to the AGRIBOT orange fruit-harvesting robot. Test results indicate high correct-detection rates, few false alarms, and robust behavior.
{"title":"A machine vision system using a laser radar applied to robotic fruit harvesting","authors":"A. Jiménez, R. Ceres, J. Pons","doi":"10.1109/CVBVS.1999.781100","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781100","url":null,"abstract":"This paper describes a new laser-based computer vision system used for automatic fruit recognition. Most relevant vision studies for fruit harvesting are reviewed. Our system is based on an infrared laser-range finder sensor generating range and reflectance images and it is designed to detect spherical objects in non-structured environments. A special image restoration technique is defined and applied to improve image quality. Image analysis algorithms integrate both range and reflectance information to generate four characteristic primitives evidencing the presence of spherical objects. This machine vision system, which generates the 3-D location, radio and surface reflectivity of each spherical object has been applied to the AGRIBOT orange fruit harvester robot. Test results indicate good correct detection rates, unlikely false alarms and a robust behavior.","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122180644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting moving objects in airborne forward looking infra-red sequences
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781089
A. Strehl, J. Aggarwal
In this paper we propose a system that detects independently moving objects (IMOs) in forward looking infra-red (FLIR) image sequences taken from an airborne, moving platform. Ego-motion effects are removed through a robust multi-scale affine image registration process. Consequently, areas with residual motion indicate object activity. These areas are detected, refined and selected using a Bayes' classifier. The remaining regions are clustered into pairs. Each pair represents an object's front and rear end. Using motion and scene knowledge we estimate object pose and establish a region-of-interest (ROI) for each pair. Edge elements within each ROI are used to segment the convex cover containing the IMO. We show detailed results on real, complex, cluttered and noisy sequences. Moreover, we outline the integration of our robust system into a comprehensive automatic target recognition (ATR) and action classification system.
{"title":"Detecting moving objects in airborne forward looking infra-red sequences","authors":"A. Strehl, J. Aggarwal","doi":"10.1109/CVBVS.1999.781089","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781089","url":null,"abstract":"In this paper we propose a system that detects independently moving objects (IMOs) in forward looking infra-red (FLIR) image sequences taken from an airborne, moving platform. Ego-motion effects are removed through a robust multi-scale affine image registration process. Consequently, areas with residual motion indicate object activity. These areas are detected, refined and selected using a Bayes' classifier. The remaining regions are clustered into pairs. Each pair represents an object's front and rear end. Using motion and scene knowledge we estimate object pose and establish a region-of-interest (ROI) for each pair. Edge elements within each ROI are used to segment the convex cover containing the IMO. We show detailed results on real, complex, cluttered and noisy sequences. Moreover, we outline the integration of our robust system into a comprehensive automatic target recognition (ATR) and action classification system.","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123769261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morphological shared-weight neural networks: a tool for automatic target recognition beyond the visible spectrum
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781099
M. A. Khabou, P. Gader, J. Keller
Morphological shared-weight neural networks (MSNNs) combine the feature extraction capability of mathematical morphology with the function mapping capability of neural networks. This provides a trainable mechanism for translation-invariant object detection using a variety of imaging sensors, including TV, forward-looking infrared (FLIR), and synthetic aperture radar (SAR). We provide an overview of previous results and present new results with laser radar (LADAR). We report three sets of experiments: in the first, the MSNN detects different types of targets simultaneously; in the second, it detects only a particular type of target; in the third, we test a novel scenario, training the MSNN to recognize a particular type of target from very few examples. The first set of experiments achieved a detection rate of 86% with a reasonable number of false alarms; the second and third sets achieved detection rates close to 100% with very few false alarms.
{"title":"Morphological shared-weight neural networks: a tool for automatic target recognition beyond the visible spectrum","authors":"M. A. Khabou, P. Gader, J. Keller","doi":"10.1109/CVBVS.1999.781099","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781099","url":null,"abstract":"Morphological shared-weight neural networks (MSNN) combine the feature extraction capability of mathematical morphology with the function mapping capability of neural networks. This provides a trainable mechanism for translation invariant object detection using a variety of imaging sensors, including TV, forward-looking infrared (FLIR) and synthetic aperture radar (SAR). We provide an overview of previous results and new results with laser radar (LADAR). We present three sets of experiments. In the first set of experiments we use the MSNN to detect different types of targets simultaneously. In the second set we use the MSNN to detect only a particular type of target. In the third set we test a novel scenario: we train the MSNN to recognize a particular type of target using very few examples. A detection rate of 86% with a reasonable number of false alarms was achieved in the first set of experiments and a detection rate of close to 100% with very few false alarms was achieved in the second and third sets of experiments.","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128504251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Passive night vision sensor comparison for unmanned ground vehicle stereo vision navigation
Pub Date: 1999-06-21 | DOI: 10.1109/CVBVS.1999.781095
K. Owens, L. Matthies
One goal of the "Demo III" unmanned ground vehicle program is to enable autonomous nighttime navigation at speeds of up to 10 m.p.h. To perform obstacle detection at night with stereo vision will require night vision cameras that produce adequate image quality for the driving speeds, vehicle dynamics, obstacle sizes, and scene conditions that will be encountered. This paper analyzes the suitability of four classes of night vision cameras (3-5 /spl mu/m cooled FLIR, 8-12 /spl mu/m cooled FLIR, 8-12 /spl mu/m uncooled FLIR, and image intensifiers) for night stereo vision, using criteria based on stereo matching quality, image signal to noise ratio, motion blur and synchronization capability. We find that only cooled FLIRs will enable stereo vision performance that meets the goals of the Demo III program for nighttime autonomous mobility.
{"title":"Passive night vision sensor comparison for unmanned ground vehicle stereo vision navigation","authors":"K. Owens, L. Matthies","doi":"10.1109/CVBVS.1999.781095","DOIUrl":"https://doi.org/10.1109/CVBVS.1999.781095","url":null,"abstract":"One goal of the \"Demo III\" unmanned ground vehicle program is to enable autonomous nighttime navigation at speeds of up to 10 m.p.h. To perform obstacle detection at night with stereo vision will require night vision cameras that produce adequate image quality for the driving speeds, vehicle dynamics, obstacle sizes, and scene conditions that will be encountered. This paper analyzes the suitability of four classes of night vision cameras (3-5 /spl mu/m cooled FLIR, 8-12 /spl mu/m cooled FLIR, 8-12 /spl mu/m uncooled FLIR, and image intensifiers) for night stereo vision, using criteria based on stereo matching quality, image signal to noise ratio, motion blur and synchronization capability. We find that only cooled FLIRs will enable stereo vision performance that meets the goals of the Demo III program for nighttime autonomous mobility.","PeriodicalId":394469,"journal":{"name":"Proceedings IEEE Workshop on Computer Vision Beyond the Visible Spectrum: Methods and Applications (CVBVS'99)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1999-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117182197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}