A comparison of bicubic and biquintic interpolators suitable for real-time hardware implementation
Jonathan Fry, M. Pusateri
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466320
Digital multispectral night vision goggles incorporate both imagers and displays that often have different resolutions. While both thermal imager and micro-display technologies continue to produce larger arrays, thermal imagers still lag well behind displays and can require interpolation by a factor of 2.5 in both the horizontal and vertical directions. In goggle applications, resizing the imagery streams to the size of the display must occur in real time with minimal latency. In addition to low latency, a resizing algorithm must produce acceptable imagery, necessitating an understanding of the resized image's fidelity and spatial smoothness. While both spatial and spatial-frequency domain resizing techniques are available, most spatial-frequency techniques require a complete frame to operate, introducing unacceptable latency. Spatial domain techniques can be implemented on a neighborhood basis, allowing latencies equivalent to several row clock pulses. We have already implemented bilinear resampling in hardware; while bilinear resampling supports moderate up-sizing with reasonable image quality, its deficiencies are apparent at interpolation ratios of two and greater. We are developing hardware implementations of both bicubic and biquintic resizing algorithms. We present the results of a comparison of hardware-ready versions of the bicubic and biquintic algorithms with the existing bilinear implementation. We also discuss the hardware requirements of bicubic and biquintic resizing compared with the existing bilinear resizing.
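As a concrete illustration of neighborhood-based spatial-domain resizing, here is a minimal software sketch of separable bicubic interpolation over a 4x4 neighborhood, using the common Catmull-Rom kernel. The kernel choice is an assumption for illustration only; the abstract does not specify the authors' coefficients or their fixed-point hardware arithmetic.

```python
import numpy as np

def catmull_rom(t):
    """Catmull-Rom cubic kernel weights for a fractional offset t in [0, 1)."""
    return np.array([
        -0.5 * t**3 + t**2 - 0.5 * t,
         1.5 * t**3 - 2.5 * t**2 + 1.0,
        -1.5 * t**3 + 2.0 * t**2 + 0.5 * t,
         0.5 * t**3 - 0.5 * t**2,
    ])

def bicubic_resize(img, out_h, out_w):
    """Separable bicubic resampling over 4x4 neighborhoods."""
    in_h, in_w = img.shape
    out = np.empty((out_h, out_w), dtype=np.float64)
    padded = np.pad(img.astype(np.float64), 2, mode="edge")
    for y in range(out_h):
        sy = y * in_h / out_h
        iy, fy = int(sy), sy - int(sy)
        wy = catmull_rom(fy)
        for x in range(out_w):
            sx = x * in_w / out_w
            ix, fx = int(sx), sx - int(sx)
            wx = catmull_rom(fx)
            # 4x4 source neighborhood centered on the sample position
            block = padded[iy + 1: iy + 5, ix + 1: ix + 5]
            out[y, x] = wy @ block @ wx
    return out
```

In a streaming hardware implementation the 4x4 neighborhood would come from a handful of row buffers, which is what keeps the latency to a few row clocks; biquintic interpolation extends the same structure to a 6x6 neighborhood with degree-5 weights.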
{"title":"A comparison of bicubic and biquintic interpolators suitable for real-time hardware implementation","authors":"Jonathan Fry, M. Pusateri","doi":"10.1109/AIPR.2009.5466320","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466320","url":null,"abstract":"Digital multispectral night vision goggles incorporate both imagers and displays that often have different resolutions. While both thermal imager and micro-display technologies continue to produce larger arrays, thermal imagers still lag well behind displays and can require interpolation by a factor of 2.5 in both horizontal and vertical directions. In goggle applications, resizing the imagery streams to the size of the display must occur in real-time with minimal latency. In addition to low latency, a resizing algorithm must produce acceptable imagery, necessitating an understanding of the resized image fidelity and spatial smoothness. While both spatial and spatial frequency domain resizing techniques are available, most spatial frequency techniques require a complete frame for operation introducing unacceptable latency. Spatial domain techniques can be implemented on a neighborhood basis allowing latencies equivalent to several row clock pulses to be achieved. We have already implemented bilinear re-sampling in hardware and, while bilinear re-sampling supports moderate up-sizes with reasonable image quality, its deficiencies are apparent at interpolation ratios of two and greater. We are developing hardware implementations of both bicubic and biquintic resizing algorithms. We present the results of comparison between hardware ready versions of the bicubic and biquintic algorithms with the existing bilinear. We also discuss the hardware requirements for bicubic and biquintic compared to the existing bilinear resizing.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121338951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation methods for curvilinear feature extraction
Peter Doucette, Ann Martin, Chris Kavanagh, Tim McIntyre, Steven Barton, J. Grodecki, S. Malitz, Matthew Tang, J. Nolting
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466322
The application of quantitative performance evaluation methods can provide useful insights in determining the utility of computer-assisted methods for delineating geographic features from remotely sensed images. Evaluation concepts are demonstrated with road centerlines in particular, but are applicable to similar feature types such as paths, trails, or rivers. The two comparative measures used to differentiate conventional versus computer-assisted delineation are 1) user clock time, and 2) spatial consistency. Our evaluation results with road centerlines demonstrate how such quantitative analyses can be used to determine the utility of computer-assisted methods from both developmental and operational perspectives.
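The abstract does not define the spatial-consistency measure; the sketch below shows one common buffer-based proxy (an assumption, not necessarily the authors' metric) that scores agreement between an extracted centerline and a reference centerline by the fraction of vertices of each falling within a tolerance of the other.

```python
import numpy as np

def coverage(src, ref, tol):
    """Fraction of points in `src` within `tol` of some point in `ref`.
    src, ref: (N, 2) arrays of centerline vertices in map coordinates."""
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) <= tol))

def spatial_consistency(extracted, reference, tol=2.0):
    """Symmetric buffer score; tol is in map units (illustrative default)."""
    completeness = coverage(reference, extracted, tol)  # reference covered
    correctness = coverage(extracted, reference, tol)   # extraction on-road
    return 0.5 * (completeness + correctness)
```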
{"title":"Evaluation methods for curvilinear feature extraction","authors":"Peter Doucette, Ann Martin, Chris Kavanagh, Tim McIntyre, Steven Barton, J. Grodecki, S. Malitz, Matthew Tang, J. Nolting","doi":"10.1109/AIPR.2009.5466322","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466322","url":null,"abstract":"The application of quantitative performance evaluation methods can provide useful insights in determining the utility of computer-assisted methods for delineating geographic features from remotely sensed images. Evaluation concepts are demonstrated with road centerlines in particular, but are applicable to similar feature types such as paths, trails, or rivers. The two comparative measures used to differentiate conventional versus computer-assisted delineation are 1) user clock time, and 2) spatial consistency. Our evaluation results with road centerlines demonstrate how such quantitative analyses can be used to determine the utility of computer-assisted methods from both developmental and operational perspectives.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124012613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advanced hyperspectral detection based on elliptically contoured distribution models and operator feedback
A. Schaum
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466308
In autonomous hyperspectral remote sensing systems, the physical causes of false alarms are not all understood. Some arise from vagaries in sensor performance, especially at non-visible wavelengths. Consequently, many false target declarations are characterized simply as outliers: anomalies conforming to no physical or statistical model. Other false alarms arise from clutter spectra too similar to target spectra. To prevent the recurrence of such difficult errors, deployed systems should allow operator feedback to their signal processing systems. Here we describe how a hyperspectral system using even advanced detection algorithms, based on elliptically contoured distribution models, can be enhanced by allowing it to learn from its mistakes.
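As a sketch of how operator feedback might plug into an elliptically contoured detector, the following hypothetical scheme scores pixels by squared Mahalanobis distance (the Gaussian special case of an elliptically contoured model) and suppresses detections near operator-dismissed spectra. The feedback rule and thresholds are illustrative assumptions; the paper's actual ECD models and update mechanism are not given in the abstract.

```python
import numpy as np

class FeedbackAnomalyDetector:
    """Mahalanobis anomaly detector that suppresses spectra resembling
    operator-confirmed false alarms. A sketch only, not the paper's method."""

    def __init__(self, background):             # background: (N, bands)
        self.mean = background.mean(axis=0)
        self.icov = np.linalg.inv(np.cov(background, rowvar=False))
        self.false_alarms = []                   # operator-dismissed spectra

    def score(self, x):
        d = x - self.mean
        return float(d @ self.icov @ d)          # squared Mahalanobis distance

    def detect(self, x, thresh=50.0, fa_radius=25.0):
        if self.score(x) < thresh:
            return False
        # Suppress detections that resemble known false alarms.
        for fa in self.false_alarms:
            d = x - fa
            if d @ self.icov @ d < fa_radius:
                return False
        return True

    def dismiss(self, x):
        """Operator feedback: record x as a confirmed false alarm."""
        self.false_alarms.append(np.asarray(x, dtype=float))
```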
{"title":"Advanced hyperspectral detection based on elliptically contoured distribution models and operator feedback","authors":"A. Schaum","doi":"10.1109/AIPR.2009.5466308","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466308","url":null,"abstract":"In autonomous hyperspectral remote sensing systems, the physical causes of false alarms are not all understood. Some arise from vagaries in sensor performance, especially in non-visible wavelengths. Consequently, many false target declarations are characterized simply as outliers, anomalies conforming to no physical or statistical models. Other false alarms arise from clutter spectra too similar to target spectra. To eliminate the recurrence of such difficult errors, deployed systems should allow operator feedback to their signal processing systems. Here we describe how a hyperspectral system using even advanced detection algorithms, based on a elliptically contoured distribution models, can be enhanced by allowing it to learn from its mistakes.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131920286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MRI brain image segmentation for spotting tumors using improved mountain clustering approach
N. Verma, Payal Gupta, P. Agrawal, Yan Cui
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466301
This paper presents an MRI (magnetic resonance imaging) brain image segmentation method for spotting tumors, based on an improved mountain clustering technique. The proposed technique is compared with existing techniques such as K-means and fuzzy C-means (FCM) clustering. The performance of all these clustering techniques is compared in terms of cluster entropy as a measure of information, and is also compared visually on the segmentation of various brain tumor MRI images. The cluster entropy is heuristically determined, but is found to be effective in forming correct clusters, as verified by visual assessment.
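For reference, a minimal sketch of the classic mountain method (Yager and Filev) and one plausible reading of the cluster-entropy measure follows. The paper's specific improvements to mountain clustering and its exact entropy definition are not stated in the abstract, so both are assumptions.

```python
import numpy as np

def mountain_clustering(X, n_clusters, alpha=5.0, beta=5.0):
    """Classic mountain method; data points double as candidate centers
    for brevity (the grid-based variant is analogous)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    m = np.exp(-alpha * d2).sum(axis=1)          # mountain heights
    centers = []
    for _ in range(n_clusters):
        k = int(np.argmax(m))
        centers.append(X[k])
        m = m - m[k] * np.exp(-beta * d2[:, k])  # destroy captured mass
    return np.array(centers)

def cluster_entropy(labels):
    """Entropy of the cluster-size distribution, one possible reading of
    the paper's entropy measure."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```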
{"title":"MRI brain image segmentation for spotting tumors using improved mountain clustering approach","authors":"N. Verma, Payal Gupta, P. Agrawal, Yan Cui","doi":"10.1109/AIPR.2009.5466301","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466301","url":null,"abstract":"This paper presents improved mountain clustering technique based MRI (magnetic resonance imaging) brain image segmentation for spotting tumors. The proposed technique is compared with some existing techniques such as K-Means and FCM, clustering. The performance of all these clustering techniques is compared in terms of cluster entropy as a measure of information and also is visually compared for image segmentation of various brain tumor MRI images. The cluster entropy is heuristically determined, but is found to be effective in forming correct clusters as verified by visual assessment.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"173 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132683815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
User guided visualization for target search
J. Irvine
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466318
Automated target cueing (ATC) can assist analysts in searching large volumes of imagery. Performance of most automated systems is less than perfect, requiring an analyst to review the results to dismiss false alarms or confirm correct detections. This paper explores methods for improving the presentation and visualization of the ATC output, enabling more efficient and effective review of the detections flagged by the ATC. The approach relies on the interaction between the user and the ATC results. Confirmation of correct detections and dismissal of false alarms provides information to update the visualization. We present a description of the visualization method and illustrate it with results using panchromatic imagery of vehicles.
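One hypothetical way to fold confirm/dismiss feedback back into the visualization is to re-rank the remaining detections by similarity to reviewed examples, as sketched below. The paper's actual update rule is not described in the abstract, so every detail here (the feature space, the weighting, the exponential falloff) is an illustrative assumption.

```python
import numpy as np

def rerank(features, scores, confirmed, dismissed, w=0.5):
    """Re-rank ATC detections after operator feedback: nudge each
    detection's score toward the label of its nearest reviewed neighbor.
    features: (N, d) detection descriptors; scores: (N,) ATC scores;
    confirmed/dismissed: lists of descriptors the operator has reviewed."""
    adjusted = scores.astype(float)
    for i, f in enumerate(features):
        best, label = np.inf, 0.0
        for g, y in [(g, 1.0) for g in confirmed] + [(g, -1.0) for g in dismissed]:
            d = np.linalg.norm(f - g)
            if d < best:
                best, label = d, y
        if np.isfinite(best):
            adjusted[i] += w * label * np.exp(-best)
    order = np.argsort(-adjusted)                # present highest scores first
    return order, adjusted
```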
{"title":"User guided visualization for target search","authors":"J. Irvine","doi":"10.1109/AIPR.2009.5466318","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466318","url":null,"abstract":"Automated target cueing (ATC) can assist analysts in searching large volumes of imagery. Performance of most automated systems is less than perfect, requiring an analyst to review the results to dismiss false alarms or confirm correct detections. This paper explores methods for improving the presentation and visualization of the ATC output, enabling more efficient and effective review of the detections flagged by the ATC. The approach relies on the interaction between the user and the ATC results. Confirmation of correct detections and dismissal of false alarms provides information to update the visualization. We present a description of the visualization method and illustrate it with results using panchromatic imagery of vehicles.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124760605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biologically inspired motion detection neural network models evolved using genetic algorithms
S. Azary, P. Anderson, R. Gaborski
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466326
In this paper we describe a method to evolve biologically inspired motion detection systems utilizing artificial neural networks (ANNs). Previously, the evolution of neural networks has focused on feed-forward networks or networks with predefined architectures. The purpose of this paper is to present a novel method for evolving neural networks with no predefined architecture to solve various problems, including motion detection models. The neural network models are evolved with genetic algorithms using an encoding that defines a functional network with no restriction on recurrence, activation function types, or the number of nodes that compose the final ANN. The genetic algorithm operates on a population of potential solutions, where each candidate network is represented by a chromosome. The structure of each chromosome in the population is defined by a weight matrix, which allows for efficient simulation of outputs. Each chromosome is evaluated by a fitness function that scores how well the actual output of an ANN compares to the expected output. Crossovers and mutations are applied with specified probabilities between population members to evolve new members of the population. After a number of iterations, a near-optimal network is evolved that solves the problem at hand. The approach has proven sufficient to create biologically realistic motion detection neural network models, with results comparable to those obtained from the standard Reichardt model.
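A compact sketch of the scheme the abstract describes, with the chromosome as an N x N weight matrix over an unrestricted (possibly recurrent) network, might look as follows. The population size, mutation rate, tanh activation, and input/output node assignment are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, IN, OUT = 8, 2, 1                 # total nodes, input nodes, output nodes

def simulate(W, x, steps=5):
    """Run a fully general (possibly recurrent) net: chromosome W is an
    N x N weight matrix; the state updates by tanh of weighted inputs."""
    s = np.zeros(N)
    for _ in range(steps):
        s = np.tanh(W @ s)
        s[:IN] = x                   # clamp input nodes
    return s[-OUT:]                  # read output nodes

def fitness(W, data):
    """Negative mean-squared error against the expected outputs.
    data: list of (input_vector, expected_output) pairs."""
    return -np.mean([(simulate(W, x) - y) ** 2 for x, y in data])

def evolve(data, pop_size=40, gens=100, p_mut=0.1):
    pop = [rng.normal(0, 1, (N, N)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda W: -fitness(W, data))   # best first
        elite = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.choice(len(elite), 2, replace=False)
            mask = rng.random((N, N)) < 0.5          # uniform crossover
            child = np.where(mask, elite[a], elite[b])
            child = child + rng.normal(0, 0.2, (N, N)) * (rng.random((N, N)) < p_mut)
            children.append(child)
        pop = elite + children
    return pop[0]
```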
{"title":"Biologically inspired motion detection neural network models evolved using genetic algorithms","authors":"S. Azary, P. Anderson, R. Gaborski","doi":"10.1109/AIPR.2009.5466326","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466326","url":null,"abstract":"In this paper we describe a method to evolve biologically inspired motion detection systems utilizing artificial neural networks (ANN's). Previously, the evolution of neural networks has focused on feed-forward neural networks or networks with predefined architectures. The purpose of this paper is to present a novel method for evolving neural networks with no predefined architectures to solve various problems including motion detection models. The neural network models are evolved with genetic algorithms using an encoding that defines a functional network with no restriction on recurrence, activation function types, or the number of nodes that compose the final ANN. The genetic algorithm operates on a population of potential solutions where each potential network is represented in a chromosome. The structure of each chromosome in the population is defined with a weight matrix which allows for efficient simulation of outputs. Each chromosome is evaluated by a fitness function that scores how well the actual output of an ANN compares to the expected output. Crossovers and mutations are made with specified probabilities between population members to evolve new members of the population. After a number of iterations a near optimal network is evolved that solves the problem at hand. The approach has proven to be sufficient to create biologically realistic motion detection neural network models with results that are comparable to results obtained from the standard Reichardt model.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131292015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Face recognition using a hybrid model
Yuheng Wang, P. Anderson, R. Gaborski
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466296
This paper introduces a hybrid face recognition model that combines biologically inspired features and Local Binary Pattern (LBP) features. The structure of the model is based mainly on the human visual ventral pathway. Previously, object-centered models focused on extracting a global, view-invariant representation of faces (I. Biederman, 1987), while feed-forward view-based models (the HMAX model of Riesenhuber and Poggio, 1999) extract local features of faces by simulating the responses of neurons in the human visual system. In this paper we first review the current main face recognition algorithms, the Local Binary Pattern model and the R&P model, followed by a detailed description of their implementation and their advantages in overcoming intra-class variance. Results from our model are compared to the original Riesenhuber and Poggio model and the Local Binary Pattern model (T. Ahonen et al., 2005). The paper then focuses on our hybrid biological model, which takes advantage of both structural information and biological features. Our model shows improved recognition rates and increased tolerance to intra-personal view differences.
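For the Local Binary Pattern side of such a hybrid, a minimal sketch of the standard 8-neighbor LBP operator with per-region histograms (in the spirit of Ahonen et al.; the paper's exact LBP variant and grid size are assumptions) looks like this:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern codes over a grayscale image."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                              # center pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy: g.shape[0] - 1 + dy, 1 + dx: g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.int32) << bit)
    return code

def lbp_descriptor(gray, grid=4):
    """Concatenate per-cell LBP histograms, as in region-based LBP matching."""
    code = lbp_image(gray)
    h, w = code.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = code[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(cell.size, 1))
    return np.concatenate(feats)
```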
{"title":"Face recognition using a hybrid model","authors":"Yuheng Wang, P. Anderson, R. Gaborski","doi":"10.1109/AIPR.2009.5466296","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466296","url":null,"abstract":"This paper introduces a hybrid face recognition model that combines biologically inspired features and Local Binary Features. The structure of the model is mainly based on the human visual ventral pathway. Previously, object-centered models focus on extracting global view-invariant representation of faces (I. Biederman, 1987) while feed-forward view-based models (HMAX model by Riesenhuber and Poggio, 1999) extract local features of faces by simulating responses of neurons in the human visual system. In this paper we first review the current main face recognition algorithms: Local Binary Pattern model and R&P model. This is followed by a detailed description of their implementation and advantages in overcoming intra-class variance. Results from our model are compared to the original Riesenhuber and Poggio model and Local Binary Pattern model (T. Ahonen et al, 2005). Then the paper will focus on our hybrid biological model which takes advantages of both structural information and biological features. Our model shows improved recognition rates and increased tolerance to intra-personal view differences.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126095860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust object recognition using a cascade of geometric consistency filters
Yuetian Xu, R. Madison
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466299
Bag-of-words is a popular and successful approach to object recognition. Its performance is limited by its disregard of relative geometry information, a limitation that is particularly stark when there is significant image noise. We propose a “bag-of-phrases” model that extends bag-of-words by enforcing geometric consistency through the application of a “geometric grammar” in a filter cascade. Experimental results on a computer-generated dataset show increased robustness to clutter and noise, demonstrated by a more than two orders of magnitude reduction in false positives compared with bag-of-words.
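The “geometric grammar” rules are not spelled out in the abstract; the sketch below shows one plausible cascade stage (an assumption) in which a keypoint match survives only if enough other matches preserve its pairwise distances, here under a rough equal-scale assumption between the two images.

```python
import numpy as np

def geometric_filter(pts_a, pts_b, ratio_tol=0.2, min_support=3):
    """One stage of a geometric-consistency cascade. pts_a[i] and pts_b[i]
    are the two image positions of putative match i; match i survives if
    at least min_support other matches preserve its pairwise distances."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    n = len(pts_a)
    keep = []
    for i in range(n):
        support = 0
        for j in range(n):
            if i == j:
                continue
            da = np.linalg.norm(pts_a[i] - pts_a[j])
            db = np.linalg.norm(pts_b[i] - pts_b[j])
            if da > 0 and db > 0 and abs(da / db - 1.0) < ratio_tol:
                support += 1
        if support >= min_support:
            keep.append(i)
    return keep
```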
{"title":"Robust object recognition using a cascade of geometric consistency filters","authors":"Yuetian Xu, R. Madison","doi":"10.1109/AIPR.2009.5466299","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466299","url":null,"abstract":"Bag-of-words is a popular and successful approach to performing object recognition. Its performance is limited by not considering relative geometry information. This limitation is particularly stark when there is significant image noise. We propose a “bag-of-phrases” model which extends bag-of-words by enforcing geometric consistency through application of a “geometric grammar” in a filter cascade. Experimental results on a computer generated dataset show increased robustness to clutter and noise as demonstrated by more than two orders of magnitude reduction in false positives compared with bag-of-words.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133796074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data driven approach to estimating fire danger from satellite images and weather information
N. Markuzon, S. Kolitz
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466309
Wildfires cause extensive damage to nature and to human development, and substantial funds are spent preparing for and fighting them. This work develops a data-driven approach to modeling the probabilistic risk that a currently burning fire will become large and dangerous. We base our model on observations of the fire, the weather, and the surroundings extracted from remote sensing satellites. The data-driven models achieved good accuracy in predicting fire danger over the coming day or two. We intend to use the predictions in planning algorithms, e.g., flight plans for unmanned fire surveillance aircraft, to fight fires in a more efficient and timely manner.
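A minimal sketch of such a data-driven risk model follows. The feature names and the random forest classifier are stand-ins chosen for illustration; the abstract does not name the authors' features or learning algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature vector per active fire: satellite- and weather-derived
# values (names are illustrative, not the paper's feature set).
FEATURES = ["fire_radiative_power", "burned_area_km2", "ndvi",
            "wind_speed_ms", "rel_humidity_pct", "air_temp_c",
            "days_since_rain", "fuel_moisture_pct"]

def train_fire_danger_model(X, y):
    """X: (n_fires, n_features); y: 1 if the fire became large/dangerous
    within the next day or two, else 0. The fitted model's predict_proba
    gives the probabilistic risk the abstract describes."""
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model

# Usage sketch with synthetic stand-in data:
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 1).astype(int)
risk = train_fire_danger_model(X, y).predict_proba(X[:5])[:, 1]
```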
{"title":"Data driven approach to estimating fire danger from satellite images and weather information","authors":"N. Markuzon, S. Kolitz","doi":"10.1109/AIPR.2009.5466309","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466309","url":null,"abstract":"Wildfires cause extensive damage to nature and human developments. Substantial funds are spent preparing for and fighting them. This work develops a data driven approach to modeling the probabilistic risk of a currently burning fire becoming large and dangerous. We based our model upon observations of fire, weather and surrounding extracted from remote satellites. Data driven models reached good recognition accuracy in predicting fire danger in the coming day or two. We intend using the predictions in planning algorithms, e.g. flight plans for unmanned fire surveillance aircraft, to fight the fires in a more efficient and timely manner.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129190678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biologically-inspired visible and infrared camera technology development
E. Williams, M. Pusateri, J. Scott
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466298
Visible-band and infrared (IR) band camera and vision system development has been inspired by human and animal vision systems. This paper discusses the development of Electro-Optical/Infrared (EO/IR) spectrum cameras: the front-end optics; the detector, or photon-to-electron converter; preprocessing such as non-uniformity correction, automatic gain control, and foveated processing (analogous to the foveal vision of the human eye); the gimbal system (the human or animal eyeball and head motion); and the analog and digital data paths (the optic nerve in humans). Computer vision algorithms (the brain's vision processing in humans and animals) are not discussed in this paper. Integrated Design Services in the College of Engineering at Penn State University has been developing EO/IR camera and sensor-based computer vision systems for several years and has more than twenty years of experience developing stabilized imaging sensor platforms; we draw on this imaging system development expertise to describe how human and animal vision systems inspired the design and development of computer-based vision systems. The paper presents block diagrams of both the human eye and a typical EO/IR camera and compares the two imaging systems.
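Two of the preprocessing stages mentioned, non-uniformity correction and automatic gain control, are concrete enough to sketch. The following shows a generic two-point NUC and a percentile-based AGC; these are common textbook formulations, not necessarily the authors' implementations.

```python
import numpy as np

def two_point_nuc(raw, dark, flat):
    """Two-point non-uniformity correction: per-pixel gain and offset are
    computed from a dark frame and a uniform (flat) calibration frame."""
    gain = (flat.mean() - dark.mean()) / np.maximum(flat - dark, 1e-6)
    return (raw - dark) * gain

def auto_gain(img, lo_pct=1.0, hi_pct=99.0):
    """Percentile-based automatic gain control: stretch the central mass
    of the histogram to the 8-bit display range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img - lo) / max(hi - lo, 1e-6)
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```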
{"title":"Biologically-inspired visible and infrared camera technology development","authors":"E. Williams, M. Pusateri, J. Scott","doi":"10.1109/AIPR.2009.5466298","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466298","url":null,"abstract":"Visible band and Infrared (IR) band camera and vision system development has been inspired by the human and animal vision systems. This paper will discuss the development of the Electro-Optical/Infrared (EO/IR) spectrum cameras from the front end optics, the detector or photon to electron convertor, preprocessing such as non-uniformity correction, automatic gain control, foveal vision processing done by the human eye, the gimbal system (human or animal eye ball and head motion), and the analog and digital paths of the data (optic nerve in humans). The computer vision algorithms (human or animal brain vision processing) will not be discussed in this paper. The Integrated Design Services in the College of Engineering at Penn State University has been developing EO/IR camera and sensor based computer vision systems for several years and combined with more than twenty years of developing imaging sensor stabilized platforms will use this imaging system development expertise to describe how the human and animal vision systems inspired the design and development of the computer based vision system. This paper will illustrate a block diagram of both the human eye and a typical EO/IR camera while comparing the two imaging systems.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127364078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}