33rd Applied Imagery Pattern Recognition Workshop (AIPR'04): Latest Publications
Image primitive signatures
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.28
J. Kinser
Image signatures are generated by comparing the segments contained within an image to a database of segments collected over a large variety of images. It is impossible to retain all of the segments from all of the images, so the segments are clustered; each cluster becomes an image primitive, as it contains a unique set of similar segments. The size of the image signature is NK, where N is the number of segments and K is the number of clusters. These numbers are significantly smaller than the dimensions of the image, and so a signature is a condensed representation of the contents of the image.
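The abstract gives no formulas beyond the N×K signature size, but the construction can be sketched under that reading: an N×K table of similarities between the image's segments and the clustered primitives. The similarity measure and the toy feature vectors below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def image_signature(segment_features, centroids):
    """N x K signature: similarity of each of the N image segments to
    each of the K clustered primitives. The Gaussian-of-distance
    similarity is an assumption; the paper only states the size, N*K."""
    d = np.linalg.norm(segment_features[:, None, :] - centroids[None, :, :], axis=2)
    return np.exp(-d ** 2)

# Toy example: 4 segments described by 2-D feature vectors, 2 primitives.
segments = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
primitives = np.array([[0.0, 0.0], [1.0, 1.0]])
sig = image_signature(segments, primitives)   # shape (4, 2), far smaller than an image
```

Each row of `sig` peaks at the primitive its segment most resembles, which is what makes the table usable as a condensed content descriptor.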
Citations: 3
Robust detection and recognition of buildings in urban environments from LADAR data
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.40
R. Madhavan, T. Hong
Successful unmanned ground vehicle (UGV) navigation in urban areas requires that the vehicle cope with Global Positioning System (GPS) outages and/or unreliable position estimates due to multipathing. At the National Institute of Standards and Technology (NIST), we are developing registration algorithms that use LADAR (LAser Detection And Ranging) data to cope with such scenarios. In this paper, we present a building detection and recognition (BDR) algorithm that uses LADAR range images acquired from UGVs, aimed at reliable and efficient registration. We verify the proposed algorithms using field data obtained from a Riegl LADAR range sensor mounted on a UGV operating in a variety of unknown urban environments. The presented results show the robustness and efficacy of the BDR algorithm.
Citations: 13
A computational framework for real-time detection and recognition of large number of classes
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.1
Li Tao, V. Asari
Inspired by recent advances in real-time vision for certain applications, we propose a framework for developing and implementing systems that are capable of detecting and recognizing a large number of objects in real time on a desktop workstation equipped with field programmable gate array (FPGA) devices. To avoid explicit segmentation, detection and recognition are performed by scanning through local windows of input scenes at multiple scales. This is achieved by using a new feature family (termed topological local spectral histogram (ToLoSH) features, consisting of histograms of local regions of filtered images) and a lookup table decision tree (i.e. a decision tree where each node is implemented as a lookup table) as the classifier, reducing the average time per local window while achieving high accuracy. We show through analysis and empirical studies that ToLoSH features are effective at discriminating a large number of object classes and can be computed using only three instructions. Given the choice of the ToLoSH feature family and lookup table decision tree classifiers, the problem of real-time scene interpretation becomes a joint optimization problem of learning an optimal classifier and the associated optimal ToLoSH features. To show the feasibility of the proposed framework, we have constructed a lookup table decision tree for a dataset consisting of textures, faces, and objects. We argue that the proposed framework may reconcile some of the fundamental issues in visual recognition modeling.
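The paper does not define the ToLoSH computation precisely; the sketch below shows a generic local spectral histogram (histograms of filter responses over a window, concatenated), which is the base construction the feature family extends. The filter choices, bin count, and function names are hypothetical.

```python
import numpy as np

def valid_convolve(img, kern):
    """Plain 2-D 'valid' correlation (equal to convolution for the
    symmetric kernels used below)."""
    kh, kw = kern.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def spectral_histogram(window, filters, bins=8):
    """Concatenated, normalized histograms of each filtered window:
    one plausible reading of a (To)LoSH-style feature."""
    feats = []
    for f in filters:
        resp = valid_convolve(window, f)
        hist, _ = np.histogram(resp, bins=bins)
        feats.append(hist / resp.size)   # each sub-histogram sums to 1
    return np.concatenate(feats)

rng = np.random.default_rng(1)
window = rng.random((16, 16))
filters = [np.array([[1.0]]),                                     # intensity
           np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)]   # Laplacian
feat = spectral_histogram(window, filters, bins=8)   # length 2 * 8 = 16
```

Because the feature is just binned counts of filter responses, a hardware or lookup-table implementation only has to filter, bin, and increment, which is consistent with the abstract's few-instruction claim.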
Citations: 2
Adaptive road detection through continuous environment learning
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.9
Mike Foedisch, A. Takeuchi
The Intelligent Systems Division of the National Institute of Standards and Technology has been engaged for several years in developing real-time systems for autonomous driving. A road detection program is an essential part of the project. Previously, we developed an adaptive road detection system based on color histograms using a neural network. This, however, still required human involvement during the initialization step. As a continuation of the project, we have expanded the system so that it can adapt to new environments without any human intervention. The system updates the neural network continuously based on the road image structure. To reduce the possibility of misclassifying road and non-road regions, we have implemented an adaptive road feature acquisition method.
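As a rough illustration of continuous environment learning, the sketch below pairs color-histogram features with a classifier that is updated on every new observation. The logistic model stands in for the paper's (unspecified) neural network, and the labels fed to `update` are assumed to come from some self-supervision cue such as a presumed-road region in front of the vehicle.

```python
import numpy as np

def color_histogram(pixels, bins=4):
    """Joint RGB histogram, flattened and normalized (pixels: N x 3, 0-255)."""
    idx = (pixels // (256 // bins)).astype(int)           # per-channel bin index
    flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
    h = np.bincount(flat, minlength=bins ** 3).astype(float)
    return h / h.sum()

class OnlineRoadClassifier:
    """Logistic regression updated continuously; a stand-in for the
    paper's neural network, whose architecture is not given."""
    def __init__(self, dim, lr=0.5):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def update(self, x, y):
        # One SGD step on the logistic loss: the "continuous learning".
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err

road = color_histogram(np.full((50, 3), 128))            # gray road pixels
grass = color_histogram(np.tile([0, 200, 0], (50, 1)))   # green non-road pixels
clf = OnlineRoadClassifier(dim=road.size)
for _ in range(200):        # each frame contributes a small update
    clf.update(road, 1.0)
    clf.update(grass, 0.0)
```

After a stream of frames the classifier separates road from non-road histograms without any manual initialization, which is the behavior the abstract describes.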
Citations: 31
A simple OCR method from strong perspective view
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.8
Mi-Ae Ko, Young-Mo Kim
Among the many practical factors that must be considered for a reliable character recognition system in 3D space, the visual angle of the camera can play a crucial role: different viewpoints in 3D space produce distorted license plate images. For this reason, a method is developed to segment and recognize the characters of license plates seen under varying perspective views. The method for segmenting license plate characters on a moving vehicle in an actual outdoor environment is based upon object contours, and the proposed recognition method is constructed from a feature-based approach parameterized by affine-invariant parameters and affine-invariant features. Experimental results show that the proposed method is simple and robust, particularly when objects are heavily distorted by a strong perspective view.
Citations: 22
Real time texture classification using field programmable gate arrays
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.38
Geoffrey Wall, Faizal Iqbal, J. Isaacs, Xiuwen Liu, S. Foo
In this paper we present a novel hardware/software approach to implementing a highly accurate texture classification algorithm. We propose the use of field programmable gate arrays (FPGAs) to efficiently compute, in parallel, the multiple convolutions required by the spectral histogram representation we employ. The combination of custom hardware and software yields a classifier that achieves over 99% accuracy at a rate of roughly 6000 image classifications per second on a challenging real texture dataset.
Citations: 10
Multiple-aperture imaging spectrometer: computer simulation and experimental validation
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.32
R. Kendrick, Eric H. Smith, D. Christie, D. Bennett, D. Theil, E. Barrett
The Lockheed Martin Advanced Technology Center (LM/ATC) is actively investigating alternate applications of coherently phased sparse-aperture optical imaging arrays. Controlling the relative phasing of the apertures enables these arrays to function as imaging interferometers, providing high spectral resolution as well as high spatial resolution imagery. In this paper we: a) summarize the basic theory of multiple-aperture imaging interferometers; b) illustrate the theory with Fourier transform imaging spectrometer (FTIS) simulations, using the Rochester Institute of Technology hyper-spectral scene simulator (DIRSIG) as our source of simulated input data; and c) validate the theory with experimental results derived with an LM/ATC optical FTIS testbed.
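A Fourier transform imaging spectrometer recovers a spectrum from the interferogram recorded as the optical path difference is scanned; a one-pixel simulation makes the principle concrete. The line positions and amplitudes below are made up for illustration.

```python
import numpy as np

# Simulated interferogram for one pixel: a DC term plus two spectral
# lines at (hypothetical) wavenumbers k1 and k2 cycles per scan.
n = 512
x = np.arange(n)                        # optical path difference samples
k1, k2 = 40, 90
interferogram = (1.0
                 + 0.8 * np.cos(2 * np.pi * k1 * x / n)
                 + 0.5 * np.cos(2 * np.pi * k2 * x / n))

# The spectrum is the Fourier transform of the interferogram; each
# cosine component reappears as a peak at its wavenumber bin.
spectrum = np.abs(np.fft.rfft(interferogram)) / n
peaks = np.argsort(spectrum[1:])[-2:] + 1    # two strongest bins, ignoring DC
```

Doing this for every pixel of the array turns the interferometer into an imaging spectrometer: high spatial resolution from the aperture, high spectral resolution from the scan length.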
Citations: 7
A multi-view approach on modular PCA for illumination and pose invariant face recognition
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.4
P. Sankaran, V. Asari
A modified approach to modular PCA for face recognition is presented in this paper. The proposed changes aim to improve the recognition rates of modular PCA for face images with large variations in lighting and facial expression. The eyes form one of the most invariant regions of the face, so a sub-image from this region is considered, and its weight vectors are appended to the existing modular PCA weight vector. The accuracies of the modified method, the original method, and the standard PCA method are evaluated under varying pose, illumination, and expression conditions using standard face databases.
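The pipeline the abstract describes (per-block PCA, then appending eye-region weights) can be sketched as follows; the block layout, the eye-region location, and the basis size are assumptions for illustration.

```python
import numpy as np

def pca_basis(X, k):
    """Mean and top-k principal axes of the rows of X."""
    mu = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, vt[:k]

def modular_weights(img, blocks, bases):
    """Project each sub-image onto its block's eigenbasis and
    concatenate the weight vectors (modular PCA)."""
    ws = []
    for sl, (mu, v) in zip(blocks, bases):
        patch = img[sl].ravel()
        ws.append(v @ (patch - mu))
    return np.concatenate(ws)

rng = np.random.default_rng(0)
faces = rng.random((20, 8, 8))                    # toy 8x8 "face" images
quads = [(slice(0, 4), slice(0, 4)), (slice(0, 4), slice(4, 8)),
         (slice(4, 8), slice(0, 4)), (slice(4, 8), slice(4, 8))]
k = 3
bases = [pca_basis(np.stack([f[s].ravel() for f in faces]), k) for s in quads]
w = modular_weights(faces[0], quads, bases)       # 4 blocks * k weights

# The paper's modification: append weights from an eye-region sub-image
# (the band chosen here is hypothetical).
eye = (slice(1, 3), slice(1, 7))
mu_e, v_e = pca_basis(np.stack([f[eye].ravel() for f in faces]), k)
w_aug = np.concatenate([w, v_e @ (faces[0][eye].ravel() - mu_e)])
```

Recognition then compares `w_aug` vectors between probe and gallery images; the extra eye-region weights bias the match toward the most illumination- and expression-invariant part of the face.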
Citations: 19
Monitoring and reporting of fingerprint image quality and match accuracy for a large user application
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.30
Teddy Ko, R. Krishnan
The main objective of this paper is to present the methodology used for measuring and monitoring the quality of the fingerprint database and the fingerprint match performance of a large-user fingerprint identification system. The Department of Homeland Security's (DHS) biometric identification system is used as an example for this study. In addition, the paper presents lessons learned during system performance testing and independent validation and verification analysis of large-scale systems such as DHS's biometric system, and recommends improvements to the current test methodology.
Citations: 23
A multiresolution time domain approach to RF image formation
Pub Date : 2004-10-13 DOI: 10.1109/AIPR.2004.5
R. Bonneau
Conventional image formation approaches rely on frequency-domain Fourier methods to create images of objects. Most rely on integrating spatial resolution in the Fourier domain and, because of the uniform spatial sampling the Fourier transform requires, do not accurately factor the spatial aperture function into the image. We propose a multiresolution approach based on a Green's function inverse scattering method that allows us to solve for the object function directly in the time domain, thereby allowing a more accurate rendering of the object in question.
Citations: 3