
Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149): Latest Publications

A structured probabilistic model for recognition
C. Schmid
In this paper we derive a probabilistic model for recognition based on local descriptors and spatial relations between these descriptors. Our model takes into account the variability of local descriptors, their saliency as well as the probability of spatial configurations. It is structured to clearly separate the probability of point-wise correspondences from the spatial coherence of sets of correspondences. For each descriptor of the query image, several correspondences in the image database exist. Each of these point-wise correspondences is weighted by its variability and its saliency. We then search for sets of correspondences which reinforce each other, that is, which are spatially coherent. The recognized model is the one which obtains the highest evidence from these sets. To validate our probabilistic model, it is compared to an existing method for image retrieval. The experimental results are given for a database containing more than 1000 images. They clearly show the significant gain obtained by adding the probabilistic model.
{"title":"A structured probabilistic model for recognition","authors":"C. Schmid","doi":"10.1109/CVPR.1999.784725","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784725","url":null,"abstract":"In this paper we derive a probabilistic model for recognition based on local descriptors and spatial relations between these descriptors. Our model takes into account the variability of local descriptors, their saliency as well as the probability of spatial configurations. It is structured to clearly separate the probability of point-wise correspondences from the spatial coherence of sets of correspondences. For each descriptor of the query image, several correspondences in the image database exist. Each of these point-wise correspondences is weighted by its variability and its saliency. We then search for sets of correspondences which reinforce each other, that is which are spatially coherent. The recognized model is the one which obtains the highest evidence from these sets. To validate our probabilistic model, it is compared to an existing method for image retrieval. The experimental results are given for a database containing more than 1000 images. They clearly show the significant gain obtained by adding the probabilistic model.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"48 1","pages":"485-490 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86066183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 67
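The pipeline the abstract describes (weight point-wise correspondences by descriptor similarity, then search for spatially coherent sets of correspondences that reinforce each other) can be sketched as a toy scorer. The Gaussian weighting, the pairwise-distance coherence test, and all names below are illustrative simplifications, not the paper's actual probabilistic model:

```python
import numpy as np

def recognize(query_desc, query_pts, db, sigma=0.5, tol=0.2):
    """Toy recognizer: score each model by evidence from weighted,
    spatially coherent point-wise correspondences."""
    scores = {}
    for model, (descs, pts) in db.items():
        # point-wise correspondences weighted by descriptor similarity
        matches = []
        for i, d in enumerate(query_desc):
            dists = np.linalg.norm(descs - d, axis=1)
            j = int(np.argmin(dists))
            w = np.exp(-dists[j] ** 2 / (2 * sigma ** 2))
            matches.append((i, j, w))
        # spatial coherence: keep pairs whose inter-point distances agree
        evidence = 0.0
        for a in range(len(matches)):
            for b in range(a + 1, len(matches)):
                ia, ja, wa = matches[a]
                ib, jb, wb = matches[b]
                dq = np.linalg.norm(query_pts[ia] - query_pts[ib])
                dm = np.linalg.norm(pts[ja] - pts[jb])
                if abs(dq - dm) < tol * max(dq, dm, 1e-9):
                    evidence += wa * wb
        scores[model] = evidence
    return max(scores, key=scores.get)
```

A model whose descriptors match well and whose point layout is consistent with the query accumulates the most evidence.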
Pose clustering with density estimation and structural constraints
S. Moss, E. Hancock
This paper describes a statistical framework for object alignment by pose clustering. The idea underlying pose clustering is to transform the alignment process from the image domain to that of the appropriate transformation parameters. It commences by taking k-tuples from the primitive-sets for the model and the data. The size of the k-tuples is such that there are sufficient measurements available to estimate the full set of transformation parameters. By pairing each k-tuple in the model with each k-tuple in the data, a set of transformation parameter estimates or alignment votes is accumulated. The work reported here draws on three ideas. Firstly, we estimate maximum likelihood alignment parameters by using the EM algorithm to fit a mixture model to the set of transformation parameter votes. Secondly, we control the order of the underlying structure model using a minimum description length criterion. Finally, we limit problems of combinatorial background by imposing structural constraints on the k-tuples.
{"title":"Pose clustering with density estimation and structural constraints","authors":"S. Moss, E. Hancock","doi":"10.1109/CVPR.1999.784613","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784613","url":null,"abstract":"This paper describes a statistical framework for object alignment by pose clustering. The idea underlying pose clustering is to transform the alignment process from the image domain to that of the appropriate transformation parameters. It commence by taking k-tuples from the primitive-sets for the model and the data. The size of the k-tuples is such that there are sufficient measurements available to estimate the full-set of transformation parameters. By pairing each k-tuple in the model and each k-tuple in the data, a set of transformation parameter estimates or alignment votes is accumulated. The work reported here draws on three ideas. Firstly, we estimate maximum likelihood alignment parameters by using the the EM algorithm to fit a mixture model to the set of transformation parameter votes. Secondly, we control the order of the underlying structure model using a minimum description length criterion. Finally, we limit problems of combinatorial background by imposing structural constraints on the k-tuples.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"33 1","pages":"85-91 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82740236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
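A minimal, translation-only sketch of the voting idea: every (model point, data point) pairing casts one vote in transformation-parameter space, and the densest region of votes gives the alignment. Here a fixed-bandwidth kernel density estimate stands in for the paper's EM-fitted mixture model, and all names and parameters are illustrative:

```python
import numpy as np

def pose_votes_translation(model_pts, data_pts):
    # each (model point, data point) pairing casts one translation vote
    return (data_pts[None, :, :] - model_pts[:, None, :]).reshape(-1, 2)

def best_translation(votes, bandwidth=0.5):
    # Gaussian kernel density over the vote cloud; the densest vote wins
    d2 = ((votes[:, None, :] - votes[None, :, :]) ** 2).sum(-1)
    density = np.exp(-d2 / (2 * bandwidth ** 2)).sum(1)
    return votes[int(np.argmax(density))]
```

Correct pairings all vote for the true translation, so their votes pile up at one point while incorrect pairings scatter.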
Eigenshapes for 3D object recognition in range data
Richard J. Campbell, P. Flynn
Much of the recent research in object recognition has adopted an appearance-based scheme, wherein objects to be recognized are represented as a collection of prototypes in a multidimensional space spanned by a number of characteristic vectors (eigen-images) obtained from training views. In this paper, we extend the appearance-based recognition scheme to handle range (shape) data. The result of training is a set of 'eigensurfaces' that capture the gross shape of the objects. These techniques are used to form a system that recognizes objects under an arbitrary rotational pose transformation. The system has been tested on a 20 object database including free-form objects and a 54 object database of manufactured parts. Experiments with the system point out advantages and also highlight challenges that must be studied in future research.
{"title":"Eigenshapes for 3D object recognition in range data","authors":"Richard J. Campbell, P. Flynn","doi":"10.1109/CVPR.1999.784728","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784728","url":null,"abstract":"Much of the recent research in object recognition has adopted an appearance-based scheme, wherein objects to be recognized are represented as a collection of prototypes in a multidimensional space spanned by a number of characteristic vectors (eigen-images) obtained from training views. In this paper, we extend the appearance-based recognition scheme to handle range (shape) data. The result of training is a set of 'eigensurfaces' that capture the gross shape of the objects. These techniques are used to form a system that recognizes objects under an arbitrary rotational pose transformation. The system has been tested on a 20 object database including free-form objects and a 54 object database of manufactured parts. Experiments with the system point out advantages and also highlight challenges that must be studied in future research.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"20 1","pages":"505-510 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91491224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 67
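The eigensurface training amounts to PCA on flattened range images. A minimal sketch, using SVD-based PCA and nearest-prototype matching; the function names and toy data layout are assumptions, and the paper's handling of arbitrary rotational pose is omitted:

```python
import numpy as np

def train_eigensurfaces(range_images, k=2):
    # rows = flattened range images; PCA via SVD of the centered data
    X = np.stack([r.ravel() for r in range_images]).astype(float)
    mean = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]  # k 'eigensurfaces'

def project(range_image, mean, basis):
    # coordinates of a range image in the eigensurface subspace
    return basis @ (range_image.ravel() - mean)

def recognize(range_image, mean, basis, prototypes):
    # nearest prototype in the low-dimensional shape space
    c = project(range_image, mean, basis)
    return min(prototypes, key=lambda m: np.linalg.norm(prototypes[m] - c))
```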
Extracting textured vertical facades from controlled close-range imagery
S. Coorg, S. Teller
We are developing a system to extract geodetic, textured CAD models from thousands of initially uncontrolled, close-range ground and aerial images of urban scenes. Here we describe one component of the system, which operates after the imagery has been controlled or geo-referenced. This fully automatic component detects significant vertical facades in the scene, then extrudes them to meet an inferred, triangulated terrain and procedurally generated roof polygons. The algorithm then estimates for each surface a computer graphics texture, or diffuse reflectance map, from the many available observations of that surface. We present the results of the algorithm on a complex dataset: nearly 4,000 high-resolution digital images of a small (200 meter square) office park, acquired from close range under highly varying lighting conditions, amidst significant occlusion due both to multiple inter-occluding structures and dense foliage. While the results are of less fidelity than would be achievable by an interactive system, our algorithm is the first to be demonstrated on such a large, real-world dataset.
{"title":"Extracting textured vertical facades from controlled close-range imagery","authors":"S. Coorg, S. Teller","doi":"10.1109/CVPR.1999.787004","DOIUrl":"https://doi.org/10.1109/CVPR.1999.787004","url":null,"abstract":"We are developing a system to extract geodetic, textured CAD models from thousands of initially uncontrolled, close-range ground and aerial images of urban scenes. Here we describe one component of the system, which operates after the imagery has been controlled or geo-referenced. This fully automatic component detects significant vertical facades in the scene, then extrudes them to meet an inferred, triangulated terrain and procedurally generated roof polygons. The algorithm then estimates for each surface a computer graphics texture, or diffuse reflectance map, from the many available observations of that surface. We present the results of the algorithm on a complex dataset: nearly 4,000 high-resolution digital images of a small (200 meter square) office park, acquired from close range under highly varying lighting conditions, amidst significant occlusion due both to multiple inter-occluding structures, and dense foliage. While the results are of less fidelity than that would be achievable by an interactive system, our algorithm is the first to be demonstrated on such a large, real-world dataset.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"7 1","pages":"625-632 Vol. 
1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88841293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 86
Norm²-based face recognition
D. B. Graham, N. Allinson
Increasingly, the problem of recognising faces under a variety of viewing conditions, including depth rotations, is being considered in the field. The concept of norm-based coding in face recognition is not new but has been little investigated in machine models. Here we describe a norm-based face recognition system which is capable of generalising from a single training view to recognise novel views of target faces. The system is based upon the characteristic nature of faces as they move through a pose-varying eigenspace of facial images, and upon deviations from the norm of a gallery of face images. We illustrate the use of the technique over a large range of pose variation.
{"title":"Norm/sup 2/-based face recognition","authors":"D. B. Graham, N. Allinson","doi":"10.1109/CVPR.1999.786998","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786998","url":null,"abstract":"Increasingly the problems of recognising faces under a variety of viewing conditions, including depth rotations, is being considered in the field. The concept of norm-based coding in face recognition is not new but has been little investigated in machine models. Here we describe a norm-based face recognition system which is capable of generalising from a single training view to recognise novel views of target faces. The system is based upon the characteristic nature faces as they move through a pose-varying eigenspace of facial images and deviations from the norm of a gallery of face images. We illustrate the use of the technique for a large range of pose variation.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"62 1","pages":"586-591 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89144175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Q-warping: direct computation of quadratic reference surfaces
A. Shashua, Y. Wexler
We consider the problem of wrapping around an object, of which two views are available, a reference surface and recovering the resulting parametric flow using direct computations (via spatio-temporal derivatives). The well known examples are affine flow models and 8-parameter flow models, both describing a flow field of a planar reference surface. We extend those classic flow models to deal with a quadric reference surface and work out the explicit parametric form of the flow field. As a result we derive a simple warping algorithm that maps between two views and leaves a residual flow proportional to the 3D deviation of the surface from a virtual quadric surface. The applications include image morphing, model building, image stabilization, and disparate view correspondence.
{"title":"Q-warping: direct computation of quadratic reference surfaces","authors":"A. Shashua, Y. Wexler","doi":"10.1109/CVPR.1999.786960","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786960","url":null,"abstract":"We consider the problem of wrapping around an object, of which two views are available, a reference surface and recovering the resulting parametric flow using direct computations (via spatio-temporal derivatives). The well known examples are affine flow models and B-parameter flow models - both describing a flow field of a planar reference surface. We extend those classic flow models to deal with a quadric reference surface and work out the explicit parametric form of the flow field. As a result we derive a simple warping algorithm that maps between two views and leaves a residual flow proportional to the 30 deviation of the surface from a virtual quadric surface. The applications include image morphing, model building, image stabilization, and disparate view correspondence.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"41 6","pages":"333-338 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91436785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
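A parametric flow field that is quadratic in image coordinates, the kind induced by a quadric reference surface, can be evaluated per pixel from coefficients over the monomial basis [1, x, y, x², xy, y²]. This sketch shows only the evaluation of such a field, not the paper's direct estimation from spatio-temporal derivatives, and the coefficient layout is an illustrative choice:

```python
import numpy as np

def quadratic_flow(a, b, x, y):
    """Evaluate a flow field (u, v) quadratic in image coordinates.
    a, b: 6 coefficients each, over the basis [1, x, y, x^2, x*y, y^2].
    x, y: 1-D arrays of pixel coordinates."""
    basis = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    return np.asarray(a, float) @ basis, np.asarray(b, float) @ basis
```

Warping then amounts to resampling the second view at (x + u, y + v).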
A simple technique for self-calibration
Paulo R. S. Mendonça, R. Cipolla
This paper introduces an extension of Hartley's self-calibration technique based on properties of the essential matrix, allowing for the stable computation of varying focal lengths and principal point. It is well known that the three singular values of an essential matrix must satisfy two conditions: one of them must be zero and the other two must be identical. An essential matrix is obtained from the fundamental matrix by a transformation involving the intrinsic parameters of the pair of cameras associated with the two views. Thus, constraints on the essential matrix can be translated into constraints on the intrinsic parameters of the pair of cameras. This allows for a search in the space of intrinsic parameters of the cameras in order to minimize a cost function related to the constraints. This approach is shown to be simpler than other methods, with comparable accuracy in the results. Another advantage of the technique is that it does not require as input a consistent set of weakly calibrated camera matrices (as defined by Hartley) for the whole image sequence, i.e. a set of cameras consistent with the correspondences and known up to a projective transformation.
{"title":"A simple technique for self-calibration","authors":"Paulo R. S. Mendonça, R. Cipolla","doi":"10.1109/CVPR.1999.786984","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786984","url":null,"abstract":"This paper introduces an extension of Hartley's self-calibration technique based on properties of the essential matrix, allowing for the stable computation of varying focal lengths and principal point. It is well known that the three singular values of an essential must satisfy two conditions: one of them must be zero and the other two must be identical. An essential matrix is obtained from the fundamental matrix by a transformation involving the intrinsic parameters of the pair of cameras associated with the two views. Thus, constraints on the essential matrix can be translated into constraints on the intrinsic parameters of the pair of cameras. This allows for a search in the space of intrinsic parameters of the cameras in order to minimize a cost function related to the constraints. This approach is shown to be simpler than other methods, with comparable accuracy in the results. Another advantage of the technique is that it does not require as input a consistent set of weakly calibrated camera matrices (as defined by Harley) for the whole image sequence, i.e. a set of cameras consistent with the correspondences and known up to a projective transformation.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"27 1","pages":"500-505 Vol. 
1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80726246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 136
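The constraint the method exploits is easy to state in code: map the fundamental matrix to an essential matrix using candidate intrinsics, and measure how far its two largest singular values are from being equal. A sketch of that cost function follows; a self-calibration routine would minimize it over the intrinsic parameters, and the normalized singular-value gap used here is one common choice of cost, not necessarily the paper's exact one:

```python
import numpy as np

def essential_from_fundamental(F, K1, K2):
    # E = K2^T F K1 relates normalized coordinates of the two views
    return K2.T @ F @ K1

def calibration_cost(F, K1, K2):
    # a valid essential matrix has singular values (s, s, 0);
    # penalize the relative gap between the two largest ones
    s = np.linalg.svd(essential_from_fundamental(F, K1, K2), compute_uv=False)
    return (s[0] - s[1]) / (s[0] + s[1])
```

With the true intrinsics the cost is near zero; with a wrong focal length the singular values separate and the cost grows.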
Visual recognition of multi-agent action using binary temporal relations
S. Intille, A. Bobick
A probabilistic framework for representing and visually recognizing complex multi-agent action is presented. Motivated by work in model-based object recognition and designed for the recognition of action from visual evidence, the representation has three components: (1) temporal structure descriptions representing the temporal relationships between agent goals, (2) belief networks for probabilistically representing and recognizing individual agent goals from visual evidence, and (3) belief networks automatically generated from the temporal structure descriptions that support the recognition of the complex action. We describe our current work on recognizing American football plays from noisy trajectory data.
{"title":"Visual recognition of multi-agent action using binary temporal relations","authors":"S. Intille, A. Bobick","doi":"10.1109/CVPR.1999.786917","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786917","url":null,"abstract":"A probabilistic framework for representing and visually recognizing complex multi-agent action is presented. Motivated by work in model-based object recognition and designed for the recognition of action from visual evidence, the representation has three components: (1) temporal structure descriptions representing the temporal relationships between agent goals, (2) belief networks for probabilistically representing and recognizing individual agent goals from visual evidence, and (3) belief networks automatically generated from the temporal structure descriptions that support the recognition of the complex action. We describe our current work on recognizing American football plays from noisy trajectory data.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"55 1","pages":"56-62 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81165426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
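A toy illustration of checking binary temporal relations between recognized agent-goal intervals against a temporal structure description. The relation set, the interval encoding, and all names are simplifications; the paper combines such relations probabilistically with belief networks rather than as hard constraints:

```python
def before(a, b):
    # binary temporal relation: interval a ends before interval b starts
    return a[1] < b[0]

def matches_play(goals, structure):
    """goals: name -> (start, end) interval for each detected agent goal.
    structure: list of (goal1, relation, goal2) constraints."""
    rel = {"before": before, "after": lambda a, b: before(b, a)}
    return all(rel[r](goals[g1], goals[g2]) for g1, r, g2 in structure)
```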
Invariant recognition in hyperspectral images
G. Healey, D. Slater
The spectral radiance measured for a material by an airborne hyperspectral sensor depends strongly on the illumination environment and the atmospheric conditions. This dependence has limited the success of material identification algorithms that rely exclusively on the information contained in hyperspectral image data. In this paper we use a comprehensive physical model to show that the set of observed 0.4-2.5 μm spectral radiance vectors for a material lies in a four-dimensional subspace of the hyperspectral measurement space. The physical model captures the dependence of reflected sunlight, reflected skylight, and path radiance terms on the scene geometry and on the distribution of atmospheric gases and aerosols over a wide range of conditions. Using the subspace model, we develop a local maximum likelihood algorithm for automated material identification that is invariant to illumination, atmospheric conditions, and the scene geometry. We demonstrate the invariant algorithm for the automated identification of material samples in HYDICE imagery acquired under different illumination and atmospheric conditions.
{"title":"Invariant recognition in hyperspectral images","authors":"G. Healey, D. Slater","doi":"10.1109/CVPR.1999.786975","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786975","url":null,"abstract":"The spectral radiance measured for a material by an airborne hyperspectral sensor depends strongly on. The illumination environment and the atmospheric conditions. This dependence has limited the success of material identification algorithms that rely exclusively on the information contained in hyperspectral image data. In this paper we use a comprehensive physical model to show that the set of observed 0.4-2.5 /spl mu/m spectral radiance vectors for a material lies in a lour-dimensional subspace of the hyperspectral measurement space. The physical model captures the dependence of reflected sunlight, reflected skylight, and path radiance terms on the scene geometry and on the distribution of atmospheric gases and aerosols over a wide range of conditions. Using the subspace model, we develop a local maximum likelihood algorithm for automated material identification that is invariant to illumination, atmospheric conditions, and the scene geometry. We demonstrate the invariant algorithm for the automated identification of material samples in HYDICE imagery acquired under different illumination and atmospheric conditions.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"2 1","pages":"438-443 Vol. 
1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84758030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
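The subspace idea can be sketched directly: span each material's observed radiance vectors with a low-dimensional SVD basis, then identify a new spectrum by its residual to each material subspace. The residual test below is a stand-in for the paper's local maximum likelihood algorithm, and the names and toy dimensionality are illustrative:

```python
import numpy as np

def material_subspace(training_spectra, dim=4):
    # orthonormal basis spanning one material's observed radiance vectors
    X = np.asarray(training_spectra, float)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim]

def subspace_residual(spectrum, basis):
    # relative distance from a measured spectrum to the material subspace
    s = np.asarray(spectrum, float)
    proj = basis.T @ (basis @ s)
    return np.linalg.norm(s - proj) / np.linalg.norm(s)

def identify(spectrum, subspaces):
    # pick the material whose subspace best explains the spectrum
    return min(subspaces, key=lambda m: subspace_residual(spectrum, subspaces[m]))
```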
Stereo panorama with a single camera
Shmuel Peleg, M. Ben-Ezra
Full panoramic images, covering 360 degrees, can be created either by using panoramic cameras or by mosaicing together many regular images. Creating panoramic views in stereo, where one panorama is generated for the left eye and another panorama is generated for the right eye, is more problematic. Earlier attempts to mosaic images from a rotating pair of stereo cameras faced severe problems of parallax and of scale changes. A new family of multiple viewpoint image projections, the Circular Projections, is developed. Two panoramic images taken using such projections can serve as a panoramic stereo pair. A system is described that generates a stereo panoramic image using circular projections from images or video taken by a single rotating camera. The system works in real-time on a PC. It should be noted that the stereo images are created without computation of 3D structure, and the depth effects are created only in the viewer's brain.
{"title":"Stereo panorama with a single camera","authors":"Shmuel Peleg, M. Ben-Ezra","doi":"10.1109/CVPR.1999.786969","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786969","url":null,"abstract":"Full panoramic images, covering 360 degrees, can be created either by using panoramic cameras or by mosaicing together many regular images. Creating panoramic views in stereo, where one panorama is generated for the left eye, and another panorama is generated for the right eye is more problematic. Earlier attempts to mosaic images from a rotating pair of stereo cameras faced severe problems of parallax and of scale changes. A new family of multiple viewpoint image projections, the Circular Projections, is developed. Two panoramic images taken using such projections can serve as a panoramic stereo pair. A system is described to generates a stereo panoramic image using circular projections from images or video taken by a single rotating camera. The system works in real-time on a PC. It should be noted that the stereo images are created without computation of 3D structure, and the depth effects are created only in the viewer's brain.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"10 1","pages":"395-401 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84779790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 230
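The strip-mosaicing intuition behind the circular projections can be illustrated with a toy: from each frame of a rotating camera, take one vertical strip to the right of the image center for the left-eye panorama and one to the left of center for the right-eye panorama. The offset and width parameters and all names are illustrative; the paper's circular projections are more principled than this crop-and-concatenate sketch:

```python
import numpy as np

def stereo_panorama(frames, offset=8, width=4):
    """Toy strip mosaic from a rotating camera.
    frames: list of equally sized images (H x W arrays), one per rotation step."""
    c = frames[0].shape[1] // 2
    # left-eye panorama: strips right of center; right-eye: strips left of center
    left = np.hstack([f[:, c + offset : c + offset + width] for f in frames])
    right = np.hstack([f[:, c - offset - width : c - offset] for f in frames])
    return left, right
```

Because the two panoramas are built from strips captured at different viewing directions, they exhibit horizontal disparity without any 3D computation.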