
Latest publications from Proceedings Ninth IEEE International Conference on Computer Vision

Minimum risk distance measure for object recognition
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238349
S. Mahamud, M. Hebert
The optimal distance measure for a given discrimination task under the nearest neighbor framework has been shown to be the likelihood that a pair of measurements have different class labels [S. Mahamud et al., 2002]. For implementation and efficiency considerations, the optimal distance measure was approximated by combining more elementary distance measures defined on simple feature spaces. We address two important issues that arise in practice for such an approach: (a) What form should the elementary distance measure in each feature space take? We motivate the need to use the optimal distance measure in simple feature spaces as the elementary distance measures; such distance measures have the desirable property that they are invariant to distance-respecting transformations. (b) How do we combine the elementary distance measures? We present the precise statistical assumptions under which a linear logistic model holds exactly. We benchmark our model against three other methods on a challenging face discrimination task and show that our approach is competitive with the state of the art.
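The combination step in (b) can be sketched as a linear logistic model over the elementary distances: the combined score approximates the probability that the pair of measurements carries different class labels. A minimal illustration, where the weights and bias are hand-set hypotheticals rather than values learned as in the paper:

```python
import numpy as np

def combined_distance(elementary_dists, weights, bias):
    """Linear logistic combination of elementary distance measures.

    Returns an estimate of the probability that the two measurements
    behind `elementary_dists` carry *different* class labels, which is
    the minimum-risk distance under the nearest neighbor framework.
    """
    z = float(np.dot(weights, elementary_dists) + bias)
    return 1.0 / (1.0 + np.exp(-z))   # logistic (sigmoid) link

# Toy usage with two feature spaces; in practice the weights and bias
# would be fit by logistic regression on labeled pairs.
d = np.array([0.2, 1.5])              # elementary distances for one pair
w = np.array([1.0, 2.0])
p_diff = combined_distance(d, w, bias=-2.0)
```

Under the nearest-neighbor framework, a query would then be assigned the label of the stored measurement with the lowest such probability.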
Citations: 23
Multiclass spectral clustering
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238361
Stella X. Yu, Jianbo Shi
We propose a principled account of multiclass spectral clustering. Given a discrete clustering formulation, we first solve a relaxed continuous optimization problem by eigen-decomposition. We clarify the role of eigenvectors as a generator of all optimal solutions through orthonormal transforms. We then solve an optimal discretization problem, which seeks a discrete solution closest to the continuous optima. The discretization is efficiently computed in an iterative fashion using singular value decomposition and nonmaximum suppression. The resulting discrete solutions are nearly global-optimal. Our method is robust to random initialization and converges faster than other clustering methods. Experiments on real image segmentation are reported.
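The pipeline above (eigen-decomposition of the normalized affinity, then iterative discretization by singular value decomposition and per-row non-maximum suppression) can be sketched as follows. This is a simplified reading of the method; the initialization of the orthonormal transform and the fixed iteration count are assumptions of the sketch:

```python
import numpy as np

def multiclass_spectral_clustering(W, k, n_iter=20):
    """Cluster n points into k groups from a symmetric affinity matrix W."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Continuous relaxation: top-k eigenvectors of the normalized affinity.
    _, vecs = np.linalg.eigh(D_inv_sqrt @ W @ D_inv_sqrt)
    X = D_inv_sqrt @ vecs[:, -k:]
    X /= np.linalg.norm(X, axis=1, keepdims=True)   # length-normalize rows
    # Initialize the orthonormal transform from k well-separated rows of X.
    R = np.zeros((k, k))
    R[:, 0] = X[0]
    c = np.zeros(len(X))
    for j in range(1, k):
        c += np.abs(X @ R[:, j - 1])
        R[:, j] = X[np.argmin(c)]
    for _ in range(n_iter):
        # Discretize: non-maximum suppression keeps one entry per row.
        Y = np.zeros_like(X)
        Y[np.arange(len(X)), np.argmax(X @ R, axis=1)] = 1.0
        # Closest orthonormal transform to the discrete solution
        # (Procrustes step via SVD).
        U, _, Vt = np.linalg.svd(X.T @ Y)
        R = U @ Vt
    return np.argmax(X @ R, axis=1)
```

On a two-block affinity matrix this recovers the two groups regardless of eigenvector sign flips, since only the grouping, not the label identity, is determined.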
Citations: 1056
Reflectance-based classification of color edges
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238438
T. Gevers
We aim at using color information to classify the physical nature of edges in video. To achieve physics-based edge classification, we first propose a novel approach to color edge detection by automatic noise-adaptive thresholding derived from sensor noise analysis. Then, we present a taxonomy of color edge types. As a result, a parameter-free edge classifier is obtained by labeling color transitions into one of the following types: (1) shadow-geometry edges, (2) highlight edges, (3) material edges. The proposed method is empirically verified on images showing complex real-world scenes.
Citations: 10
What does motion reveal about transparency?
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238462
M. Ben-Ezra, S. Nayar
The perception of transparent objects from images is known to be a very hard problem in vision. Given a single image, it is difficult even to detect the presence of transparent objects in the scene. In this paper, we explore what can be said about transparent objects by a moving observer. We show how features that are imaged through a transparent object behave differently from those that are rigidly attached to the scene. We present a novel model-based approach to recover the shapes and the poses of transparent objects from known motion. The objects can be complex in that they may be composed of multiple layers with different refractive indices. We have conducted numerous simulations to verify the practical feasibility of our algorithm. We have applied it to real scenes that include transparent objects and recovered the shapes of the objects with high accuracy.
Citations: 110
Controlling model complexity in flow estimation
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238445
Zoran Duric, Fayin Li, H. Wechsler, V. Cherkassky
This paper describes a novel application of statistical learning theory (SLT) to control model complexity in flow estimation. SLT provides analytical generalization bounds suitable for practical model selection from small and noisy data sets of image measurements (normal flow). The method addresses the aperture problem by using the penalized risk (ridge regression). We demonstrate an application of this method on both synthetic and real image sequences and use it for motion interpolation and extrapolation. Our experimental results show that our approach compares favorably against alternative model selection methods such as Akaike's final prediction error, Schwarz's criterion, generalized cross-validation, and Shibata's model selector.
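The penalized-risk idea can be illustrated with a ridge-regression estimate of a constant flow vector from normal-flow constraints. This toy setup is an assumption for illustration, not the paper's full SLT-based model selection:

```python
import numpy as np

def ridge_flow(normals, magnitudes, lam=0.1):
    """Ridge (penalized-risk) estimate of a constant flow vector u from
    normal-flow constraints n_i . u = m_i.

    The penalty lam * ||u||^2 keeps the normal equations well-conditioned
    even when all edge normals point the same way (the aperture problem),
    at the cost of shrinking the estimate toward zero.
    """
    A = np.asarray(normals, dtype=float)      # (N, 2) edge normals
    b = np.asarray(magnitudes, dtype=float)   # (N,) normal-flow magnitudes
    return np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)
```

With normals in two directions and a tiny penalty the true flow is recovered; with all normals parallel, plain least squares is singular but the ridge estimate stays finite, setting the unconstrained component to zero.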
Citations: 1
Selection of scale-invariant parts for object class recognition
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238407
Gyuri Dorkó, C. Schmid
We introduce a novel method for constructing and selecting scale-invariant object parts. Scale-invariant local descriptors are first grouped into basic parts. A classifier is then learned for each of these parts, and feature selection is used to determine the most discriminative ones. This approach allows robust part detection, and it is invariant under scale changes; that is, neither the training images nor the test images have to be normalized. The proposed method is evaluated in car detection tasks with significant variations in viewing conditions, and promising results are demonstrated. Different local regions, classifiers and feature selection methods are quantitatively compared. Our evaluation shows that local invariant descriptors are an appropriate representation for object classes such as cars, and it underlines the importance of feature selection.
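The feature-selection step can be sketched schematically: score each candidate part classifier on held-out positive and negative detections and keep the best separators. The mean-response gap used below is a deliberately simple, hypothetical criterion; the paper quantitatively compares several selection methods:

```python
import numpy as np

def select_discriminative_parts(scores_pos, scores_neg, n_keep):
    """Keep the n_keep candidate part classifiers whose responses best
    separate object (positive) from background (negative) samples.

    Rows are validation samples, columns are candidate parts; the
    separation score here is simply the gap between mean responses.
    """
    separation = scores_pos.mean(axis=0) - scores_neg.mean(axis=0)
    return np.argsort(separation)[::-1][:n_keep]
```

A part that fires strongly on object windows and weakly on background windows ranks first; parts that respond equally to both are discarded.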
Citations: 352
Tracking objects using density matching and shape priors
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238466
Zhang Tao, D. Freedman
We present a novel method for tracking objects by combining density matching with shape priors. Density matching is a tracking method which operates by maximizing the Bhattacharyya similarity measure between the photometric distribution from an estimated image region and a model photometric distribution. Such trackers can be expressed as PDE-based curve evolutions, which can be implemented using level sets. Shape priors can be combined with this level-set implementation of density matching by representing the shape priors as a series of level sets; a variational approach allows for a natural, parametrization-independent shape term to be derived. Experimental results on real image sequences are shown.
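The similarity the tracker maximizes is the Bhattacharyya measure between the photometric distribution of the estimated region and the model distribution; for discrete histograms it reduces to a few lines:

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two discrete distributions:
    1.0 for identical distributions, smaller as they diverge."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()   # normalize raw histogram counts
    q = q / q.sum()
    return float(np.sqrt(p * q).sum())

# Model histogram vs. a candidate region's histogram (3 color bins).
model = np.array([10.0, 30.0, 60.0])
candidate = np.array([12.0, 28.0, 60.0])
score = bhattacharyya(model, candidate)
```

The curve evolution in the paper moves the region boundary so as to increase this score while the shape-prior term keeps the evolving level set close to the learned shapes.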
Citations: 105
A Caratheodory-Fejer approach to robust multiframe tracking
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238465
O. Camps, Hwasup Lim, M. C. Mazzaro, M. Sznaier
A requirement common to most dynamic vision applications is the ability to track objects in a sequence of frames. This problem has been extensively studied in the past few years, leading to several techniques, such as trackers based on the unscented particle filter (UPF), that exploit a combination of the (assumed) target dynamics, empirically learned noise distributions and past position observations. While successful in many scenarios, these trackers remain fragile to occlusion and to model uncertainty in the target dynamics. As we show in this paper, these difficulties can be addressed by modeling the dynamics of the target as an unknown operator that satisfies certain interpolation conditions. Results from interpolation theory can then be used to find this operator by solving a convex optimization problem. As illustrated with several examples, combining this operator with Kalman and UPF techniques leads to both robustness improvement and computational complexity reduction.
Citations: 26
Recognising panoramas
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238630
Matthew A. Brown, D. Lowe
The problem considered in this paper is the fully automatic construction of panoramas. Fundamentally, this problem requires recognition, as we need to know which parts of the panorama join up. Previous approaches have used human input or restrictions on the image sequence for the matching step. In this work we use object recognition techniques based on invariant local features to select matching images, and a probabilistic model for verification. Because of this our method is insensitive to the ordering, orientation, scale and illumination of the images. It is also insensitive to 'noise' images which are not part of the panorama at all, that is, it recognises panoramas. This suggests a useful application for photographers: the system takes as input the images on an entire flash card or film, recognises images that form part of a panorama, and stitches them with no user input whatsoever.
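Once pairwise matches have passed the probabilistic verification, recognising which images belong to the same panorama amounts to grouping: connected components over the verified image pairs, with 'noise' images left in singleton components. A sketch (the union-find representation is an assumed implementation detail, not from the paper):

```python
def group_panoramas(n_images, verified_pairs):
    """Group images into panoramas via connected components.

    `verified_pairs` holds (i, j) index pairs whose feature matches
    passed verification; images sharing a component form one panorama,
    and unmatched noise images end up alone.
    """
    parent = list(range(n_images))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i, j in verified_pairs:
        parent[find(i)] = find(j)

    groups = {}
    for i in range(n_images):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

With five input images and verified matches (0,1), (1,2), (3,4), the grouping yields one three-image panorama, one two-image panorama, and no stray assignments.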
Citations: 1032
Obstacle detection using projective invariant and vanishing lines
Pub Date : 2003-10-13 DOI: 10.1109/ICCV.2003.1238363
R. Okada, Y. Taniguchi, K. Furukawa, K. Onoguchi
We present a novel method for detecting vehicles as obstacles in various road scenes using a single onboard camera. Vehicles are detected by testing whether the motion of a set of three horizontal line segments, which are always on the vehicles, satisfies the motion constraint of the ground plane or that of the surface plane of the vehicles. The motion constraint of each plane is derived from the projective invariant combined with the vanishing line of the plane, which is prior knowledge of road scenes. The proposed method is implemented on a newly developed onboard LSI. Experimental results for real road scenes under various conditions show the effectiveness of the proposed method.
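The paper's invariant is built from line segments and the vanishing line; the classical cross ratio of four collinear points illustrates the underlying principle, since its value is unchanged by any projective transformation of the line:

```python
def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given by scalar coordinates
    along their common line; preserved by projective maps x -> (px+q)/(rx+s)."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))
```

Applying an arbitrary projective map of the line to all four points leaves the cross ratio unchanged, which is what lets constraints derived from it survive the camera's perspective projection.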
Citations: 47