
Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149): Latest Publications

Computing rectifying homographies for stereo vision
Charles T. Loop, Zhengyou Zhang
Image rectification is the process of applying a pair of 2D projective transforms, or homographies, to a pair of images whose epipolar geometry is known so that epipolar lines in the original images map to horizontally aligned lines in the transformed images. We propose a novel technique for image rectification based on geometrically well defined criteria such that image distortion due to rectification is minimized. This is achieved by decomposing each homography into a specialized projective transform, a similarity transform, followed by a shearing transform. The effect of image distortion at each stage is carefully considered.
DOI: 10.1109/CVPR.1999.786928 | Pages: 125-131 Vol. 1 | Published: 1999-06-23
Citations: 444
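The three-stage decomposition in the abstract can be illustrated with a small numpy sketch. Only the first split is shown here, peeling the projective component off a homography so that an affine residual remains; the function name and this two-way split are illustrative, not the authors' exact construction (the paper further splits the affine residual into a similarity and a shearing transform).

```python
import numpy as np

def split_projective(H):
    """Peel the projective component off a homography (illustrative sketch).

    Returns (Hp, Ha) with H ~ Ha @ Hp, where Hp carries only the third
    row of H (the 'specialized projective transform') and the residual
    Ha is affine, i.e. its last row is [0, 0, 1].
    """
    H = H / H[2, 2]               # normalize so H[2, 2] == 1
    Hp = np.eye(3)
    Hp[2, :] = H[2, :]            # projective part: last row only
    Ha = H @ np.linalg.inv(Hp)    # affine residual, last row [0, 0, 1]
    return Hp, Ha
```

Because Hp is the identity except for its last row, it acts on the line at infinity exactly as H does, which is why minimizing distortion in this factor first keeps the rectified images close to affine.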
Extracting nonrigid motion and 3D structure of hurricanes from satellite image sequences without correspondences
Lin Zhou, C. Kambhamettu, Dmitry Goldgof
Image sequences capturing Hurricane Luis through meteorological satellites (GOES-8 and GOES-9) are used to estimate hurricane-top heights (structure) and hurricane winds (motion). This problem is difficult not only due to the absence of correspondence but also due to the lack of depth cues in the 2D hurricane images (scaled orthographic projection). In this paper, we present a structure and motion analysis system, called SMAS. In this system, the hurricane images are first segmented into small square areas. We assume that each small area is undergoing similar nonrigid motion. A suitable nonrigid motion model for cloud motion is first defined. Then, non-linear least-square method is used to fit the nonrigid motion model for each area in order to estimate the structure, motion model, and 3D nonrigid motion correspondences. Finally, the recovered hurricane-top heights and winds are presented along with an error analysis. Both structure and 3D motion correspondences are estimated to subpixel accuracy. Our results are very encouraging, and have many potential applications in earth and space sciences, especially in cloud models for weather prediction.
DOI: 10.1109/CVPR.1999.784643 | Pages: 280-285 Vol. 2 | Published: 1999-06-23
Citations: 31
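The per-area model fitting can be illustrated with a toy non-linear least-squares (Gauss-Newton) fit. The rigid 2-D motion model, function name, and synthetic data below are illustrative stand-ins; the paper's model is a nonrigid cloud-motion model that also recovers structure and 3D correspondences.

```python
import numpy as np

def fit_rigid_motion(src, dst, iters=20):
    """Gauss-Newton fit of a toy motion model dst = R(theta) @ src + t."""
    theta, tx, ty = 0.0, 0.0, 0.0
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        pred = src @ R.T + np.array([tx, ty])
        r = (pred - dst).ravel()             # residuals [r0x, r0y, r1x, ...]
        dR = np.array([[-s, -c], [c, -s]])   # dR / dtheta
        J = np.zeros((r.size, 3))
        J[:, 0] = (src @ dR.T).ravel()       # d residual / d theta
        J[0::2, 1] = 1.0                     # d residual_x / d tx
        J[1::2, 2] = 1.0                     # d residual_y / d ty
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        theta, tx, ty = theta + step[0], tx + step[1], ty + step[2]
    return theta, tx, ty
```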
Generic object detection using model based segmentation
Zhiqian Wang, J. Ben-Arie
This paper presents a novel approach for detection and segmentation of generic shapes in cluttered images. The underlying assumption is that generic objects that are man made, frequently have surfaces which closely resemble standard model shapes such as rectangles, semi-circles etc. Due to the perspective transformations of optical imaging systems, a model shape may appear differently in the image with various orientations and aspect ratios. The set of possible appearances can be represented compactly by a few vectorial eigenbases that are derived from a small set of model shapes which are affine transformed in a wide parameter range. Instead of regular boundary of standard models, we apply a vectorial boundary which improves robustness to noise, background clutter and partial occlusion. The detection of generic shapes is realized by detecting local peaks of a similarity measure between the image edge map and an eigenspace combined set of the appearances. At each local maxima, a fast search approach based on a novel representation by an angle space is employed to determine the best matching between models and the underlying subimage. We find that angular representation in multidimensional search corresponds better to Euclidean distance than conventional projection and yields improved classification of noisy shapes. Experiments are performed in various interfering distortions, and robust detection and segmentation are achieved.
DOI: 10.1109/CVPR.1999.784716 | Pages: 428-433 Vol. 2 | Published: 1999-06-23
Citations: 9
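The angular matching idea can be sketched in a few lines of numpy: score a query against the eigenspace of model appearances by the cosine of the angle between the query and its projection, rather than by raw projection length. The function name and the choice k=2 are assumptions for illustration only.

```python
import numpy as np

def angular_score(models, query, k=2):
    """Cosine-of-angle score of a query shape against an eigenbasis.

    Sketch only: rows of 'models' are vectorized model-shape appearances
    (e.g. affine-transformed edge maps); angular similarity tracks
    Euclidean distance better than raw projection for noisy inputs.
    """
    _, _, Vt = np.linalg.svd(models, full_matrices=False)
    basis = Vt[:k]                    # top-k eigenvectors of the model set
    proj = query @ basis.T @ basis    # projection onto the eigenspace
    return proj @ query / (np.linalg.norm(proj) * np.linalg.norm(query))
```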
Explanation-based facial motion tracking using a piecewise Bezier volume deformation model
Hai Tao, Thomas S. Huang
Capturing real motions from video sequences is a powerful method for automatic building of facial articulation models. In this paper, we propose an explanation-based facial motion tracking algorithm based on a piecewise Bezier volume deformation model (PBVD). The PBVD is a suitable model both for the synthesis and the analysis of facial images. It is linear and independent of the facial mesh structure. With this model, basic facial movements, or action units, are interactively defined. By changing the magnitudes of these action units, animated facial images are generated. The magnitudes of these action units can also be computed from real video sequences using a model-based tracking algorithm. However, in order to customize the articulation model for a particular face, the predefined PBVD action units need to be adaptively modified. In this paper, we first briefly introduce the PBVD model and its application in facial animation. Then a multiresolution PBVD-based motion tracking algorithm is presented. Finally, we describe an explanation-based tracking algorithm that takes the predefined action units as the initial articulation model and adaptively improves them during the tracking process to obtain a more realistic articulation model. Experimental results on PBVD-based animation, model-based tracking, and explanation-based tracking are shown in this paper.
DOI: 10.1109/CVPR.1999.787002 | Pages: 611-617 Vol. 1 | Published: 1999-06-23
Citations: 117
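A single Bezier volume, the building block of the PBVD model, evaluates a point as a Bernstein-weighted sum of control points; moving control points then deforms any mesh vertices embedded in the volume. The evaluation below is a standard textbook form, not the paper's piecewise implementation.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_i^n(t)."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def bezier_volume(ctrl, u, v, w):
    """Evaluate a Bezier volume at (u, v, w); ctrl has shape (n+1, m+1, l+1, 3)."""
    n, m, l = np.array(ctrl.shape[:3]) - 1
    p = np.zeros(3)
    for i in range(n + 1):
        for j in range(m + 1):
            for k in range(l + 1):
                p += (bernstein(n, i, u) * bernstein(m, j, v)
                      * bernstein(l, k, w)) * ctrl[i, j, k]
    return p
```

The key property used by the tracker is linearity: displacing the control lattice displaces embedded vertices by the same Bernstein-weighted combination, so facial motion is linear in the control-point displacements.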
Material classification for 3D objects in aerial hyperspectral images
D. Slater, G. Healey
Automated material classification from airborne imagery is an important capability for many applications including target recognition and geospatial database construction. Hyperspectral imagery provides a rich source of information for this purpose but utilization is complicated by the variability in a material's observed spectral signature due to the ambient conditions and the scene geometry. In this paper, we present a method that uses a single spectral radiance function measured from a material under unknown conditions to synthesize a comprehensive set of radiance spectra that corresponds to that material over a wide range of conditions. This set of radiance spectra can be used to build a hyperspectral subspace representation that can be used for material identification over a wide range of circumstances. We demonstrate the use of these algorithms for model synthesis and material mapping using HYDICE imagery acquired at Fort Hood, Texas. The method correctly maps several classes of roofing materials, roads, and vegetation over significant spectral changes due to variation in surface orientation. We show that the approach outperforms methods based on direct spectral comparison.
DOI: 10.1109/CVPR.1999.784641 | Pages: 268-273 Vol. 2 | Published: 1999-06-23
Citations: 27
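The subspace representation described above can be sketched in a few lines of numpy: build a low-dimensional basis from the synthesized radiance set with an SVD, then classify a pixel by its residual distance to each material's subspace. The function name, the choice k=3, and the residual rule are assumptions for illustration.

```python
import numpy as np

def subspace_residual(radiance_set, query, k=3):
    """Distance from a query spectrum to a material's radiance subspace.

    Rows of 'radiance_set' are synthesized radiance spectra for one
    material under varied conditions; a pixel is assigned to the
    material whose k-dim subspace leaves the smallest residual.
    """
    _, _, Vt = np.linalg.svd(radiance_set, full_matrices=False)
    basis = Vt[:k]                                   # material subspace
    return np.linalg.norm(query - query @ basis.T @ basis)
```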
Explaining optical flow events with parameterized spatio-temporal models
Michael J. Black
A spatio-temporal representation for complex optical flow events is developed that generalizes traditional parameterized motion models (e.g. affine). These generative spatio-temporal models may be non-linear or stochastic and are event-specific in that they characterize a particular type of object motion (e.g. sitting or walking). Within a Bayesian framework we seek the appropriate model, phase, rate, spatial position, and scale to account for the image variation. The posterior distribution over this parameter space conditioned on image measurements is typically non-Gaussian. The distribution is represented using factored sampling and is predicted and updated over time using the condensation algorithm. The resulting framework automatically detects, localizes, and recognizes motion events.
DOI: 10.1109/CVPR.1999.786959 | Pages: 326-332 Vol. 1 | Published: 1999-06-23
Citations: 68
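A minimal 1-D sketch of the factored-sampling predict/update cycle (the Condensation algorithm). The scalar state, trivial dynamics, and Gaussian likelihood here are toy stand-ins for the paper's (model, phase, rate, position, scale) parameter space.

```python
import numpy as np

rng = np.random.default_rng(1)

def condensation_step(particles, weights, observe, noise=0.1):
    """One predict/update cycle of factored sampling (toy 1-D version)."""
    # resample proportionally to weight (factored sampling)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    # predict: propagate through (here trivial) dynamics plus noise
    particles = particles[idx] + rng.normal(0.0, noise, len(particles))
    # update: reweight by the observation likelihood
    weights = observe(particles)
    return particles, weights / weights.sum()
```

Because the representation is a weighted sample set, it carries the non-Gaussian posterior mentioned in the abstract without any parametric assumption.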
Implicit representation and scene reconstruction from probability density functions
S. Seitz, P. Anandan
A technique is presented for representing linear features as probability density functions in two or three dimensions. Three chief advantages of this approach are (1) a unified representation and algebra for manipulating points, lines, and planes, (2) seamless incorporation of uncertainty information, and (3) a very simple recursive solution for maximum likelihood shape estimation. Applications to uncalibrated affine scene reconstruction are presented, with results on images of an outdoor environment.
DOI: 10.1109/CVPR.1999.784604 | Pages: 28-34 Vol. 2 | Published: 1999-06-23
Citations: 16
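The "very simple recursive solution" can be illustrated by information-form fusion of two Gaussian densities; this sketch assumes only the standard Gaussian product rule and is not the paper's full point/line/plane algebra.

```python
import numpy as np

def fuse(mu1, cov1, mu2, cov2):
    """Fuse two Gaussian feature densities (information form).

    Each new measurement's density is combined with the running
    estimate by adding inverse covariances (information matrices),
    which is what makes the maximum-likelihood update recursive.
    """
    info1, info2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    cov = np.linalg.inv(info1 + info2)
    mu = cov @ (info1 @ mu1 + info2 @ mu2)
    return mu, cov
```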
Efficient techniques for wide-angle stereo vision using surface projection models
Philip W. Smith, Keith B. Johnson, M. Abidi
Wide-Angle lenses are not often used for 3D reconstruction tasks, in spite of the potential advantages offered by their increased field-of-view, because (1) existing algorithms for high-distortion lens compensation perform poorly at image extremities and (2) procedures for the reconstruction of recti-linear images place a large burden on system resources. In this paper, a projection model based on quadric surfaces is presented which accurately characterizes the effect of wide-angle lenses across the entire image and allows for the use of novel feature matching strategies that do not require nonlinear distortion compensation.
DOI: 10.1109/CVPR.1999.786926 | Pages: 113-118 Vol. 1 | Published: 1999-06-23
Citations: 11
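The idea of modelling a wide-angle lens on a curved surface rather than a rectilinear image plane can be sketched by back-projecting pixels onto a unit sphere. The equidistant fisheye model used below is a common textbook stand-in, not the paper's quadric-surface formulation.

```python
import numpy as np

def pixel_to_sphere(u, v, f, cx, cy):
    """Back-project a wide-angle pixel onto a unit sphere (toy model).

    Assumes the equidistant model r = f * theta, where theta is the
    angle from the optical axis; matching on the sphere avoids the
    extreme stretching of a rectilinear undistortion at the image edges.
    """
    x, y = u - cx, v - cy
    r = np.hypot(x, y)
    theta = r / f                         # angle from the optical axis
    phi = np.arctan2(y, x)                # azimuth around the axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```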
Bayesian multi-camera surveillance
Vera M. Kettnaker, R. Zabih
The task of multicamera surveillance is to reconstruct the paths taken by all moving objects that are temporally visible from multiple non-overlapping cameras. We present a Bayesian formalization of this task, where the optimal solution is the set of object paths with the highest posterior probability given the observed data. We show how to efficiently approximate the maximum a posteriori solution by linear programming and present initial experimental results.
DOI: 10.1109/CVPR.1999.784638 | Pages: 253-259 Vol. 2 | Published: 1999-06-23
Citations: 338
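A toy version of the path-linking step: treat linking tracks across two cameras as a minimum-cost assignment over negative log posteriors. The cost matrix below is hypothetical, and scipy's combinatorial solver stands in for the paper's linear program (the assignment LP has integral optima, so both give the same answer on this form).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical costs: -log posterior that the object leaving camera A
# as track i reappears in camera B as track j (smaller = more likely).
cost = np.array([[0.2, 2.0, 1.5],
                 [1.8, 0.3, 2.2],
                 [1.6, 2.1, 0.4]])

rows, cols = linear_sum_assignment(cost)   # max a posteriori linking
links = list(zip(rows.tolist(), cols.tolist()))
```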
Detection and removal of line scratches in motion picture films
L. Joyeux, Olivier Buisson, B. Besserer, S. Boukir
Line scratches are common degradations in motion picture films. This paper presents an efficient method for line scratches detection strengthened by a Kalman filter. A new interpolation technique, dealing with both low and high frequencies (i.e. film grain) around the line artifacts, is investigated to achieve a nearby invisible reconstruction of damaged areas. Our line scratches detection and removal techniques have been validated on several film sequences.
DOI: 10.1109/CVPR.1999.786991 | Pages: 548-553 Vol. 1 | Published: 1999-06-23
Citations: 121
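The Kalman strengthening can be sketched with a scalar constant-position filter tracking a scratch's horizontal position across frames; candidate detections that stray from the filtered track are rejected. The noise parameters below are illustrative, not the paper's values.

```python
import numpy as np

def kalman_1d(zs, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Scalar constant-position Kalman filter (illustrative parameters).

    zs: per-frame measurements (e.g. detected scratch x-position);
    q: process noise (how much the scratch may drift per frame);
    r: measurement noise of the per-frame detector.
    """
    x, p, out = x0, p0, []
    for z in zs:
        p += q                  # predict: position persists, drifts a little
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update toward the measurement
        p *= (1 - k)
        out.append(x)
    return out
```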