
Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149): Latest Publications

Elastic registration of medical images using radial basis functions with compact support
M. Fornefett, K. Rohr, H. Stiehl
We introduce radial basis functions with compact support for elastic registration of medical images. With these basis functions the influence of a landmark on the registration result is limited to a circle in 2D and, respectively, to a sphere in 3D. Therefore, the registration can be locally constrained, which in particular allows us to deal with rather local changes in medical images due to, e.g., tumor resection. An important property of the used RBFs is that they are positive definite. Thus, the solvability of the resulting system of equations is always guaranteed. We demonstrate our approach for synthetic as well as for 2D and 3D tomographic images.
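A minimal sketch of the idea, not the authors' code: Wendland's C2 function psi(r) = (1-r)^4 (4r+1) is one standard compactly supported, positive definite RBF of the kind the abstract describes; the support radius and the toy landmarks below are arbitrary choices.

```python
# Sketch of landmark-based registration with a compactly supported RBF.
# The warp of a point only changes inside the support radius of a landmark.
import numpy as np

def wendland(r):
    """Wendland C2 function with unit support: zero for r >= 1."""
    r = np.clip(r, 0.0, 1.0)
    return (1.0 - r) ** 4 * (4.0 * r + 1.0)

def fit_csrbf(src, dst, support):
    """Solve for coefficients so the warp maps src landmarks onto dst."""
    d = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    K = wendland(d / support)                # positive definite -> always solvable
    return np.linalg.solve(K, dst - src)     # one column per displacement component

def warp(points, src, coeffs, support):
    """Apply the locally supported displacement field to arbitrary points."""
    d = np.linalg.norm(points[:, None, :] - src[None, :, :], axis=-1)
    return points + wendland(d / support) @ coeffs

# Toy 2D example: one landmark is shifted, points outside the support stay put.
src = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
dst = src + np.array([[1.0, 0.5], [0.0, 0.0], [0.0, 0.0]])
c = fit_csrbf(src, dst, support=3.0)
print(warp(np.array([[0.0, 0.0], [10.0, 10.0]]), src, c, support=3.0))
```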
{"title":"Elastic registration of medical images using radial basis functions with compact support","authors":"M. Fornefett, K. Rohr, H. Stiehl","doi":"10.1109/CVPR.1999.786970","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786970","url":null,"abstract":"We introduce radial basis functions with compact support for elastic registration of medical images. With these basis functions the influence of a landmark on the registration result is limited to a circle in 2D and, respectively, to a sphere in 3D. Therefore, the registration can be locally constrained which especially allows to deal with rather local changes in medical images due to, e.g., tumor resection. An important property of the used RBFs is that they are positive definite. Thus, the solvability of the resulting system of equations is always guaranteed. We demonstrate our approach for synthetic as well as for 2D and 3D tomographic images.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"10 1","pages":"402-407 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75986358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 78
Calibration of image sequences for model visualisation
A. Broadhurst, R. Cipolla
The object of this paper is to find a quick and accurate method for computing the projection matrices of an image sequence, so that the error is distributed evenly along the sequence. It assumes that a set of correspondences between points in the images is known, and that these points represent rigid points in the world. This paper extends the algebraic minimisation approach developed by Hartley so that it can be used for long image sequences. This is achieved by initially computing a trifocal tensor using the three most extreme views. The intermediate views are then computed linearly using the trifocal tensor. An iterative algorithm is presented which perturbs the twelve entries of one camera matrix so that the algebraic error along the whole sequence is minimised.
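A hedged sketch of the final refinement step only: the twelve entries of one 3x4 camera matrix are perturbed by a generic least-squares optimizer to reduce an algebraic residual. Synthetic, known 3D points stand in for the structure the paper obtains from correspondences and the trifocal tensor.

```python
# Perturb the 12 entries of a 3x4 camera matrix to shrink an algebraic residual.
# scipy's least_squares replaces the authors' own iteration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
X = np.hstack([rng.uniform(-1, 1, (20, 3)), np.ones((20, 1))])  # homogeneous 3D points
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.0], [2.0]])])
x = (P_true @ X.T).T
x = x[:, :2] / x[:, 2:]                                          # observed image points

def algebraic_residual(p12):
    P = p12.reshape(3, 4)
    proj = (P @ X.T).T
    # x ~ P X up to scale, so compare after multiplying by the third coordinate
    return (x * proj[:, 2:] - proj[:, :2]).ravel()

P0 = P_true + 0.05 * rng.standard_normal((3, 4))                 # perturbed start
res = least_squares(algebraic_residual, P0.ravel())
print(np.round(res.x.reshape(3, 4) / res.x[0], 3))               # recovered up to scale
```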
{"title":"Calibration of image sequences for model visualisation","authors":"A. Broadhurst, R. Cipolla","doi":"10.1109/CVPR.1999.786924","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786924","url":null,"abstract":"The object of this paper is to find a quick and accurate method for computing the projection matrices of an image sequence, so that the error is distributed evenly along the sequence. It assumes that a set of correspondences between points in the images is known, and that these points represent rigid points in the world. This paper extends the algebraic minimisation approach developed by Hartley so that it can be used for long image sequences. This is achieved by initially computing a trifocal tensor using the three most extreme views. The intermediate views are then computed linearly using the trifocal tensor. An iterative algorithm as presented which perturbs the twelve entries of one camera matrix so that the algebraic error along the whole sequence is minimised.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"28 1","pages":"100-105 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74368278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 6
Detecting and tracking moving objects for video surveillance
I. Cohen, G. Medioni
We address the problem of detection and tracking of moving objects in a video stream obtained from a moving airborne platform. The proposed method relies on a graph representation of moving objects which allows us to derive and maintain a dynamic template of each moving object by enforcing their temporal coherence. This inferred template, along with the graph representation used in our approach, allows us to characterize object trajectories as an optimal path in a graph. The proposed tracker can deal with partial occlusions and stop-and-go motion in very challenging situations. We demonstrate results on a number of different real sequences. We then define an evaluation methodology to quantify our results and show how tracking overcomes detection errors.
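A minimal sketch of the "trajectory as an optimal path in a graph" formulation, under simplified assumptions: detections are nodes, edges link consecutive frames, and the edge cost is just the Euclidean distance between centroids rather than the paper's template-coherence cost.

```python
# Dynamic programming over a layered graph of per-frame detections.
import numpy as np

def best_trajectory(frames):
    """frames: list of (n_i, 2) arrays of detection centroids per frame."""
    costs = [np.zeros(len(frames[0]))]
    back = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        d = np.linalg.norm(curr[:, None, :] - prev[None, :, :], axis=-1)
        total = d + costs[-1][None, :]          # cost of reaching each current node
        back.append(total.argmin(axis=1))
        costs.append(total.min(axis=1))
    # trace the cheapest final node back through the stored predecessors
    path = [int(costs[-1].argmin())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]

frames = [np.array([[0, 0], [5, 5]]),
          np.array([[0.5, 0.2], [5.2, 5.1]]),
          np.array([[1.0, 0.4], [9.0, 9.0]])]
print(best_trajectory(frames))   # index of the chosen detection in each frame
```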
{"title":"Detecting and tracking moving objects for video surveillance","authors":"I. Cohen, G. Medioni","doi":"10.1109/CVPR.1999.784651","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784651","url":null,"abstract":"We address the problem of detection and tracking of moving objects in a video stream obtained from a moving airborne platform. The proposed method relies on a graph representation of moving objects which allows to derive and maintain a dynamic template of each moving object by enforcing their temporal coherence. This inferred template along with the graph representation used in our approach allows us to characterize objects trajectories as an optimal path in a graph. The proposed tracker allows to deal with partial occlusions, stop and go motion in very challenging situations. We demonstrate results on a number of different real sequences. We then define an evaluation methodology to quantify our results and show how tracking overcome detection errors.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"18 1","pages":"319-325 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74428868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 250
Object recognition with color cooccurrence histograms
Peng Chang, J. Krumm
We use the color cooccurrence histogram (CH) for recognizing objects in images. The color CH keeps track of the number of pairs of certain colored pixels that occur at certain separation distances in image space. The color CH adds geometric information to the normal color histogram, which abstracts away all geometry. We compute model CHs based on images of known objects taken from different points of view. These model CHs are then matched to subregions in test images to find the object. By adjusting the number of colors and the number of distances used in the CH, we can adjust the tolerance of the algorithm to changes in lighting, viewpoint, and the flexibility of the object. We develop a mathematical model of the algorithm's false alarm probability and use this as a principled way of picking most of the algorithm's adjustable parameters. We demonstrate our algorithm on different objects, showing that it recognizes objects in spite of confusing background clutter, partial occlusions, and flexing of the object.
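A minimal sketch of the data structure itself, with hypothetical quantization and a single pixel offset standing in for the paper's set of distance bins:

```python
# Count pairs of quantized colors (c1, c2) separated by a fixed pixel offset.
import numpy as np

def color_cooccurrence(image, n_colors=4, offset=(0, 1)):
    """image: (H, W, 3) uint8 array. Returns an (n^3, n^3) pair-count matrix."""
    q = (image.astype(int) * n_colors) // 256          # quantize each channel
    labels = q[..., 0] * n_colors**2 + q[..., 1] * n_colors + q[..., 2]
    dy, dx = offset
    a = labels[max(0, -dy):labels.shape[0] - max(0, dy),
               max(0, -dx):labels.shape[1] - max(0, dx)]
    b = labels[max(0, dy):, max(0, dx):][:a.shape[0], :a.shape[1]]
    ch = np.zeros((n_colors**3, n_colors**3), dtype=int)
    np.add.at(ch, (a.ravel(), b.ravel()), 1)
    return ch

img = np.random.default_rng(1).integers(0, 256, (32, 32, 3)).astype(np.uint8)
ch = color_cooccurrence(img)
print(ch.sum(), ch.shape)    # number of counted pairs and histogram size
```

Matching a model CH to an image subregion can then be done with any histogram intersection or similarity measure; the choice above of 4 colors per channel is only illustrative.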
{"title":"Object recognition with color cooccurrence histograms","authors":"Peng Chang, J. Krumm","doi":"10.1109/CVPR.1999.784727","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784727","url":null,"abstract":"We use the color cooccurrence histogram (CH) for recognizing objects in images. The color CH keeps track of the number of pairs of certain colored pixels that occur at certain separation distances in image space. The color CH adds geometric information to the normal color histogram, which abstracts away all geometry. We compute model CHs based on images of known objects taken from different points of view. These model CHs are then matched to subregions in test images to find the object. By adjusting the number of colors and the number of distances used in the CH, we can adjust the tolerance of the algorithm to changes in lighting, viewpoint, and the flexibility of the object We develop a mathematical model of the algorithm's false alarm probability and use this as a principled way of picking most of the algorithm's adjustable parameters. We demonstrate our algorithm on different objects, showing that it recognizes objects in spite of confusing background clutter partial occlusions, and flexing of the object.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"47 1","pages":"498-504 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80570817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 226
Illumination distribution from shadows
Imari Sato, Yoichi Sato, K. Ikeuchi
The image irradiance of a three-dimensional object is known to be a function of three components: the distribution of light sources, the shape, and the reflectance of a real object surface. In the past, recovering the shape and reflectance of an object surface from the recorded image brightness has been intensively investigated. On the other hand, there has been little progress in recovering illumination from knowledge of the shape and reflectance of a real object. In this paper, we propose a new method for estimating the illumination distribution of a real scene from the image brightness observed on a real object surface in that scene. More specifically, we recover the illumination distribution of the scene from a radiance distribution inside shadows cast by an object of known shape onto another object surface of known shape and reflectance. By using the occlusion information of the incoming light, we are able to reliably estimate the illumination distribution of a real scene, even in a complex illumination environment.
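A hedged sketch of the linear system this formulation implies for a Lambertian surface: brightness is a visibility-weighted, cosine-weighted mixture of the radiances of discretized light directions, which can then be recovered by least squares. All geometry and visibility below are synthetic.

```python
# Recover per-direction radiances from shadowed brightness samples.
import numpy as np

rng = np.random.default_rng(2)
n_lights, n_pixels = 8, 200
directions = rng.standard_normal((n_lights, 3))
directions[:, 2] = np.abs(directions[:, 2]) + 0.2            # keep lights above the horizon
directions /= np.linalg.norm(directions, axis=1, keepdims=True)

normal = np.array([0.0, 0.0, 1.0])                           # flat ground plane
visibility = rng.integers(0, 2, (n_pixels, n_lights))        # 0 inside a cast shadow
cosine = np.clip(directions @ normal, 0.0, None)             # foreshortening per light

L_true = rng.uniform(0.0, 1.0, n_lights)                     # unknown radiances
A = visibility * cosine[None, :]                             # per-pixel mixing matrix
brightness = A @ L_true + 0.01 * rng.standard_normal(n_pixels)

L_est, *_ = np.linalg.lstsq(A, brightness, rcond=None)       # recover illumination
print(np.round(L_true, 2), np.round(L_est, 2))
```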
{"title":"Illumination distribution from shadows","authors":"Imari Sato, Yoichi Sato, K. Ikeuchi","doi":"10.1109/CVPR.1999.786956","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786956","url":null,"abstract":"The image irradiance of a three-dimensional object is known to be the function of three components: the distribution of light sources, the shape, and reflectance of a real object surface. In the past, recovering the shape and reflectance of an object surface from the recorded image brightness has been intensively investigated. On the other hand, there has been little progress in recovering illumination from the knowledge of the shape and reflectance of a real object. In this paper, we propose a new method for estimating the illumination distribution of a real scene from image brightness observed on a real object surface in that scene. More specifically, we recover the illumination distribution of the scene from a radiance distribution inside shadows cast by an object of known shape onto another object surface of known shape and reflectance. By using the occlusion information of the incoming light, we are able to reliably estimate the illumination distribution of a real scene, even in a complex illumination environment.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"211 1","pages":"306-312 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78125149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 80
Model based segmentation of nuclei
G. Cong, B. Parvin
A new approach for segmentation of nuclei observed with an epi-fluorescence microscope is presented. The technique is model based and uses local feature activities such as step-edge segments, roof-edge segments, and concave corners to construct a set of initial hypotheses. These local feature activities are extracted using either local or global operators to form a possible set of hypotheses. Each hypothesis is expressed as a hyperquadric for better stability, compactness, and error handling. The search space is expressed as an assignment matrix with an appropriate cost function to ensure local adjacency and global consistency. Each possible configuration of a set of nuclei defines a path, and the path with the least error corresponds to the best representation. This result is then presented to an operator who verifies and eliminates a small number of errors.
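A generic sketch of the assignment step only, with the Hungarian algorithm standing in for the paper's path search over the assignment matrix and purely synthetic costs:

```python
# Assign feature-group hypotheses to candidate nuclei by minimizing a cost matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
fragments = rng.uniform(0, 100, (6, 2))              # e.g. centroids of edge-segment groups
nuclei = fragments[:4] + rng.normal(0, 2, (4, 2))    # candidate nuclei near some fragments

cost = np.linalg.norm(fragments[:, None, :] - nuclei[None, :, :], axis=-1)
rows, cols = linear_sum_assignment(cost)             # globally consistent assignment
for r, c in zip(rows, cols):
    print(f"fragment {r} -> nucleus {c} (cost {cost[r, c]:.1f})")
```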
{"title":"Model based segmentation of nuclei","authors":"G. Cong, B. Parvin","doi":"10.1109/CVPR.1999.786948","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786948","url":null,"abstract":"A new approach for segmentation of nuclei observed with an epi-fluorescence microscope is presented. The technique is model based and uses local feature activities such as step-edge segments, roof-edge segments, and concave corners to construct a set of initial hypotheses. These local feature activities are extracted using either local or global operators to form a possible set of hypotheses. Each hypothesis is expressed as a hyperquadric for better stability, compactness, and error handling. The search space is expressed as an assignment matrix with an appropriate cost function to ensure local adjacency, and global consistency. Each possible configuration of a set of nuclei defines a path, and the path with the least error corresponds to best representation. This result is then presented to an operator who verifies and eliminates a small number of errors.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"79 1","pages":"256-261 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90963933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 67
Image interpolation by joint view triangulation
M. Lhuillier, Long Quan
Creating novel views by interpolating prestored images, or view morphing, has many applications in visual simulation. We present in this paper a new method for automatically interpolating two images which tackles the two most difficult problems of morphing due to the lack of depth information: pixel matching and visibility handling. We first describe a quasi-dense matching algorithm based on region growing with a best-first strategy for match propagation. Then, we describe a robust construction of matched planar patches using local geometric constraints encoded by a homography. After that we introduce a novel representation, joint view triangulation, for visible and half-occluded patches in two images to handle their visibility during the creation of the new view. Finally, we demonstrate these techniques on real image pairs.
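A minimal sketch of best-first match propagation under simplified assumptions (plain ZNCC on 5x5 patches, a one-pixel propagation neighbourhood, and none of the paper's uniqueness or gradient tests):

```python
# Grow quasi-dense matches from seeds with a priority queue ordered by correlation.
import heapq
import numpy as np

def zncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def propagate(left, right, seeds, win=2, max_shift=1):
    h, w = left.shape
    matched, heap = {}, []
    for (x1, y1, x2, y2) in seeds:
        heapq.heappush(heap, (-1.0, x1, y1, x2, y2))
    while heap:
        _, x1, y1, x2, y2 = heapq.heappop(heap)
        if (x1, y1) in matched:
            continue
        matched[(x1, y1)] = (x2, y2)
        for dx in (-1, 0, 1):                          # propagate to immediate neighbours
            for dy in (-1, 0, 1):
                u1, v1 = x1 + dx, y1 + dy
                if (u1, v1) in matched or not (win <= u1 < w - win and win <= v1 < h - win):
                    continue
                best = None
                for sx in range(-max_shift, max_shift + 1):   # search near the parent match
                    for sy in range(-max_shift, max_shift + 1):
                        u2, v2 = x2 + dx + sx, y2 + dy + sy
                        if not (win <= u2 < w - win and win <= v2 < h - win):
                            continue
                        s = zncc(left[v1-win:v1+win+1, u1-win:u1+win+1],
                                 right[v2-win:v2+win+1, u2-win:u2+win+1])
                        if best is None or s > best[0]:
                            best = (s, u1, v1, u2, v2)
                if best and best[0] > 0.8:
                    heapq.heappush(heap, (-best[0],) + best[1:])
    return matched

rng = np.random.default_rng(4)
left = rng.uniform(0, 1, (40, 40))
right = np.roll(left, 2, axis=1)                  # pure 2-pixel horizontal shift
m = propagate(left, right, seeds=[(10, 10, 12, 10)])
print(len(m), "pixels matched")
```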
{"title":"Image interpolation by joint view triangulation","authors":"M. Lhuillier, Long Quan","doi":"10.1109/CVPR.1999.784621","DOIUrl":"https://doi.org/10.1109/CVPR.1999.784621","url":null,"abstract":"Creating novel views by interpolating prestored images or view morphing has many applications in visual simulation. We present in this paper a new method of automatically interpolating two images which tackles two most difficult problems of morphing due to the lack of depth informational pixel matching and visibility handling. We first describe a quasi-dense matching algorithm based on region growing with the best first strategy for match propagation. Then, we describe a robust construction of matched planar patches using local geometric constraints encoded by a homography. After that we introduce a novel representation, joint view triangulation, for visible and half-occluded patches in two images to handle their visibility during the creation of new view. Finally we demonstrate these techniques on real image pairs.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"33 1","pages":"139-145 Vol. 2"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84529897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 67
Eigen-texture method: Appearance compression based on 3D model
K. Nishino, Yoichi Sato, K. Ikeuchi
Image-based and model-based methods are two representative rendering methods for generating virtual images of objects from their real images. Extensive research on these two methods has been conducted in the CV and CG communities. However, both methods still have several drawbacks when it comes to applying them to mixed reality, where we integrate such virtual images with real background images. To overcome these difficulties, we propose a new method which we refer to as the Eigen-Texture method. The proposed method samples appearances of a real object under various illumination and viewing conditions, and compresses them in the 2D coordinate system defined on the 3D model surface. The 3D model is generated from a sequence of range images. The Eigen-Texture method is practical because it does not require any detailed reflectance analysis of the object surface, and it has great advantages due to the accurate 3D geometric models. This paper describes the method and reports on its implementation.
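A minimal sketch of the compression step, using an SVD of mean-centred textures as a stand-in for the eigen decomposition; the textures, their rank, and the number of retained components are synthetic choices.

```python
# Compress many appearances of one surface patch into a small set of eigen coefficients.
import numpy as np

rng = np.random.default_rng(5)
n_views, patch = 60, (16, 16)
basis_true = rng.standard_normal((3, patch[0] * patch[1]))      # 3 underlying appearance modes
weights = rng.standard_normal((n_views, 3))
textures = weights @ basis_true + 0.01 * rng.standard_normal((n_views, patch[0] * patch[1]))

mean = textures.mean(axis=0)
U, S, Vt = np.linalg.svd(textures - mean, full_matrices=False)
k = 3
codes = U[:, :k] * S[:k]                      # k numbers per view instead of 256 pixels
reconstructed = codes @ Vt[:k] + mean

err = np.abs(reconstructed - textures).max()
ratio = (n_views * k + k * textures.shape[1]) / textures.size
print(f"max reconstruction error {err:.3f}, storage ratio {ratio:.2f}")
```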
{"title":"Eigen-texture method: Appearance compression based on 3D model","authors":"K. Nishino, Yoichi Sato, K. Ikeuchi","doi":"10.1109/CVPR.1999.787003","DOIUrl":"https://doi.org/10.1109/CVPR.1999.787003","url":null,"abstract":"Image-based and model-based methods are two representative rendering methods for generating virtual images of objects from their real images. Extensive research on these two methods has been made in CV and CG communities. However, both methods still have several drawbacks when it comes to applying them to the mixed reality where we integrate such virtual images with real background images. To overcome these difficulties, we propose a new method which we refer to as the Eigen-Texture method. The proposed method samples appearances of a real object under various illumination and viewing conditions, and compresses them in the 2D coordinate system defined on the 3D model surface. The 3D model is generated from a sequence of range images. The Eigen-Texture method is practical because it does not require any detailed reflectance analysis of the object surface, and has great advantages due to the accurate 3D geometric models. This paper describes the method, and reports on its implementation.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"21 1","pages":"618-624 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85416334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 117
Simultaneous depth recovery and image restoration from defocused images
A. Rajagopalan, S. Chaudhuri
We propose a method for simultaneous recovery of depth and restoration of scene intensity, given two defocused images of a scene. The space-variant blur parameter and the focused image of the scene are modeled as Markov random fields (MRFs). Line fields are included to preserve discontinuities. The joint posterior distribution of the blur parameter and the intensity process is examined for the locality property, and we derive an important result that the posterior is again Markov. The result enables us to obtain the maximum a posteriori (MAP) estimates of the blur parameter and the focused image, within reasonable computational limits. The estimates of depth and the quality of the restored image are found to be quite good, even in the presence of discontinuities.
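A hedged sketch of the underlying depth-from-defocus observation model only, with a globally constant blur and a brute-force search over candidate relative blurs; the paper's MRF priors, line fields, and MAP estimation are omitted.

```python
# Two defocused observations of one focused scene; estimate the relative blur
# by testing which extra Gaussian blur best maps one observation onto the other.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(6)
focused = rng.uniform(0, 1, (64, 64))
sigma1, sigma2 = 1.0, 2.0                       # two camera/aperture settings
img1 = gaussian_filter(focused, sigma1)
img2 = gaussian_filter(focused, sigma2)

# relative blur: img2 ~ img1 * G(sigma_rel) with sigma_rel^2 = sigma2^2 - sigma1^2
candidates = np.linspace(0.5, 3.0, 26)
errors = [np.mean((gaussian_filter(img1, s) - img2) ** 2) for s in candidates]
sigma_rel = candidates[int(np.argmin(errors))]
print(f"estimated relative blur {sigma_rel:.2f}, true {np.sqrt(sigma2**2 - sigma1**2):.2f}")
```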
{"title":"Simultaneous depth recovery and image restoration from defocused images","authors":"A. Rajagopalan, S. Chaudhuri","doi":"10.1109/CVPR.1999.786962","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786962","url":null,"abstract":"We propose a method for simultaneous recovery of depth and restoration of scene intensity, given two defocused images of a scene. The space-variant blur parameter and the focused image of the scene are modeled as Markov random fields (MRFs). Line fields are included to preserve discontinuities. The joint posterior distribution of the blur parameter and the intensity process is examined for locality property and we derive an important result that the posterior is again Markov. The result enables us to obtain the maximum a posterior (MAP) estimates of the blur parameter and the focused image, within reasonable computational limits. The estimates of depth and the quality of the restored image are found to be quite good, even in the presence of discontinuities.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"99 1","pages":"348-353 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81019172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 15
Optimal rigid motion estimation and performance evaluation with bootstrap
B. Matei, P. Meer
A new method for 3D rigid motion estimation is derived under the most general assumption that the measurements are corrupted by inhomogeneous and anisotropic, i.e., heteroscedastic noise. This is the case, for example, when the motion of a calibrated stereo-head is to be determined from image pairs. Linearization in the quaternion space transforms the problem into a multivariate, heteroscedastic errors-in-variables (HEIV) regression, from which the rotation and translation estimates are obtained simultaneously. The significant performance improvement is illustrated, for real data, by comparison with the results of quaternion, subspace and renormalization based approaches described in the literature. Extensive use is made of bootstrap, an advanced numerical tool from statistics, both to estimate the covariances of the 3D data points and to obtain confidence regions for the rotation and translation estimates. Bootstrap enables an accurate recovery of this information using only the two image pairs serving as input.
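A hedged sketch combining a standard SVD-based rigid fit (which assumes isotropic, homogeneous noise, precisely the assumption the HEIV estimator removes) with bootstrap resampling of the point pairs to get an empirical spread of the estimates.

```python
# Rigid motion estimate plus a bootstrap spread for the recovered rotation angle.
import numpy as np

def rigid_fit(P, Q):
    """Least-squares R, t with Q ~ R @ P + t (isotropic-noise assumption)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((Q - cq).T @ (P - cp))
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflections
    R = U @ D @ Vt
    return R, cq - R @ cp

rng = np.random.default_rng(7)
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.2, -0.1, 0.3])
P = rng.uniform(-1, 1, (50, 3))
Q = P @ R_true.T + t_true + 0.01 * rng.standard_normal((50, 3))

angles = []
for _ in range(200):                                  # bootstrap replicates
    idx = rng.integers(0, len(P), len(P))
    R, t = rigid_fit(P[idx], Q[idx])
    angles.append(np.degrees(np.arctan2(R[1, 0], R[0, 0])))
print(f"rotation {np.mean(angles):.2f} deg, bootstrap std {np.std(angles):.3f} deg")
```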
{"title":"Optimal rigid motion estimation and performance evaluation with bootstrap","authors":"B. Matei, P. Meer","doi":"10.1109/CVPR.1999.786961","DOIUrl":"https://doi.org/10.1109/CVPR.1999.786961","url":null,"abstract":"A new method for 3D rigid motion estimation is derived under the most general assumption that the measurements are corrupted by inhomogeneous and anisotropic, i.e., heteroscedastic noise. This is the case, for example, when the motion of a calibrated stereo-head is to be determined from image pairs. Linearization in the quaternion space transforms the problem into a multivariate, heteroscedastic errors-in-variables (HEIV) regression, from which the rotation and translation estimates are obtained simultaneously. The significant performance improvement is illustrated, for real data, by comparison with the results of quaternion, subspace and renormalization based approaches described in the literature. Extensive use as made of bootstrap, an advanced numerical tool from statistics, both to estimate the covariances of the 3D data points and to obtain confidence regions for the rotation and translation estimates. Bootstrap enables an accurate recovery of these information using only the two image pairs serving as input.","PeriodicalId":20644,"journal":{"name":"Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)","volume":"3 1","pages":"339-345 Vol. 1"},"PeriodicalIF":0.0,"publicationDate":"1999-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82416473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 67