Latest publications: Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)

Fast multiple-baseline stereo with occlusion
M. Drouin, Martin Trudeau, S. Roy
This paper presents a new and fast algorithm for multi-baseline stereo designed to handle the occlusion problem. The algorithm is a hybrid between fast heuristic algorithms that precompute an approximate visibility and slower methods that handle visibility exactly. Our approach is based on iterative dynamic programming and computes disparity and camera visibility simultaneously. Interestingly, dynamic programming makes it possible to compute part of the visibility information exactly; the remainder is obtained through heuristics. The validity of our scheme is established using real imagery with ground truth, and it compares favorably with other state-of-the-art multi-baseline stereo algorithms.
DOI: 10.1109/3DIM.2005.40 (published 2005-06-13)
Citations: 8
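The core machinery, dynamic programming over a scanline, can be sketched in miniature. This is a generic scanline-DP stereo sketch rather than the authors' visibility-aware formulation; the cost matrix, smoothness weight, and function name are illustrative assumptions.

```python
import numpy as np

def scanline_dp(cost, smooth=1.0):
    """Minimal scanline dynamic-programming stereo: pick one disparity per
    pixel minimizing matching cost plus a |d - d'| smoothness penalty."""
    n, ndisp = cost.shape
    acc = cost.copy()                       # accumulated cost per (pixel, disparity)
    back = np.zeros((n, ndisp), dtype=int)  # backpointers for path recovery
    for x in range(1, n):
        for d in range(ndisp):
            prev = acc[x - 1] + smooth * np.abs(np.arange(ndisp) - d)
            back[x, d] = int(np.argmin(prev))
            acc[x, d] = cost[x, d] + prev[back[x, d]]
    # backtrack from the cheapest terminal disparity
    disp = np.zeros(n, dtype=int)
    disp[-1] = int(np.argmin(acc[-1]))
    for x in range(n - 1, 0, -1):
        disp[x - 1] = back[x, disp[x]]
    return disp
```

The paper's contribution layers visibility reasoning on top of such a DP pass; this sketch only shows the disparity side.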
An improved calibration technique for coupled single-row telemeter and CCD camera
R. Dupont, R. Keriven, P. Fuchs
Toward a successful 3D and textural reconstruction of urban scenes, the use of both single-row telemetric and photographic data within the same framework has proved to be a powerful technique. A necessary condition for good results is to accurately calibrate the telemetric and photographic sensors together. We present a study of this calibration process and propose an improved extrinsic calibration technique. It is based on an existing technique that consists of scanning a planar pattern in several poses, yielding a set of relative position and orientation constraints. The innovation is the use of a more appropriate laser-beam distance between telemetric points and the planar target. Moreover, we use robust methods to manage outliers at several steps of the algorithm. Improved results on both theoretical and experimental data are given.
DOI: 10.1109/3DIM.2005.19 (published 2005-06-13)
Citations: 18
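The quantity being optimized is the distance between telemetric points and the planar target. A minimal sketch of that residual, plus one possible robust gate for the outlier handling the abstract mentions; the MAD-based rule is our illustrative choice, not necessarily the authors' method.

```python
import numpy as np

def point_plane_residuals(points, normal, d):
    """Signed distances from (N,3) telemeter points to the calibration
    plane n.x + d = 0 (unit normal assumed)."""
    return points @ normal + d

def robust_inliers(res, k=3.0):
    """Flag residuals within k robust standard deviations (MAD scale)."""
    med = np.median(res)
    mad = np.median(np.abs(res - med))
    return np.abs(res - med) <= k * (1.4826 * mad + 1e-12)
```

Extrinsic calibration would then minimize these residuals over the rigid transform between telemeter and camera, using only the inliers.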
Contour point tracking by enforcement of rigidity constraints
Ricardo Oliveira, J. Costeira, J. Xavier
The aperture problem is one of the omnipresent issues in computer vision. Its local character confines point matching to highly textured areas, so points in gradient-oriented regions (such as straight lines) cannot be reliably matched. We propose a new method to overcome this problem by devising a global matching strategy within the factorization framework. We solve the n-frame correspondence problem in this context by assuming the rigidity of the scene. To this end, a geometric constraint is used that selects the matching solution resulting in a rank-4 observation matrix. The rank of the observation matrix is a function of the matching solutions associated with each image, so a simultaneous solution for all frames has to be found. An optimization procedure is used to find this solution.
DOI: 10.1109/3DIM.2005.27 (published 2005-06-13)
Citations: 3
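The rank-4 constraint is easy to probe numerically. A sketch under affine-camera assumptions (the paper works in the factorization framework; the random cameras and points here are purely illustrative):

```python
import numpy as np

def numerical_rank(W, tol=1e-8):
    """Numerical rank via singular values, with a relative threshold."""
    s = np.linalg.svd(W, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# A rigid scene seen by affine cameras factors as W = M @ S, with M (2F x 4)
# stacking the cameras and S (4 x P) the homogeneous 3D points, so a
# consistent matching solution yields an observation matrix of rank 4.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 4))                                  # 3 frames
S = np.vstack([rng.standard_normal((3, 10)), np.ones((1, 10))])  # 10 points
W = M @ S
```

The paper's matcher searches over candidate correspondences for the assignment that makes this rank condition hold; the check above is only the verification half of that loop.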
A flexible 3D modeling system based on combining shape-from-silhouette with light-sectioning algorithm
T. Terauchi, Y. Oue, K. Fujimura
In this paper we present a flexible modeling system for obtaining texture-mapped 3D geometric models. The system uses an algorithm combining shape-from-silhouette with light-sectioning. First, a rough shape model is obtained almost automatically by the shape-from-silhouette method. Next, concavities and complex parts of the object surface are obtained by the light-sectioning method with manual scanning. To apply light-sectioning to volume data, we propose a volumetric light-sectioning algorithm. Our modeling system thus enables easy and accurate generation of 3D geometric models.
DOI: 10.1109/3DIM.2005.8 (published 2005-06-13)
Citations: 10
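The shape-from-silhouette stage amounts to intersecting silhouette cones. With axis-aligned orthographic views it degenerates to the toy carve below; a real system uses calibrated perspective projections, so shapes and names here are illustrative only.

```python
import numpy as np

def carve(volume, silhouette, axis):
    """Keep a voxel only if its orthographic projection along `axis`
    falls inside the binary silhouette (crude shape-from-silhouette)."""
    mask = np.expand_dims(silhouette, axis=axis)   # add projection axis
    return volume & np.broadcast_to(mask, volume.shape)

vol = np.ones((4, 4, 4), dtype=bool)               # start from a full block
sil = np.zeros((4, 4), dtype=bool)
sil[1:3, 1:3] = True                               # object silhouette seen along z
vol = carve(vol, sil, axis=2)
```

Repeating the carve for silhouettes from several directions leaves the visual hull; the paper's light-sectioning pass then recovers the concavities that no silhouette can see.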
Euclidean reconstruction from translational motion using multiple cameras
Pär Hammarstedt, A. Heyden
We investigate the possibility of Euclidean reconstruction from translational motion, using multiple uncalibrated cameras. We show that in the case of multiple cameras viewing a translating scene, no additional constraints are given by the translational motion compared to the more general case with one camera viewing a scene undergoing a general motion. However, the knowledge of translational motion allows an intermediate affine reconstruction from each camera, and aids in the reconstruction process by simplifying several steps, resulting in a more reliable algorithm for 3D reconstruction. We also identify the critical directions of translation, for which no affine reconstruction is possible. Experiments on real and simulated data are performed to illustrate that the method works in practice.
DOI: 10.1109/3DIM.2005.36 (published 2005-06-13)
Citations: 2
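One intuition for why known translation simplifies reconstruction: under pure lateral translation of a pinhole camera, a static point's depth follows directly from its image displacement. A one-line sketch (symbols f, tx, dx are generic notation, not the paper's):

```python
def depth_from_translation(f, tx, dx):
    """Pinhole camera translating laterally by tx: a static point whose
    image moves by dx has depth Z = f * tx / dx."""
    return f * tx / dx
```

This is the degenerate single-point case; the paper builds full affine reconstructions per camera from the same principle and then upgrades them to a metric one.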
A mechanism for range image integration without image registration
L. Zagorchev, A. Goshtasby
A mechanism is introduced that automatically integrates multi-view range images without registering the images. The mechanism is based on a reference double-frame that acts as the coordinate system of the scene. A single-view range image of a scene is obtained by sweeping a laser line over the scene by hand and analyzing the acquired light stripes. Range images captured from different views of the scene are in the coordinate system of the double-frame, and thus, automatically integrate without further processing.
DOI: 10.1109/3DIM.2005.10 (published 2005-06-13)
Citations: 3
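The integration step reduces to expressing every view in the double-frame's coordinate system, after which views combine by simple union. A rigid-transform sketch; in the actual system the (R, t) per view would come from detecting the reference frame in each scan, and the values below are illustrative.

```python
import numpy as np

def to_reference(points, R, t):
    """Map an (N,3) point set from scanner coordinates into the reference
    frame's coordinate system: x' = R x + t. No pairwise registration."""
    return points @ R.T + t

# 90-degree rotation about z plus a shift along x (made-up view pose)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
```

Since every view lands in the same frame, `np.vstack` over the transformed views is already the integrated model.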
3D models from extended uncalibrated video sequences: addressing key-frame selection and projective drift
Jason Repko, M. Pollefeys
In this paper, we present an approach that reconstructs 3D models from extended video sequences captured with an uncalibrated hand-held camera. We focus on two specific issues: (1) key-frame selection; and (2) projective drift. Given a long video sequence, it is often not practical to work with all video frames. In addition, to allow for effective outlier rejection and motion estimation it is necessary to have a sufficient baseline between frames. For this purpose, we propose a key-frame selection procedure based on a robust model selection criterion. Our approach guarantees that the camera motion can be estimated reliably by analyzing the feature correspondences between three consecutive views. Another problem for long uncalibrated video sequences is projective drift: error accumulation leads to a non-projective distortion of the model, which causes the projective bases at the beginning and the end of the sequence to become inconsistent and makes self-calibration fail. We propose a self-calibration approach that is insensitive to this global projective drift. After self-calibration, triplets of key-frames are aligned using absolute orientation and hierarchically merged into a complete metric reconstruction. Next, we compute a detailed 3D surface model using stereo matching. The 3D model is textured using some of the frames.
DOI: 10.1109/3dim.2005.4 (published 2005-06-13)
Citations: 53
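A "robust model selection criterion" for key-frame selection is typically a GRIC-style score comparing a fundamental-matrix model against a homography. A simplified sketch in the spirit of Torr's criterion; the clipping form and the constants l1, l2 are our assumptions, not necessarily the paper's exact values.

```python
import numpy as np

def gric(residuals, sigma, d, k, r=4, l1=2.0, l2=4.0):
    """Simplified GRIC-style model-selection score (lower is better).
    d: dimension of the model manifold (3 for F, 2 for H),
    k: number of model parameters, r: data dimension."""
    x = (residuals / sigma) ** 2
    rho = np.minimum(x, 2.0 * (r - d))     # robustly clipped residual term
    return float(rho.sum() + l1 * d * len(residuals) + l2 * k)
```

A frame becomes a key-frame candidate when the fundamental-matrix score beats the homography score, i.e. the baseline has grown enough that epipolar geometry explains the correspondences better than a planar model.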
Acquisition of view-based 3D object models using supervised, unstructured data
Kevin Coogan, I. Green
Existing techniques for view-based 3D object recognition using computer vision rely on training the system on a particular object before it is introduced into an environment. This training often consists of taking over 100 images at predetermined points around the viewing sphere in an attempt to account for most angles for viewing the object. However, in many circumstances, the environment is well known and we only expect to see a small subset of all possible appearances. In this paper, we test the idea that under these conditions, it is possible to train an object recognition system on-the-fly using images of an object as it appears in its environment, with supervision from the user. Furthermore, because some views of an object are much more likely than others, the number of training images required can be greatly reduced.
DOI: 10.1109/3DIM.2005.15 (published 2005-06-13)
Citations: 0
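View-based recognition with on-the-fly training reduces, in its simplest form, to accumulating labelled view descriptors and classifying by nearest neighbour. A toy sketch; the 2-D descriptors and labels stand in for whatever appearance features and objects the system actually handles.

```python
import numpy as np

def classify(query, views, labels):
    """Return the label of the stored view descriptor nearest to query."""
    d = np.linalg.norm(views - np.asarray(query), axis=1)
    return labels[int(np.argmin(d))]

# "training" is just appending supervised (descriptor, label) pairs
# harvested from the live environment, instead of a dense view sphere
views = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = ["mug", "phone"]
```

Because only the views that actually occur in the environment are stored, the model stays far smaller than the 100+ images of a full view-sphere scan.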
Accuracy of 3D scanning technologies in a face scanning scenario
Chris Boehnen, P. Flynn
In this paper, we review several different 3D scanning devices. We present a method for empirical accuracy analysis and apply it to several scanners, providing an overview of their technologies. The scanners include both general-purpose and face-specific scanning devices. We focus on the face scanning scenario, although the technique should be applicable to other domains as well. The proposed method involves several calibration faces of known shape and comparisons of their scans to investigate both absolute accuracy and repeatability.
DOI: 10.1109/3DIM.2005.13 (published 2005-06-13)
Citations: 106
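The two quantities the study measures have simple definitions once scan-to-ground-truth correspondences exist (establishing those correspondences, e.g. via ICP, is out of scope for this sketch):

```python
import numpy as np

def rms_error(scan, truth):
    """Absolute accuracy: RMS distance between corresponding (N,3)
    scan points and ground-truth points."""
    return float(np.sqrt(np.mean(np.sum((scan - truth) ** 2, axis=1))))

def repeatability(scans):
    """Repeatability: mean per-point deviation from the mean surface
    across repeated (K,N,3) scans of the same target."""
    mean = np.mean(scans, axis=0)
    return float(np.mean(np.linalg.norm(scans - mean, axis=2)))
```

A scanner can score well on one metric and poorly on the other, which is why the paper reports both.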
Hierarchical coarse to fine depth estimation for realistic view interpolation
I. Geys, L. Gool
This paper presents a novel approach for view synthesis and image interpolation. The algorithm is built up hierarchically, on different structural levels rather than with a classic image pyramid. First, coarse matching is done on a 'shape basis' only: a background-foreground segmentation yields a fairly accurate contour for every incoming video stream, and inter-relating these contours is a 1D problem and as such very fast. This step is then used to compute small position-dependent bounding boxes in 3D space that enclose the underlying object. The next step is a more expensive window-based matching within the volume of these bounding boxes, limited to a number of regions around 'promising' feature points. Global regularisation is obtained by a graph cut; speed results from limiting the number of feature points. In a third step the interpolation is 'pre-rendered' and simultaneously evaluated on a per-pixel basis by computing a Birchfield dissimilarity measure on the GPU. Per-pixel parallelised operations keep the computational cost low. Finally, the badly interpolated parts are 'patched'; this per-pixel correction yields the final interpolated view at the finest level. We also deal explicitly with opacity at the borders of the foreground object.
DOI: 10.1109/3DIM.2005.52 (published 2005-06-13)
Citations: 4
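The per-pixel evaluation uses the Birchfield dissimilarity, which the paper computes on the GPU. A scalar CPU sketch of the one-sided sampling-insensitive measure for interior pixels; the full measure takes the minimum over both directions.

```python
def bt_dissimilarity(IL, IR, x, y):
    """One-sided Birchfield-Tomasi dissimilarity between left pixel x and
    right pixel y: zero if IL[x] falls inside the interval spanned by the
    linearly interpolated right intensities around y (interior pixels)."""
    lo = 0.5 * (IR[y] + IR[y - 1])      # half-pixel sample toward y-1
    hi = 0.5 * (IR[y] + IR[y + 1])      # half-pixel sample toward y+1
    imin = min(lo, hi, IR[y])
    imax = max(lo, hi, IR[y])
    return max(0.0, IL[x] - imax, imin - IL[x])
```

Unlike plain absolute difference, this measure forgives sub-pixel sampling shifts, which is what makes it suitable for the paper's per-pixel quality test of pre-rendered interpolations.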