
Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05): Latest Publications

Projection-based registration using a multi-view camera for indoor scene reconstruction
Sehwan Kim, Woontack Woo
A registration method is proposed for 3D reconstruction of an indoor environment using a multi-view camera. In general, previous methods have high computational complexity and are not robust for 3D point clouds with low precision. Thus, a projection-based registration is presented. First, depth is refined based on a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, the 3D point clouds acquired at two views are projected onto the same image plane, and a two-step integer mapping enables a modified KLT tracker to find correspondences. Then, fine registration is carried out by minimizing distance errors. Finally, a final color is evaluated using the colors of corresponding points, and an indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method reduces computational complexity by searching for correspondences within an image plane. It not only enables effective registration even for 3D point clouds with low precision, but also needs only a few views. The generated model can be adopted for interaction with, as well as navigation in, a virtual environment.
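Below is a minimal sketch of the projection-based idea in the abstract: project both point clouds onto a shared image plane, take correspondences from nearby projections, and perform fine registration by minimizing 3D distance errors with an SVD-based rigid fit. The two-step integer mapping and modified KLT of the paper are not reproduced; the pixel-distance gate and the intrinsics `K` are assumptions.

```python
# Sketch only: projection-based correspondence search plus a rigid fine-registration
# step (Kabsch/SVD). Not the paper's method; intrinsics K and the 2-pixel gate are assumed.
import numpy as np

def project(points, K):
    """Project Nx3 points onto the image plane with pinhole intrinsics K (3x3)."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]                  # Nx2 pixel coordinates

def rigid_from_correspondences(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def register(cloud_a, cloud_b, K):
    """One projection-based step: match points whose projections land on nearby
    pixels, then minimize the 3D distance errors of the matched pairs."""
    pa, pb = project(cloud_a, K), project(cloud_b, K)
    d2 = ((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1)   # projected distances
    idx = d2.argmin(axis=1)
    keep = d2[np.arange(len(pa)), idx] < 4.0                # within ~2 pixels (assumed)
    return rigid_from_correspondences(cloud_a[keep], cloud_b[idx[keep]])
```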
Citations: 5
Fast simultaneous alignment of multiple range images using index images
Takeshi Oishi, A. Nakazawa, R. Kurazume, K. Ikeuchi
This paper describes a fast and easy-to-use method for simultaneous alignment of multiple range images. The most time-consuming part of the alignment process is searching for corresponding points. Although the "inverse calibration" method quickly searches for corresponding points with complexity O(n), where n is the number of vertices, it requires look-up tables or precise sensor parameters. We therefore propose an easy-to-use method that uses an "index image": the index image can be created rapidly using graphics hardware without precise sensor parameters. For fast computation of the rigid transformation matrices of a large number of range images, we use a linearized error function and apply the incomplete Cholesky conjugate gradient (ICCG) method to solve the linear equations. Experimental results aligning a large number of range images measured with laser range sensors show the effectiveness of our method.
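A minimal sketch of the index-image idea follows: each pixel of a rendered view stores the index of the visible vertex, so a projected query point finds its correspondence with a single array lookup instead of a 3D search. The GPU rendering and the ICCG solver of the paper are not shown; the CPU z-buffer splat, intrinsics `K`, and image size are assumptions.

```python
# Sketch only: correspondence lookup via an "index image" built with a simple
# CPU z-buffer splat (the paper uses graphics hardware). K and shape are assumed.
import numpy as np

def make_index_image(vertices, K, shape):
    """Splat vertex indices into an image; the closest vertex wins each pixel."""
    h, w = shape
    index_img = -np.ones((h, w), dtype=np.int64)
    zbuf = np.full((h, w), np.inf)
    uvw = vertices @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    for i, ((u, v), z) in enumerate(zip(uv, uvw[:, 2])):
        if 0 <= v < h and 0 <= u < w and z < zbuf[v, u]:
            zbuf[v, u] = z
            index_img[v, u] = i
    return index_img

def lookup_correspondences(query_points, index_img, K):
    """Project query points and read the index image: one lookup per point."""
    h, w = index_img.shape
    uvw = query_points @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    matches = []
    for q, (u, v) in enumerate(uv):
        if 0 <= v < h and 0 <= u < w and index_img[v, u] >= 0:
            matches.append((q, int(index_img[v, u])))
    return matches
```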
Citations: 44
Projective surface matching of colored 3D scans
K. Pulli, Simo Piiroinen, T. Duchamp, W. Stuetzle
We present a new method for registering multiple 3D scans of a colored object. Each scan is regarded as a color and range image of the object recorded by a pinhole camera. Consider a pair of cameras that see overlapping parts of the object. For correct camera poses, the actual image of the overlap area in one camera matches the rendition of the overlap area as seen by the other camera. We define a mismatch score summarizing discrepancies in color, range, and silhouette between pairs of images, and we present an algorithm to efficiently minimize this mismatch score over camera poses.
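The following sketch illustrates the kind of mismatch score described above, combining color, range, and silhouette discrepancies over the overlap of two views. The weights and the step that renders one scan into the other view are assumptions; the paper's actual score and its minimization over poses are not reproduced.

```python
# Sketch only: a weighted color/range/silhouette mismatch score for two views
# already resampled into the same image plane. Weights are assumed, not from the paper.
import numpy as np

def mismatch_score(color_a, range_a, mask_a, color_b, range_b, mask_b,
                   w_color=1.0, w_range=1.0, w_sil=1.0):
    overlap = mask_a & mask_b
    if not overlap.any():
        return np.inf                                   # no common area for this pose
    color_err = np.abs(color_a[overlap] - color_b[overlap]).mean()
    range_err = np.abs(range_a[overlap] - range_b[overlap]).mean()
    sil_err = np.logical_xor(mask_a, mask_b).mean()     # silhouette disagreement
    return w_color * color_err + w_range * range_err + w_sil * sil_err
```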
Citations: 23
A complete U-V-disparity study for stereovision based 3D driving environment analysis
Zhencheng Hu, F. Lamosa, K. Uchimura
Reliable understanding of the 3D driving environment is vital for obstacle detection and adaptive cruise control (ACC) applications. Laser and millimeter-wave radars have shown good performance in measuring relative speed and distance in a highway driving environment. However, the accuracy of these systems decreases in an urban traffic environment, where more confusion occurs due to factors such as parked vehicles, guardrails, poles, and motorcycles. A stereovision-based sensing system provides an effective supplement to radar-based road scene analysis with its much wider field of view and more accurate lateral information. This paper presents an efficient solution using a stereovision-based road scene analysis algorithm that employs the "U-V-disparity" concept. This concept is used to classify a 3D road scene into relative surface planes and to characterize the features of road pavement surfaces, roadside structures, and obstacles. A real-time implementation of the disparity map calculation and the "U-V-disparity" classification is also presented.
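A minimal sketch of building U-disparity and V-disparity images from a dense disparity map is given below: the V-disparity accumulates a per-row disparity histogram (the road surface appears as a slanted line) and the U-disparity a per-column histogram (vertical obstacles appear as horizontal segments). The number of disparity levels is an assumption.

```python
# Sketch only: U-disparity and V-disparity histograms from a dense disparity map.
import numpy as np

def uv_disparity(disparity, levels=64):
    disp = np.clip(disparity.astype(int), 0, levels - 1)
    h, w = disp.shape
    v_disp = np.zeros((h, levels), dtype=np.int32)      # rows x disparity bins
    u_disp = np.zeros((levels, w), dtype=np.int32)      # disparity bins x cols
    for r in range(h):
        v_disp[r] = np.bincount(disp[r], minlength=levels)
    for c in range(w):
        u_disp[:, c] = np.bincount(disp[:, c], minlength=levels)
    return u_disp, v_disp
```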
Citations: 87
Efficient photometric stereo technique for three-dimensional surfaces with unknown BRDF
Li Shen, Takashi Machida, H. Takemura
The present paper focuses on efficient inverse rendering using a photometric stereo technique for realistic surfaces. Photometric stereo primarily assumes the Lambertian reflection model, so applying it to real, non-Lambertian surfaces in order to estimate 3D shape and spatially varying reflectance from sparse images remains difficult. In the present paper, we propose a new photometric stereo technique that efficiently recovers a full surface model, starting from a small set of photographs. The proposed technique allows the diffuse albedo to vary arbitrarily over surfaces while the non-diffuse characteristics remain constant for a material. Specifically, the basic approach is to first recover the specular reflectance parameters of the surfaces by a novel optimization procedure. These parameters are then used to estimate the diffuse reflectance and surface normal for each point. As a result, a lighting-independent model of the geometry and reflectance properties of the surface is established, which can be used to re-render the images under novel lighting via traditional rendering methods.
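For context, here is a minimal sketch of the classic Lambertian photometric-stereo baseline that the paper extends: with known light directions and several intensity images, the albedo-scaled normal is recovered per pixel by least squares. The paper's specular-parameter optimization is not shown.

```python
# Sketch only: classic Lambertian photometric stereo, I = L @ (albedo * n) per pixel.
import numpy as np

def lambertian_photometric_stereo(images, lights):
    """images: K x H x W intensities; lights: K x 3 unit light directions.
    Returns per-pixel unit normals (H x W x 3) and diffuse albedo (H x W)."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                           # K x (H*W)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)      # 3 x (H*W), G = albedo * n
    albedo = np.linalg.norm(G, axis=0)
    normals = np.where(albedo > 1e-8, G / np.maximum(albedo, 1e-8), 0.0)
    return normals.T.reshape(H, W, 3), albedo.reshape(H, W)
```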
Citations: 9
Shape reconstruction of human foot from multi-camera images based on PCA of human shape database
Jiahui Wang, H. Saito, M. Kimura, M. Mochimaru, T. Kanade
Recently, research and development on measuring and modeling the human body have been attracting much attention. Our aim is to capture the accurate shape of a human foot using 2D images acquired by multiple cameras, which can also capture the dynamic behavior of the object. In this paper, 3D active shape models are used for accurate reconstruction of the surface shape of a human foot. We apply principal component analysis (PCA) to a human shape database, so that a human foot shape can be represented by approximately 12 principal component shapes. Because of this reduction of the dimensions used to represent the object shape, we can efficiently recover the object shape from multi-camera images, even when the object shape is partially occluded in some of the input views. To demonstrate the proposed method, two kinds of experiments are presented: high-accuracy reconstruction of a human foot in a virtual reality environment with CG multi-camera images, and in the real world with eight CCD cameras. In these experiments, the recovered shape error with our method is around 2 mm, while the error is around 4 mm with the volume intersection method.
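A minimal sketch of the PCA shape model described above: training foot shapes, each flattened to a vector, are reduced to a mean shape plus a few principal component shapes, and a new shape is represented by roughly 12 coefficients. The training matrix and vertex layout are hypothetical placeholders.

```python
# Sketch only: a PCA statistical shape model (mean + principal component shapes).
import numpy as np

def build_shape_model(shapes, n_components=12):
    """shapes: M x 3N matrix of training shapes. Returns mean and components."""
    mean = shapes.mean(axis=0)
    U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, Vt[:n_components]                      # components: 12 x 3N

def reconstruct(mean, components, coeffs):
    """Rebuild a shape from its low-dimensional coefficients."""
    return mean + coeffs @ components

def project_to_model(mean, components, shape):
    """Coefficients that best explain an observed shape vector."""
    return components @ (shape - mean)
```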
Citations: 14
Identifying the interface between two sand materials
A. Kaestner, P. Lehmann, H. Fluehler
To study the behavior of water flow at interfaces between different soil materials, we made computed tomography scans of sand samples using synchrotron light. The samples were prepared with an interface between two sand materials. The contact points between grains at the interface between the sands were identified using a combination of watershed segmentation and a classifier based on grain size and location. The process from a bilevel image to a classified image is described. In the classified image five classes are represented: two for the grains and three for the contact points, representing intra- and inter-class contact points.
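Below is a minimal sketch of the watershed step on a binarized grain volume, using the distance transform to seed one marker per grain; the grain-size and location classifier of the paper is not shown, and the `min_distance` seed spacing is an assumption.

```python
# Sketch only: grain separation by watershed on the distance transform of a
# binarized CT volume (True = sand grain). Seed spacing is an assumed parameter.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def label_grains(binary):
    # distance to the pore space: grain centers become local maxima
    distance = ndi.distance_transform_edt(binary)
    # one marker per grain, seeded at the distance-transform peaks
    peaks = peak_local_max(distance, min_distance=5)
    markers = np.zeros(binary.shape, dtype=np.int32)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # flood from the markers; touching grains split along watershed lines
    return watershed(-distance, markers, mask=binary)
```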
Citations: 3
Capturing 2 1/2 D depth and texture of time-varying scenes using structured infrared light
Christian Früh, A. Zakhor
In this paper, we describe an approach to simultaneously capture visual appearance and depth of a time-varying scene. Our approach is based on projecting structured infrared (IR) light. Specifically, we project a combination of (a) a static vertical IR stripe pattern, and (b) a horizontal IR laser line sweeping up and down the scene; at the same time, the scene is captured with an IR-sensitive camera. Since IR light is invisible to the human eye, it does not disturb human subjects or interfere with human activities in the scene; in addition, it does not affect the scene's visual appearance as recorded by a color video camera. Vertical lines in the IR frames are identified using the horizontal line, intra-frame tracking, and inter-frame tracking; depth along these lines is reconstructed via triangulation. Interpolating these sparse depth lines within the foreground silhouette of the recorded video sequence, we obtain a dense depth map for every frame in the video sequence. Experimental results corresponding to a dynamic scene with a human subject in motion are presented to demonstrate the effectiveness of our proposed approach.
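A minimal sketch of the triangulation step for a structured-light stripe follows: once a pixel is known to lie on a particular stripe, its 3D position is the intersection of the camera ray through that pixel with the calibrated plane of that stripe. The stripe identification (tracking the sweeping laser line) and the interpolation stage are not shown; `K` and the plane parameters are assumed calibrated inputs.

```python
# Sketch only: ray-plane triangulation for one stripe pixel. K, plane_n, plane_d
# are assumed to come from calibration; the plane is n . X = d in camera coordinates.
import numpy as np

def triangulate_stripe_pixel(u, v, K, plane_n, plane_d):
    """Intersect the camera ray of pixel (u, v) with the stripe plane.
    Returns the 3D point in camera coordinates, or None if the ray is parallel."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])      # ray direction, origin at camera
    denom = plane_n @ ray
    if abs(denom) < 1e-9:
        return None                                     # ray parallel to the plane
    t = plane_d / denom                                 # n . (t * ray) = d
    return t * ray
```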
Citations: 27
Evaluating collinearity constraint for automatic range image registration
Yonghuai Liu, Longzhuang Li, Baogang Wei
Most existing range image registration algorithms either extract and match structural (geometric or optical) features, or estimate the motion parameters of interest from outlier-corrupted point correspondence data, in order to eliminate false matches during registration. However, the registration error and the collinearity error derived directly from the traditional closest point criterion are also capable of doing the same job, and they have the advantage of easy implementation. The purpose of this paper is to investigate which definition of collinearity is more accurate and stable in eliminating the false matches inevitably introduced by the closest point criterion. Experiments based on real images show the advantages and disadvantages of the different definitions of collinearity.
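As a simple illustration of the kind of quantity being compared, the sketch below measures how collinear three 3D points are (parallelogram area normalized by the longest side); the specific collinearity definitions evaluated in the paper are not reproduced here.

```python
# Sketch only: a generic collinearity error for three 3D points, zero when the
# points are exactly collinear. Not one of the paper's compared definitions.
import numpy as np

def collinearity_error(a, b, c):
    ab, ac = b - a, c - a
    area = np.linalg.norm(np.cross(ab, ac))             # parallelogram area
    longest = max(np.linalg.norm(ab), np.linalg.norm(ac),
                  np.linalg.norm(c - b), 1e-12)
    return area / longest
```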
Citations: 3
Accurate principal directions estimation in discrete surfaces
G. Agam, Xiaojing Tang
Accurate local surface geometry estimation in discrete surfaces is an important problem with numerous applications. Principal curvatures and principal directions can be used in applications such as shape analysis and recognition, object segmentation, adaptive smoothing, anisotropic fairing of irregular meshes, and anisotropic texture mapping. In this paper, a novel approach for accurate principal direction estimation in discrete surfaces is described. The proposed approach is based on local directional curve sampling of the surface, where the sampling frequency can be controlled. This local model has a larger number of degrees of freedom than known techniques and so can better represent the local geometry. The proposed approach is quantitatively evaluated and compared with known techniques for principal direction estimation. In order to perform an unbiased evaluation in which smoothing effects are factored out, we use a set of randomly generated Bezier surface patches for which the principal directions can be computed analytically.
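For comparison, the sketch below shows a standard baseline for principal-direction estimation: fit a local quadratic height function over the tangent plane and take the eigenvectors of the resulting second fundamental form. The paper's directional curve-sampling approach differs; the inputs `point`, `normal`, and `neighbors` are assumed to come from a mesh or point cloud.

```python
# Sketch only: principal curvatures/directions from a local quadratic fit.
import numpy as np

def principal_directions_quadratic(point, normal, neighbors):
    """Fit z = 0.5*(a*x^2 + 2*b*x*y + c*y^2) in a local tangent frame and return
    (k1, k2, dir1, dir2), with dir1/dir2 expressed in world coordinates."""
    n = normal / np.linalg.norm(normal)
    # build an orthonormal tangent frame (e1, e2, n)
    e1 = np.cross(n, [1.0, 0.0, 0.0])
    if np.linalg.norm(e1) < 1e-6:
        e1 = np.cross(n, [0.0, 1.0, 0.0])
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(n, e1)
    local = (neighbors - point) @ np.stack([e1, e2, n], axis=1)
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    A = np.stack([0.5 * x * x, x * y, 0.5 * y * y], axis=1)
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    W = np.array([[a, b], [b, c]])            # second fundamental form in (e1, e2)
    k, vec = np.linalg.eigh(W)                # eigenvalues ascending
    dirs = vec.T @ np.stack([e1, e2])         # eigen-directions back in world coords
    return k[1], k[0], dirs[1], dirs[0]       # k1 >= k2 with their directions
```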
Citations: 3