This paper proposes a "non-contact virtual clay modeling system". We developed a prototype of three-dimensional modeling system that allows users to shape "virtual clay" with their hand movements. In our proposed method, the users' hand movements are observed with multiple cameras to estimate their positions. The human hand surface and virtual clay are modeled by using subdivision surface. Using these estimated hand positions, virtual clay is shaped based on a direct free-form deformation technique. To improve processing speed, we implemented the proposed system on a PC cluster. This system proves the feasibility of an intuitive virtual clay modeling system.
{"title":"Virtual clay modeling system using multi-viewpoint images","authors":"E. Ueda, Y. Matsumoto, T. Ogasawara","doi":"10.1109/3DIM.2005.82","DOIUrl":"https://doi.org/10.1109/3DIM.2005.82","url":null,"abstract":"This paper proposes a \"non-contact virtual clay modeling system\". We developed a prototype of three-dimensional modeling system that allows users to shape \"virtual clay\" with their hand movements. In our proposed method, the users' hand movements are observed with multiple cameras to estimate their positions. The human hand surface and virtual clay are modeled by using subdivision surface. Using these estimated hand positions, virtual clay is shaped based on a direct free-form deformation technique. To improve processing speed, we implemented the proposed system on a PC cluster. This system proves the feasibility of an intuitive virtual clay modeling system.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117347755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Detecting cylinders in 3D range data using model selection criteria" (N. Gheissari, A. Bab-Hadiashar). 3DIM'05, DOI: 10.1109/3DIM.2005.30
In this paper, we use model selection criteria to decide whether two cylinders should be merged into a single cylinder or left separate. We compare and evaluate an extensive number of model selection criteria for this purpose and examine which factors affect their performance. We conclude that, for this particular application, SSC, GIC, MCAIC and CAIC perform better than the other criteria.
"A contour-based approach to 3D text labeling on triangulated surfaces" (G. Slabaugh, Viorel Mihalef, Gözde B. Ünal). 3DIM'05, DOI: 10.1109/3DIM.2005.7
This paper presents a simple and efficient method of forming a 3D text label on a 3D triangulated surface. The label is formed by projecting the 2D contours that define the text silhouette onto the triangulated surface, forming 3D contour paths. Surface polygons upon which the 3D contour paths lie are retriangulated using a novel approach that forms a polyline defining the region outside the contour. This algorithm produces labeled 3D surfaces that conform to the specifications of the STL format, making them suitable for fabrication by a rapid prototyping machine. We demonstrate the effectiveness of the algorithm in forming flat and extruded labels on non-trivial surfaces.
"Simultaneous determination of registration and deformation parameters among 3D range images" (T. Masuda, Yuichiro Hirota, K. Ikeuchi, K. Nishino). 3DIM'05, DOI: 10.1109/3DIM.2005.74
Conventional registration algorithms are mostly concerned with the rigid-body transformation parameters between a pair of 3D range images. Our framework aims to determine, in a unified manner, not only these rigid transformation parameters but also various deformation parameters, assuming that the deformation is strictly defined by a parameterized formulation derived from the deformation mechanism. While conventional registration algorithms usually calculate six parameters (three for translation and three for rotation), the proposed algorithm estimates the deformation parameters as well. In this paper, we describe how we formulated the algorithm, implemented it, and evaluated its performance.
"Toward a near optimal quad/triangle subdivision surface fitting" (G. Lavoué, F. Dupont, A. Baskurt). 3DIM'05, DOI: 10.1109/3DIM.2005.78
In this paper we present a new framework for subdivision surface fitting of arbitrary surfaces (not closed objects) represented by polygonal meshes. Our approach is particularly suited to surfaces produced by the segmentation of a mechanical or CAD object, for piecewise subdivision surface approximation. Our algorithm produces a mixed quadrangle-triangle control mesh that is near optimal in terms of face and vertex counts while remaining independent of the connectivity of the input mesh. The first step approximates the boundaries with subdivision curves and creates an initial subdivision surface by optimally linking the boundary control points with respect to the lines of curvature of the target surface. A second step then optimizes the initial control polyhedron by iteratively moving control points and enriching regions according to the error distribution. Experiments conducted on several surfaces and on a whole segmented mechanical object demonstrate the coherence and efficiency of our algorithm compared with existing methods.
"Hierarchical segmentation of range images with contour constraints" (P. Boulanger, G. Osorio, F. Prieto). 3DIM'05, DOI: 10.1109/3DIM.2005.53
This paper describes a new algorithm that segments range images into continuous parametric regions. The algorithm starts with an initial partition of small first-order regions, produced by a robust fitting algorithm constrained by the detection of depth and orientation discontinuities. It then optimally groups these regions into larger and larger regions described by parametric functions until an approximation limit is reached. Bayesian decision theory determines the locally optimal grouping and the complexity of the parametric model used to represent the range signal. After segmentation, an exact description of each region's boundary is computed from the mutual intersections of the extracted surfaces. Experimental results show a significant improvement in region boundary localization. A systematic comparison of our algorithm with the best-known algorithm in the literature is presented to highlight the contributions of this paper.
"Scale selection for classification of point-sampled 3D surfaces" (Jean-François Lalonde, R. Unnikrishnan, N. Vandapel, M. Hebert). 3DIM'05, DOI: 10.1109/3DIM.2005.71
Three-dimensional ladar data are commonly used for scene understanding on outdoor mobile robots, specifically in natural terrain. One effective method is to classify points into surfaces, linear structures, or clutter volumes using features based on the local point cloud distribution. These local features are computed from the 3D points within a support volume, and local and global variations in point density, together with the presence of multiple manifolds, make selecting the size of this support volume, or scale, challenging. In this paper, we adopt an approach inspired by recent developments in computational geometry (Mitra et al., 2005) and investigate automatic, data-driven scale selection to improve point cloud classification. The approach is validated on data from different sensors in various environments, classified into different terrain types (vegetation, solid surface and linear structure).
"Further improving geometric fitting" (K. Kanatani). 3DIM'05, DOI: 10.1109/3DIM.2005.49
We give a formal definition of geometric fitting in a way that suits computer vision applications. We point out that the performance of geometric fitting should be evaluated in the limit of small noise rather than in the limit of a large number of data as recommended in the statistical literature. Taking the KCR lower bound as an optimality requirement and focusing on the linearized constraint case, we compare the accuracy of Kanatani's renormalization with maximum likelihood (ML) approaches including the FNS of Chojnacki et al. and the HEIV of Leedan and Meer. Our analysis reveals the existence of a method superior to all these.
"Realistic human head modeling with multi-view hairstyle reconstruction" (Xiaolan Li, H. Zha). 3DIM'05, DOI: 10.1109/3DIM.2005.67
We present a method for constructing photorealistic 3D head models from color images and a geometric head model of a specific person. With a simple experimental setup, we employ a user-assisted technique to register the uncalibrated images with the geometric model. A weighted averaging method is then used to extract a panoramic texture map from the input images. To recover the person's hairstyle, a virtual photo plane is defined according to the true photo plane on which one of the input images was recorded; it provides hints for computing the 3D positions of the visual contour points of the hair from images taken at different viewpoints. Finally, more hairs are grown to cover the whole scalp region using an interpolation method on the 3D scalp mesh surface.
"The Planar: a mobile VR tool with pragmatic pose estimation for generation and manipulation of 3D data in industrial environments" (Ralph Schönfelder, J. Baur, Frank Spenling). 3DIM'05, DOI: 10.1109/3DIM.2005.77
We present a prototype of the Planar, a novel input/output device designed for applications centered on the generation and manipulation of 3D data, e.g. CAD or styling. Technically, the Planar offers a spatially aware, pen-sensitive display mounted on an adjustable, scooter-like autonomous platform. The movable screen, with six degrees of freedom, can act like a window into 3D virtual environments and allows efficient 2D and 3D interaction at the same time. We report on the pose estimation system of the Planar's display, focusing on two enhanced optical mice combined to track horizontal 2D position and orientation. Results of this pragmatic approach are presented and discussed. To demonstrate the potential of the Planar, we show a small review/sketching/annotation application. The overall goal of this work is to contribute to the development of real VR applications.