Gabriele Guidi, L. Micoli, M. Russo, B. Frischer, M. D. Simone, A. Spinetti, Luca Carosso
This paper describes 3D acquisition and modeling of the "Plastico di Roma antica", a large plaster-of-Paris model of imperial Rome (16×17 meters) created in the last century. Its overall size demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details, typical of small objects: houses are a few centimeters high; their doors, windows, etc. are smaller than 1 cm. The approach followed to resolve this "contradiction" is described. The result is a huge but precise 3D model created using a special metrology laser radar. We give an account of the procedures for reorienting the large point clouds obtained after each acquisition step (50-60 million points) into a single reference system by measuring fixed redundant reference points. Finally, we show how the data set can be divided into 2×2 meter sub-areas to allow data merging and mesh editing.
Gabriele Guidi, L. Micoli, M. Russo, B. Frischer, M. D. Simone, A. Spinetti, Luca Carosso, "3D digitization of a large model of imperial Rome," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.2
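The reorientation step described in this abstract, bringing each scan's point cloud into a single reference system via fixed redundant reference points, amounts to a rigid-body least-squares fit. A minimal sketch of the standard SVD-based (Kabsch) solution, with invented reference-point coordinates standing in for the measured targets:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t such that R @ src_i + t ~ dst_i."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: six fixed reference targets, seen in the world frame and
# in a scanner frame rotated by 30 degrees and shifted
rng = np.random.default_rng(0)
world = rng.uniform(0.0, 16.0, size=(6, 3))
angle = np.radians(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, 1.0, 0.5])
scan = world @ R_true.T + t_true          # targets as measured by the scanner

R, t = rigid_fit(scan, world)             # transform scanner frame -> world frame
aligned = scan @ R.T + t
err = np.abs(aligned - world).max()
```

For noise-free correspondences the fit is exact; with real measurements the residual `err` quantifies the registration accuracy of each station.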
We present a new method for registering multiple 3D scans of a colored object. Each scan is regarded as a color and range image of the object recorded by a pinhole camera. Consider a pair of cameras that see overlapping parts of the object. For correct camera poses, the actual image of the overlap area in one camera matches the rendition of the overlap area as seen by the other camera. We define a mismatch score summarizing discrepancies in color, range, and silhouette between pairs of images, and we present an algorithm to efficiently minimize this mismatch score over camera poses.
K. Pulli, Simo Piiroinen, T. Duchamp, W. Stuetzle, "Projective surface matching of colored 3D scans," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.65
Reliable understanding of the 3D driving environment is vital for obstacle detection and adaptive cruise control (ACC) applications. Laser or millimeter wave radars have shown good performance in measuring relative speed and distance in a highway driving environment. However, the accuracy of these systems decreases in an urban traffic environment, where more confusion occurs due to factors such as parked vehicles, guardrails, poles and motorcycles. A stereovision based sensing system provides an effective supplement to radar-based road scene analysis with its much wider field of view and more accurate lateral information. This paper presents an efficient solution: a stereovision based road scene analysis algorithm that employs the "U-V-disparity" concept. This concept is used to classify a 3D road scene into relative surface planes and characterize the features of road pavement surfaces, roadside structures and obstacles. Real-time implementation of the disparity map calculation and the "U-V-disparity" classification is also presented.
Zhencheng Hu, F. Lamosa, K. Uchimura, "A complete U-V-disparity study for stereovision based 3D driving environment analysis," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.6
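The "U-V-disparity" representation this abstract refers to is built by histogramming a disparity map along image rows (V-disparity) and along image columns (U-disparity): a flat road then projects to a slanted line in the V-disparity image, while a fronto-parallel obstacle projects to a short segment of constant disparity. A minimal numpy sketch on a tiny synthetic disparity map (scene values invented for illustration):

```python
import numpy as np

def uv_disparity(disp, d_max):
    """Row-wise and column-wise disparity histograms of an integer disparity map."""
    h, w = disp.shape
    v_disp = np.zeros((h, d_max + 1), dtype=int)   # one disparity histogram per row
    u_disp = np.zeros((d_max + 1, w), dtype=int)   # one disparity histogram per column
    for v in range(h):
        v_disp[v] = np.bincount(disp[v], minlength=d_max + 1)
    for u in range(w):
        u_disp[:, u] = np.bincount(disp[:, u], minlength=d_max + 1)
    return u_disp, v_disp

# Synthetic 8x8 scene: a ground plane whose disparity grows toward the bottom
# of the image (row v has disparity v), plus an obstacle of constant
# disparity 5 occupying rows 2..4, columns 2..4
disp = np.repeat(np.arange(8), 8).reshape(8, 8)
disp[2:5, 2:5] = 5
u_disp, v_disp = uv_disparity(disp, d_max=7)
```

In `v_disp` the ground plane shows up as strong counts along the diagonal (disparity equal to the row index), and the obstacle as off-diagonal mass at disparity 5; in `u_disp` the obstacle appears as elevated counts at disparity 5 in its columns.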
This paper presents a new method for reconstructing animated, anatomy-based facial models of individuals from range data with minimal manual intervention. A prototype model with a multi-layer skin-muscle-skull structure serves as the starting point for our method. After the global adaptation, the skin mesh of the prototype model is represented as a dynamic deformable model which is deformed to fit scanned data according to internal forces stemming from the elastic properties of the surface and external forces produced from the scanned data points and features. The underlying muscle layer, which consists of three types of facial muscles, is automatically adapted. According to the adapted skin and muscle structures, a set of automatically generated skull feature points is transformed to drive a volume morphing of the template skull model for skull fitting. The reconstructed model realistically reproduces the shape and features of a specific person and can be animated instantly.
Yu Zhang, T. Sim, C. Tan, "From range data to animated anatomy-based faces: a model adaptation method," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.48
The present paper focuses on efficient inverse rendering using a photometric stereo technique for realistic surfaces. Conventional photometric stereo assumes only the Lambertian reflection model; for non-Lambertian surfaces, applying it to real surfaces in order to estimate 3D shape and spatially varying reflectance from sparse images remains difficult. In the present paper, we propose a new photometric stereo technique that efficiently recovers a full surface model, starting from a small set of photographs. The proposed technique allows diffuse albedo to vary arbitrarily over surfaces while non-diffuse characteristics remain constant for a material. Specifically, the basic approach is to first recover the specular reflectance parameters of the surfaces by a novel optimization procedure. These parameters are then used to estimate the diffuse reflectance and surface normal for each point. As a result, the proposed method establishes a lighting-independent model of the geometry and reflectance properties of the surface, which can be used to re-render the images under novel lighting via traditional rendering methods.
Li Shen, Takashi Machida, H. Takemura, "Efficient photometric stereo technique for three-dimensional surfaces with unknown BRDF," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.35
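As background to the paper's extension beyond Lambert, the classic Lambertian photometric-stereo step it builds on solves, per pixel, I = rho * (L n) for the albedo rho and unit normal n, given three or more known light directions. A minimal sketch with synthetic light directions and intensities (not the paper's optimization, just the Lambertian baseline):

```python
import numpy as np

# Three known, linearly independent light directions (rows), normalized
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)

# Ground-truth surface point: albedo 0.8, normal tilted toward +x
n_true = np.array([0.3, 0.0, 1.0])
n_true = n_true / np.linalg.norm(n_true)
rho_true = 0.8
I = rho_true * L @ n_true          # Lambertian intensities, no shadows

# Photometric stereo: solve L g = I with g = rho * n, then split magnitude/direction
g, *_ = np.linalg.lstsq(L, I, rcond=None)
rho = np.linalg.norm(g)
n = g / rho
```

With exactly three lights the system is square and the recovery is exact; with more lights the same least-squares step averages out noise.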
Jiahui Wang, H. Saito, M. Kimura, M. Mochimaru, T. Kanade
Recently, research and development on measuring and modeling the human body has attracted much attention. Our aim is to capture the accurate shape of a human foot using 2D images acquired by multiple cameras, which can capture the dynamic behavior of the object. In this paper, 3D active shape models are used for accurate reconstruction of the surface shape of a human foot. We apply principal component analysis (PCA) to a human shape database, so that a human foot shape can be represented by approximately 12 principal component shapes. Because of this reduction in the dimensionality of the shape representation, we can efficiently recover the object shape from multi-camera images, even when the object is partially occluded in some of the input views. To demonstrate the proposed method, two kinds of experiments are presented: high-accuracy reconstruction of a human foot in a virtual reality environment with CG multi-camera images, and in the real world with eight CCD cameras. In those experiments, the recovered shape error with our method is around 2 mm, while the error is around 4 mm with the volume intersection method.
Jiahui Wang, H. Saito, M. Kimura, M. Mochimaru, T. Kanade, "Shape reconstruction of human foot from multi-camera images based on PCA of human shape database," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.73
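The dimensionality reduction this abstract describes, representing a foot by roughly 12 PCA coefficients, can be sketched generically: build a shape basis from a database of corresponded point sets, then represent and reconstruct any shape from a few coefficients. The toy "database" below is random and lies in a hidden 12-dimensional subspace; in the paper the basis comes from real foot scans.

```python
import numpy as np

rng = np.random.default_rng(1)
n_shapes, n_points = 40, 100
# Toy database: each row is one shape, 100 corresponded 3D points flattened
# into 300 numbers, generated from a hidden 12-dimensional subspace plus a mean
hidden = rng.normal(size=(n_shapes, 12))
basis_true = rng.normal(size=(12, n_points * 3))
shapes = 50.0 + hidden @ basis_true

# PCA of the shape database
mean = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
k = 12
components = Vt[:k]                          # principal component shapes

# Represent one shape by k coefficients, then reconstruct it from them
coeffs = (shapes[0] - mean) @ components.T
recon = mean + coeffs @ components
err = np.abs(recon - shapes[0]).max()
```

Because the toy data lie exactly in the 12-dimensional subspace, the reconstruction error is numerically zero; with real scans the residual measures how much shape variability the truncated basis discards.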
To study the behavior of water flow at interfaces between different soil materials, we made computed tomography scans of sand samples using synchrotron light. The samples were prepared with an interface between two sand materials. The contact points between grains at the interface between the sands were identified using a combination of watershed segmentation and a classifier based on grain size and location. The process from a bilevel image to a classified image is described. In the classified image five classes are represented: two for the grains and three for the contact points, representing intra- and inter-class contact points.
A. Kaestner, P. Lehmann, H. Fluehler, "Identifying the interface between two sand materials," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.54
In this paper, we describe an approach to simultaneously capture visual appearance and depth of a time-varying scene. Our approach is based on projecting structured infrared (IR) light. Specifically, we project a combination of (a) a static vertical IR stripe pattern, and (b) a horizontal IR laser line sweeping up and down the scene; at the same time, the scene is captured with an IR-sensitive camera. Since IR light is invisible to the human eye, it does not disturb human subjects or interfere with human activities in the scene; in addition, it does not affect the scene's visual appearance as recorded by a color video camera. Vertical lines in the IR frames are identified using the horizontal line, intra-frame tracking, and inter-frame tracking; depth along these lines is reconstructed via triangulation. Interpolating these sparse depth lines within the foreground silhouette of the recorded video sequence, we obtain a dense depth map for every frame in the video sequence. Experimental results corresponding to a dynamic scene with a human subject in motion are presented to demonstrate the effectiveness of our proposed approach.
Christian Früh, A. Zakhor, "Capturing 2 1/2 D depth and texture of time-varying scenes using structured infrared light," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.26
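The depth reconstruction along identified stripes works by ray-plane triangulation: each labeled pixel defines a viewing ray through the camera center, which is intersected with the calibrated plane of light for that stripe. A minimal sketch with an invented pinhole camera and stripe plane (the intrinsics and plane are illustrative, not from the paper):

```python
import numpy as np

def triangulate(pixel, K, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the plane n . X = d.

    Camera center at the origin; K is the 3x3 intrinsic matrix."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    s = plane_d / (plane_n @ ray)       # scale so the ray meets the plane
    return s * ray                      # 3D point in camera coordinates

K = np.array([[500.0, 0.0, 320.0],      # focal length 500 px, principal point (320, 240)
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# Hypothetical stripe plane x = 0.2 m in camera coordinates, i.e. n . X = d
n_plane, d_plane = np.array([1.0, 0.0, 0.0]), 0.2

X = triangulate((370.0, 240.0), K, n_plane, d_plane)
```

Here the ray through pixel (370, 240) is (0.1, 0, 1), so the intersection lands at x = 0.2 m and depth z = 2 m; running this per labeled stripe pixel yields the sparse depth lines that are then interpolated within the silhouette.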
Most existing range image registration algorithms either extract and match structural (geometric or optical) features or estimate the motion parameters of interest from outlier-corrupted point correspondence data in order to eliminate false matches during image registration. However, the registration error and the collinearity error derived directly from the traditional closest point criterion are also capable of doing the same job, and have the advantage of easy implementation. The purpose of this paper is to investigate which definition of collinearity is more accurate and stable in eliminating the false matches inevitably introduced by the closest point criterion. Experiments on real images show the advantages and disadvantages of the different definitions of collinearity.
Yonghuai Liu, Longzhuang Li, Baogang Wei, "Evaluating collinearity constraint for automatic range image registration," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.37
Accurate local surface geometry estimation in discrete surfaces is an important problem with numerous applications. Principal curvatures and principal directions can be used in applications such as shape analysis and recognition, object segmentation, adaptive smoothing, anisotropic fairing of irregular meshes, and anisotropic texture mapping. In this paper, a novel approach for accurate principal direction estimation in discrete surfaces is described. The proposed approach is based on local directional curve sampling of the surface, where the sampling frequency can be controlled. This local model has a larger number of degrees of freedom than known techniques and so can better represent the local geometry. The proposed approach is quantitatively evaluated and compared with known techniques for principal direction estimation. In order to perform an unbiased evaluation in which smoothing effects are factored out, we use a set of randomly generated Bézier surface patches for which the principal directions can be computed analytically.
G. Agam, Xiaojing Tang, "Accurate principal directions estimation in discrete surfaces," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.14
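A common baseline that such work compares against estimates principal directions by fitting a local quadric height field z = ax^2 + bxy + cy^2 around a point and taking the eigenvectors of the resulting shape (Weingarten) matrix. A compact sketch on samples from a known paraboloid, where the answer is exact at the origin because the tangent plane there is the xy-plane:

```python
import numpy as np

# Samples of the surface z = 2x^2 + 0.5y^2 near the origin; the principal
# curvatures at the origin are 4 and 1, with directions along x and y
rng = np.random.default_rng(2)
x, y = rng.uniform(-0.05, 0.05, size=(2, 200))
z = 2.0 * x**2 + 0.5 * y**2

# Fit z = a x^2 + b x y + c y^2 by least squares
A = np.column_stack([x**2, x * y, y**2])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

# Shape operator at the origin; its eigenpairs are the principal
# curvatures and principal directions in the tangent plane
W = np.array([[2.0 * a, b], [b, 2.0 * c]])
curvatures, directions = np.linalg.eigh(W)   # eigh returns ascending order
```

On noisy meshes this fit degrades near umbilic points and under irregular sampling, which is exactly the regime where the paper's controlled directional curve sampling is meant to help.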