J. B. Briere, M. S. Cordova, E. Galindo, G. Corkidi
Industrial fermentation processes involve the mixing of multiple phases (solid, liquid, gaseous), where the interfacial area between the phases (air bubbles, oil drops and aqueous medium) determines nutrient transfer and hence the performance of the culture. Interactions between phases give rise to complex structures in which air bubbles and small drops of the aqueous phase are trapped inside oil drops (water-in-oil-in-water). A two-dimensional observation of this phenomenon may lead to erroneous conclusions, since bubbles and droplets from different focal planes may appear overlapped. In the present work, an original strategy to solve this problem is described. Micro-stereoscopic on-line image acquisition techniques are used to obtain accurate images of the cultures for subsequent three-dimensional analysis. Using this methodology, the three-dimensional spatial positions of the trapped bubbles and droplets, which move at high speed, can be calculated in order to determine their relative concentration. To evaluate the accuracy of this technique, the results obtained with our system were compared with those obtained by an expert; an agreement of 95% was achieved. Moreover, the technique was able to evaluate 14% more bubbles and droplets, corresponding to overlaps that the expert was not able to discern in non-stereoscopic images.
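For a calibrated and rectified stereo pair, recovering the 3D position of a matched bubble or droplet reduces to triangulation from disparity. The sketch below is our illustration, not the paper's implementation; the focal length `f`, baseline `b` and principal point `(cx, cy)` are assumed known from calibration:

```python
# Minimal depth-from-disparity sketch for a rectified stereo pair.
# f: focal length in pixels, b: baseline (same units as the desired
# depth); (xl, y) and (xr, y) are the matched pixel coordinates of
# one bubble/droplet centroid in the left and right views.
def triangulate(xl, xr, y, f, b, cx, cy):
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity")
    Z = f * b / d                    # depth along the optical axis
    X = (xl - cx) * Z / f            # lateral position
    Y = (y - cy) * Z / f             # vertical position
    return X, Y, Z
```

Objects closer to the cameras produce larger disparities, which is what lets overlapping bubbles from different focal planes be separated in depth.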
J. B. Briere, M. S. Cordova, E. Galindo and G. Corkidi, "Micro-stereoscopic vision system for the determination of air bubbles and aqueous droplets content within oil drops in simulated processes of multiphase fermentations," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.57
Iterative closest point (ICP)-based tracking works well when the interframe motion is within the ICP minimum well space. For large interframe motions resulting from a limited sensor acquisition rate relative to the speed of the object motion, it suffers from slow convergence and a tendency to be stalled by local minima. A novel method is proposed to improve the performance of ICP-based tracking. The method is based upon the bounded Hough transform (BHT) which estimates the object pose in a coarse discrete pose space. Given an initial pose estimate, and assuming that the interframe motion is bounded in all 6 pose dimensions, the BHT estimates the current frame's pose. On its own, the BHT is able to track an object's pose in sparse range data both efficiently and reliably, albeit with a limited precision. Experiments on both simulated and real data show the BHT to be more efficient than a number of variants of the ICP for a similar degree of reliability. A hybrid method has also been implemented wherein at each frame the BHT is followed by a few ICP iterations. This hybrid method is more efficient than the ICP, and is more reliable than either the BHT or ICP separately.
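The voting idea behind the BHT can be illustrated with a toy version restricted to 2D integer translations: every offset inside the interframe motion bound accumulates votes from model points that land on data points, and the accumulator peak is the coarse pose estimate that ICP can then refine. This simplification is ours, not the paper's 6-DOF implementation:

```python
# Toy bounded-Hough pose vote over 2D integer translations.
# model, data: lists of (x, y) integer points; bound: the assumed
# bound on interframe motion in each translation dimension.
def bounded_hough_translation(model, data, bound):
    data_set = set(data)
    best, best_votes = (0, 0), -1
    for dx in range(-bound, bound + 1):
        for dy in range(-bound, bound + 1):
            # One vote per model point that lands on a data point.
            votes = sum((x + dx, y + dy) in data_set for x, y in model)
            if votes > best_votes:
                best, best_votes = (dx, dy), votes
    return best, best_votes
```

Because the search is exhaustive within the bound, it cannot stall in a local minimum the way ICP can; its precision is limited only by the discretization of the pose space.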
Limin Shang, P. Jasiobedzki and M. Greenspan, "Discrete pose space estimation to improve ICP-based tracking," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.33
Multimedia projectors and cameras make possible the use of structured light to solve problems such as 3D reconstruction, disparity map computation and camera or projector calibration. Each projector displays patterns over a scene viewed by a camera, thereby allowing automatic computation of camera-projector pixel correspondences. This paper introduces a new algorithm to establish this correspondence in difficult cases of image acquisition. A probabilistic model formulated as a Markov random field uses the stripe images to find the most likely correspondences in the presence of noise. Our model is specially tailored to handle the unfavorable projector-camera pixel ratios that occur in multiple-projector single-camera setups. For the case where more than one camera is used, we propose a robust approach to establish correspondences between the cameras and compute an accurate disparity map. To conduct experiments, a ground truth was first reconstructed from a high quality acquisition. Various degradations were applied to the pattern images which were then solved using our method. The results were compared to the ground truth for error analysis and showed very good performances, even near depth discontinuities.
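Stripe images for camera-projector correspondence are commonly produced with Gray-coded patterns, where each projected pattern contributes one bit of the projector column index. The decoder below is a standard structured-light baseline shown for context, not the paper's MRF model, and the function names are ours:

```python
# Per-pixel decoding of Gray-coded stripe patterns: bits[k] is 1 if
# the pixel was lit in the k-th projected pattern (MSB first).
def gray_to_binary(bits):
    out = [bits[0]]
    for b in bits[1:]:
        out.append(out[-1] ^ b)  # each binary bit = running XOR
    return out

def decode_column(bits):
    # Stripe index = integer value of the Gray-decoded bit string.
    value = 0
    for b in gray_to_binary(bits):
        value = (value << 1) | b
    return value
```

Gray codes change only one bit between adjacent stripes, so a decoding error at a stripe boundary shifts the index by at most one; the paper's MRF model addresses the harder cases where noise corrupts the observed bits themselves.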
J. Tardif and S. Roy, "A MRF formulation for coded structured light," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.11
Estimating the motion of a moving camera in an unknown environment is essential for a number of applications ranging from as-built reconstruction to augmented reality. It is a challenging problem, especially when real-time performance is required. Our approach is to estimate the camera motion while reconstructing the shape and appearance of the most salient visual features in the scene. In our 3D reconstruction process, correspondences are obtained by tracking the visual features from frame to frame with optical flow. Optical-flow-based methods have limitations in tracking salient features: larger translational motions, and even moderate rotational motions, can result in drift. We propose to augment flow-based tracking by building a landmark representation around reliably reconstructed features. A planar patch around each reconstructed feature point provides matching information that prevents drift in flow-based feature tracking and allows correspondences to be established across frames with large baselines. Applying such correspondence mappings selectively and periodically drastically improves scene and motion reconstruction while adhering to the real-time requirements. The method is experimentally shown to be both accurate and computationally efficient.
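Re-localizing a landmark patch against the current frame is typically scored with normalized cross-correlation, which is invariant to affine intensity changes. The abstract does not specify the paper's exact matching score; the sketch below is our illustration, with patches flattened to 1D intensity lists:

```python
# Normalized cross-correlation between a stored landmark patch and a
# candidate patch from the current frame (both flattened to lists of
# intensities). Scores near 1.0 indicate a reliable re-localization
# that can correct accumulated optical-flow drift.
def ncc(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den if den else 0.0
```

Because the mean and scale are normalized out, the score is unaffected by global brightness or contrast changes between the frame in which the landmark was built and the current frame.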
Xiang Zhang and Yakup Genç, "Bootstrapped real-time ego motion estimation and scene modeling," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.25
Our goal is the production of highly accurate photorealistic descriptions of the 3D world with a minimum of human interaction and increased computational efficiency. Our input is a large number of unregistered 3D range scans and 2D photographs of an urban site. The generated 3D representations, after automated registration, are useful for urban planning, historical preservation, or virtual reality (entertainment) applications. A major bottleneck in the process of 3D scene acquisition is the automated registration of a large number of geometrically complex 3D range scans in a common frame of reference. We have developed novel methods for the accurate and efficient registration of a large number of 3D range scans. The methods utilize range segmentation and feature extraction algorithms. We have also developed a context-sensitive user interface to overcome problems arising from scene symmetry.
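Once feature correspondences between two range scans are available, the registration itself reduces to a least-squares rigid transform. The sketch below is the standard Kabsch/Horn solution, given as context; the paper's pipeline adds range segmentation and feature extraction around this core step:

```python
import numpy as np

# Least-squares rigid transform (Kabsch/Horn) between matched 3D
# feature points: given P, Q as (n, 3) arrays of corresponding
# points, find R, t minimizing ||P @ R.T + t - Q||.
def rigid_transform(P, Q):
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

At least three non-collinear correspondences are needed; with more, the SVD step averages out noise in the matched features.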
Chen Chao and I. Chao, "Semi-automatic range to range registration: a feature-based method," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.72
This paper presents a new method for reconstructing animated, anatomy-based facial models of individuals from range data with minimal manual intervention. A prototype model with a multi-layer skin-muscle-skull structure serves as the starting point for our method. After a global adaptation, the skin mesh of the prototype model is represented as a dynamic deformable model, which is deformed to fit the scanned data according to internal forces stemming from the elastic properties of the surface and external forces produced from the scanned data points and features. The underlying muscle layer, which consists of three types of facial muscles, is automatically adapted. According to the adapted skin and muscle structures, a set of automatically generated skull feature points is transformed to drive a volume morphing of the template skull model for skull fitting. The reconstructed model realistically reproduces the shape and features of a specific person and can be animated instantly.
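The interplay of internal and external forces can be illustrated in one dimension: each vertex moves toward its data target while being smoothed toward its neighbours. The step below, including the gains `alpha` and `beta`, is a toy sketch of ours, not the paper's skin-mesh solver:

```python
# One explicit step of a toy 1D deformable-contour fit: the external
# force pulls each interior vertex toward its data target, the
# internal force pulls it toward the midpoint of its neighbours
# (a discrete elasticity/smoothness term). Endpoints are fixed.
def deform_step(verts, targets, alpha=0.5, beta=0.3):
    new = list(verts)
    for i in range(1, len(verts) - 1):
        internal = 0.5 * (verts[i - 1] + verts[i + 1]) - verts[i]
        external = targets[i] - verts[i]
        new[i] = verts[i] + alpha * external + beta * internal
    return new
```

Iterating such steps drives the mesh toward the scanned data while the smoothness term keeps it from snapping onto noisy outlier points.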
Yu Zhang, T. Sim and C. Tan, "From range data to animated anatomy-based faces: a model adaptation method," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.48
P. Devarakota, B. Mirbach, M. Castillo-Franco, B. Ottersten
This paper describes a 3D vision system, based on a new 3D sensor technology, for the detection and classification of occupants in a car. A new generation of so-called "smart airbags" requires information about the occupancy type and the position of the occupant. This information allows distinct control of the airbag inflation. In order to reduce the risk of injuries due to airbag deployment, the airbag can be suppressed completely in the case of a rearward-facing child seat. We propose a 3D vision system, based on a 3D optical time-of-flight (TOF) sensor, for the detection and classification of the occupancy of the passenger seat. Geometrical shape features are extracted from the 3D image sequences, and a polynomial classifier is used for the classification task. A comparison of classifier performance with principal components (eigen-images) is presented. The paper also discusses the robustness of the features under variations of the data. Full-scale tests have been conducted on a wide range of realistic situations (adults, children, child seats, etc.) that may occur in a vehicle.
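A polynomial classifier of the kind described maps polynomial expansions of the feature vector to one-of-K target vectors by least squares. The second-degree sketch below is our illustration of the general technique; the feature layout and training data are hypothetical, not the paper's:

```python
import numpy as np

# Minimal second-degree polynomial classifier: least-squares mapping
# from polynomial feature expansions to one-hot class targets.
def poly_expand(X):
    # [1, x1, x2, x1^2, x1*x2, x2^2] for each 2-feature row.
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([np.ones_like(x1), x1, x2,
                     x1 ** 2, x1 * x2, x2 ** 2], axis=1)

def train(X, labels, k):
    T = np.eye(k)[labels]                       # one-hot targets
    A, _, _, _ = np.linalg.lstsq(poly_expand(X), T, rcond=None)
    return A

def predict(A, X):
    # Class = index of the largest discriminant output.
    return np.argmax(poly_expand(X) @ A, axis=1)
```

Training is a single linear solve, which is part of why polynomial classifiers were attractive for embedded automotive applications.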
P. Devarakota, B. Mirbach, M. Castillo-Franco and B. Ottersten, "3D vision technology for occupant detection and classification," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.1
Gabriele Guidi, L. Micoli, M. Russo, B. Frischer, M. D. Simone, A. Spinetti, Luca Carosso
This paper describes 3D acquisition and modeling of the "Plastico di Roma antica", a large plaster-of-Paris model of imperial Rome (16×17 meters) created in the last century. Its overall size demands an acquisition approach typical of large structures, but it is also characterized by the extremely tiny details typical of small objects: houses are a few centimeters high; their doors, windows, etc. are smaller than 1 cm. The approach followed to resolve this "contradiction" is described. The result is a huge but precise 3D model created by using a special metrology laser radar. We give an account of the procedures for reorienting the large point clouds obtained after each acquisition step (50-60 million points) into a single reference system by means of measuring fixed redundant reference points. Finally, we show how the data set can be divided into 2×2 meter sub-areas to allow data merging and mesh editing.
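Dividing a point cloud into fixed-size ground-plane tiles is a simple binning operation. The sketch below is our illustration of the 2×2 meter subdivision idea, not the authors' software:

```python
from collections import defaultdict

# Partition a point cloud into square ground-plane tiles (2x2 m by
# default), so that huge scans can be merged and edited tile by tile.
def tile_points(points, tile=2.0):
    tiles = defaultdict(list)
    for x, y, z in points:
        key = (int(x // tile), int(y // tile))  # tile index in x, y
        tiles[key].append((x, y, z))
    return dict(tiles)
```

Each tile can then be meshed and edited independently, keeping the working set far below the 50-60 million points of a full acquisition step.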
Gabriele Guidi, L. Micoli, M. Russo, B. Frischer, M. D. Simone, A. Spinetti and Luca Carosso, "3D digitization of a large model of imperial Rome," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.2
In this paper, we propose an efficient approximation algorithm using multilevel B-splines based on quasi-interpolants. The multilevel technique uses a coarse-to-fine hierarchy to generate a sequence of bicubic B-spline functions whose sum approaches the desired interpolation function. To compute the set of control points, quasi-interpolants provide local spline approximation methods in which each B-spline coefficient depends only on data points taken from the neighborhood of the support of the corresponding B-spline. Experimental results show that smooth surface reconstructions of high accuracy can be obtained from selected sets of scattered or dense irregular samples.
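In one dimension, a cubic B-spline quasi-interpolant on a uniform grid illustrates the locality: each coefficient depends on only three neighbouring samples, so no global system is solved, and this particular rule reproduces polynomials up to degree two exactly. The sketch is ours; the paper works with bicubic surfaces and a multilevel hierarchy:

```python
# Local cubic B-spline quasi-interpolant on a uniform 1D grid:
# c[i] = (8 f[i] - f[i-1] - f[i+1]) / 6, with clamped boundaries.
def quasi_coeffs(f):
    n = len(f)
    c = [0.0] * n
    for i in range(n):
        fm = f[max(i - 1, 0)]
        fp = f[min(i + 1, n - 1)]
        c[i] = (8.0 * f[i] - fm - fp) / 6.0
    return c

def eval_at_knot(c, i):
    # Uniform cubic B-spline value at interior knot i:
    # s(i) = (c[i-1] + 4 c[i] + c[i+1]) / 6.
    return (c[i - 1] + 4.0 * c[i] + c[i + 1]) / 6.0
```

Because no linear system couples distant coefficients, updating a region of the data only requires recomputing the coefficients whose three-sample stencils overlap it, which is what makes the multilevel refinement efficient.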
Byung-Gook Lee, Joon-Jae Lee and Jaechil Yoo, "An efficient scattered data approximation using multilevel B-splines based on quasi-interpolants," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.18
Most three-dimensional acquisition systems generate several partial reconstructions that have to be registered and integrated to build a complete 3D model. In this paper, we propose a volumetric shape integration method consisting of weighted signed distance functions represented as variational implicit functions (VIF) or surfaces (VIS). Texture integration is solved similarly, using three weighted color functions also based on VIFs. Using these continuous (not grid-based) representations overcomes current limitations of volumetric methods: no memory-inefficient, resolution-limiting grid representation is required. The built-in smoothing properties of the VIS representation also improve the robustness of the final integration against noise in the input data. Experiments are performed on real-life data, as well as on noiseless and noisy synthetic data of human faces, in order to show the robustness and accuracy of the integration algorithm.
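The core of weighted signed-distance integration can be sketched discretely: the fused distance at each sample is the confidence-weighted mean of the per-scan distances, and the integrated surface is its zero set. This is our simplified discrete analogue of the continuous VIF blending, not the paper's implementation:

```python
# Weighted fusion of signed distance values from several partial
# scans. distances[k][i]: signed distance of sample i in scan k;
# weights[k][i]: confidence of that measurement (e.g. lower near
# grazing angles or depth discontinuities).
def fuse_sdf(distances, weights):
    n = len(distances[0])
    fused = []
    for i in range(n):
        num = sum(d[i] * w[i] for d, w in zip(distances, weights))
        den = sum(w[i] for w in weights)
        fused.append(num / den if den else 0.0)
    return fused
```

Averaging signed distances rather than surfaces directly is what lets noisy, partially overlapping scans cancel each other's errors near the true surface.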
P. Claes, D. Vandermeulen, L. Gool and P. Suetens, "Partial surface integration based on variational implicit functions and surfaces for 3D model building," Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), June 13, 2005. doi:10.1109/3DIM.2005.62