J. B. Briere, M. S. Cordova, E. Galindo, G. Corkidi
Industrial fermentation processes involve the mixing of multiple phases (solid, liquid, gaseous), where the interfacial area between the phases (air bubbles, oil drops and aqueous medium) determines nutrient transfer and hence the performance of the culture. Interactions between the phases give rise to complex structures in which air bubbles and small drops of the aqueous phase are trapped inside oil drops (water-in-oil-in-water). A two-dimensional observation of this phenomenon may lead to erroneous conclusions, since bubbles and droplets coming from different focal planes may appear overlapped. In the present work, an original strategy to solve this problem is described. Micro-stereoscopic on-line image acquisition techniques are used to obtain accurate images of the cultures for further three-dimensional analysis. Using this methodology, the three-dimensional spatial positions of the trapped bubbles and droplets, which move at high speed, can be calculated in order to determine their relative concentration. To evaluate the accuracy of this technique, the results obtained with our system were compared with those obtained by an expert, reaching an agreement of 95%. In addition, the technique was able to evaluate 14% more bubbles and droplets, corresponding to overlaps that the expert was not able to discern in non-stereoscopic images.
{"title":"Micro-stereoscopic vision system for the determination of air bubbles and aqueous droplets content within oil drops in simulated processes of multiphase fermentations","authors":"J. B. Briere, M. S. Cordova, E. Galindo, G. Corkidi","doi":"10.1109/3DIM.2005.57","DOIUrl":"https://doi.org/10.1109/3DIM.2005.57","url":null,"abstract":"Industrial fermentation procedures involve the mixing of multiple phases (solid, liquid, gaseous), where the interfacial area between the phases (air bubbles, oil drops and aqueous medium) determines the nutrients transfer and hence the performance of the culture. Interactions between phases occur, giving rise to the formation of complex structures containing air bubbles and small drops from the aqueous phase, trapped in oil drops (water-in-oil-in-water), A two-dimensional observation of this phenomenon may lead to an erroneous determination of the phenomena occurring since bubbles and droplets coming from different focal planes may appear overlapped. In the present work, an original strategy to solve this problem is described. Micro-stereoscopic on-line image acquisition techniques have been used, so as to obtain accurate images from the cultures for further three-dimensional analysis. Using this methodology, the three-dimensional spatial position of the trapped bubbles and droplets moving at high speed can be calculated in order to determine their relative concentration.. To evaluate the accuracy of this technique, the results obtained with our system have been compared with those obtained by an expert. An agreement of 95% was achieved. Also, this technique was able to evaluate 14% more bubbles and droplets corresponding to overlaps that the expert was not able to discern in non-stereoscopic images.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134101431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multimedia projectors and cameras make it possible to use structured light to solve problems such as 3D reconstruction, disparity map computation, and camera or projector calibration. Each projector displays patterns over a scene viewed by a camera, allowing automatic computation of camera-projector pixel correspondences. This paper introduces a new algorithm to establish this correspondence in difficult acquisition conditions. A probabilistic model formulated as a Markov random field uses the stripe images to find the most likely correspondences in the presence of noise. Our model is specially tailored to handle the unfavorable projector-camera pixel ratios that occur in multiple-projector, single-camera setups. For the case where more than one camera is used, we propose a robust approach to establish correspondences between the cameras and compute an accurate disparity map. To conduct experiments, a ground truth was first reconstructed from a high-quality acquisition. Various degradations were applied to the pattern images, which were then solved with our method. The results were compared to the ground truth for error analysis and showed very good performance, even near depth discontinuities.
{"title":"A MRF formulation for coded structured light","authors":"J. Tardif, S. Roy","doi":"10.1109/3DIM.2005.11","DOIUrl":"https://doi.org/10.1109/3DIM.2005.11","url":null,"abstract":"Multimedia projectors and cameras make possible the use of structured light to solve problems such as 3D reconstruction, disparity map computation and camera or projector calibration. Each projector displays patterns over a scene viewed by a camera, thereby allowing automatic computation of camera-projector pixel correspondences. This paper introduces a new algorithm to establish this correspondence in difficult cases of image acquisition. A probabilistic model formulated as a Markov random field uses the stripe images to find the most likely correspondences in the presence of noise. Our model is specially tailored to handle the unfavorable projector-camera pixel ratios that occur in multiple-projector single-camera setups. For the case where more than one camera is used, we propose a robust approach to establish correspondences between the cameras and compute an accurate disparity map. To conduct experiments, a ground truth was first reconstructed from a high quality acquisition. Various degradations were applied to the pattern images which were then solved using our method. The results were compared to the ground truth for error analysis and showed very good performances, even near depth discontinuities.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129461132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating the motion of a moving camera in an unknown environment is essential for a number of applications ranging from as-built reconstruction to augmented reality. It is a challenging problem, especially when real-time performance is required. Our approach is to estimate the camera motion while reconstructing the shape and appearance of the most salient visual features in the scene. In our 3D reconstruction process, correspondences are obtained by tracking the visual features from frame to frame with optical flow. Optical-flow-based tracking has limitations: larger translational motions and even moderate rotational motions often result in drift. We propose to augment flow-based tracking by building a landmark representation around reliably reconstructed features. A planar patch around each reconstructed feature point provides matching information that prevents drift in flow-based feature tracking and allows correspondences to be established across frames with large baselines. Applying such correspondence mappings selectively and periodically drastically improves scene and motion reconstruction while adhering to the real-time requirements. Experiments show the method to be both accurate and computationally efficient.
{"title":"Bootstrapped real-time ego motion estimation and scene modeling","authors":"Xiang Zhang, Yakup Genç","doi":"10.1109/3DIM.2005.25","DOIUrl":"https://doi.org/10.1109/3DIM.2005.25","url":null,"abstract":"Estimating the motion of a moving camera in an unknown environment is essential for a number of applications ranging from as-built reconstruction to augmented reality. It is a challenging problem especially when real-time performance is required. Our approach is to estimate the camera motion while reconstructing the shape and appearance of the most salient visual features in the scene. In our 3D reconstruction process, correspondences are obtained by tracking the visual features from frame to frame with optical flow tracking. Optical-flow-based tracking methods have limitations in tracking the salient features. Often larger translational motions and even moderate rotational motions can result in drifts. We propose to augment flow-based tracking by building a landmark representation around reliably reconstructed features. A planar patch around the reconstructed feature point provides matching information that prevents drifts in flow-based feature tracking and allows establishment of correspondences across the frames with large baselines. Selective and periodic such correspondence mappings drastically improve scene and motion reconstruction while adhering to the real-time requirements. The method is experimentally tested to be both accurate and computational efficient.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130236301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iterative closest point (ICP)-based tracking works well when the interframe motion is within the ICP minimum well space. For large interframe motions resulting from a limited sensor acquisition rate relative to the speed of the object motion, it suffers from slow convergence and a tendency to be stalled by local minima. A novel method is proposed to improve the performance of ICP-based tracking. The method is based upon the bounded Hough transform (BHT) which estimates the object pose in a coarse discrete pose space. Given an initial pose estimate, and assuming that the interframe motion is bounded in all 6 pose dimensions, the BHT estimates the current frame's pose. On its own, the BHT is able to track an object's pose in sparse range data both efficiently and reliably, albeit with a limited precision. Experiments on both simulated and real data show the BHT to be more efficient than a number of variants of the ICP for a similar degree of reliability. A hybrid method has also been implemented wherein at each frame the BHT is followed by a few ICP iterations. This hybrid method is more efficient than the ICP, and is more reliable than either the BHT or ICP separately.
{"title":"Discrete pose space estimation to improve ICP-based tracking","authors":"Limin Shang, P. Jasiobedzki, M. Greenspan","doi":"10.1109/3DIM.2005.33","DOIUrl":"https://doi.org/10.1109/3DIM.2005.33","url":null,"abstract":"Iterative closest point (ICP)-based tracking works well when the interframe motion is within the ICP minimum well space. For large interframe motions resulting from a limited sensor acquisition rate relative to the speed of the object motion, it suffers from slow convergence and a tendency to be stalled by local minima. A novel method is proposed to improve the performance of ICP-based tracking. The method is based upon the bounded Hough transform (BHT) which estimates the object pose in a coarse discrete pose space. Given an initial pose estimate, and assuming that the interframe motion is bounded in all 6 pose dimensions, the BHT estimates the current frame's pose. On its own, the BHT is able to track an object's pose in sparse range data both efficiently and reliably, albeit with a limited precision. Experiments on both simulated and real data show the BHT to be more efficient than a number of variants of the ICP for a similar degree of reliability. A hybrid method has also been implemented wherein at each frame the BHT is followed by a few ICP iterations. This hybrid method is more efficient than the ICP, and is more reliable than either the BHT or ICP separately.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"59 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122568856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Our goal is the production of highly accurate photorealistic descriptions of the 3D world with a minimum of human interaction and increased computational efficiency. Our input is a large number of unregistered 3D range scans and 2D photographs of an urban site. The generated 3D representations, after automated registration, are useful for urban planning, historical preservation, or virtual reality (entertainment) applications. A major bottleneck in the process of 3D scene acquisition is the automated registration of a large number of geometrically complex 3D range scans into a common frame of reference. We have developed novel methods for the accurate and efficient registration of a large number of 3D range scans. The methods utilize range segmentation and feature extraction algorithms. We have also developed a context-sensitive user interface to overcome problems emerging from scene symmetry.
{"title":"Semi-automatic range to range registration: a feature-based method","authors":"Chen Chao, I. Chao","doi":"10.1109/3DIM.2005.72","DOIUrl":"https://doi.org/10.1109/3DIM.2005.72","url":null,"abstract":"Our goal is the production of highly accurate photorealistic descriptions of the 3D world with a minimum of human interaction and increased computational efficiency. Our input is a large number of unregistered 3D and 2D photographs of an urban site. The generated 3D representations, after automated registration, are useful for urban planning, historical preservation, or virtual reality (entertainment) applications. A major bottleneck in the process of 3D scene acquisition is the automated registration of a large number of geometrically complex 3D range scans in a common frame of reference. We have developed novel methods for the accurate and efficient registration of a large number of 3D range scans. The methods utilize range segmentation and feature extraction algorithms. We have also developed a context-sensitive user interface to overcome problems emerging from scene symmetry.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124789200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Most three-dimensional acquisition systems generate several partial reconstructions that have to be registered and integrated to build a complete 3D model. In this paper, we propose a volumetric shape integration method consisting of weighted signed distance functions represented as variational implicit functions (VIF) or surfaces (VIS). Texture integration is solved similarly, using three weighted color functions also based on VIFs. Using these continuous (not grid-based) representations overcomes current limitations of volumetric methods: no memory-inefficient, resolution-limiting grid representation is required. The built-in smoothing properties of the VIS representation also improve the robustness of the final integration against noise in the input data. Experiments are performed on real-life data and on noiseless and noisy synthetic data of human faces in order to show the robustness and accuracy of the integration algorithm.
{"title":"Partial surface integration based on variational implicit functions and surfaces for 3D model building","authors":"P. Claes, D. Vandermeulen, L. Gool, P. Suetens","doi":"10.1109/3DIM.2005.62","DOIUrl":"https://doi.org/10.1109/3DIM.2005.62","url":null,"abstract":"Most three-dimensional acquisition systems generate several partial reconstructions that have to be registered and integrated for building a complete 3D model. In this paper, we propose a volumetric shape integration method, consisting of weighted signed distance functions represented as variational implicit functions (VIF) or surfaces (VIS). Texture integration is solved similarly by using three weighted color junctions also based on VIFs. Using these continuous (not grid-based) representations solves current limitations of volumetric methods: no memory inefficient and resolution limiting grid representation is required. The built-in smoothing properties of the VIS representations also improve the robustness of the final integration against noise in the input data. Experimental results are performed on real-live, noiseless and noisy synthetic data of human faces in order to show the robustness and accuracy of the integration algorithm.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"130 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121555043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose an efficient approximation algorithm using multilevel B-splines based on quasi-interpolants. The multilevel technique uses a coarse-to-fine hierarchy to generate a sequence of bicubic B-spline functions whose sum approaches the desired interpolation function. To compute the set of control points, quasi-interpolants give a procedure for deriving local spline approximation methods in which a B-spline coefficient depends only on data points taken from the neighborhood of the support of the corresponding B-spline. Experimental results show that smooth surface reconstructions with high accuracy can be obtained from a selected set of scattered or dense irregular samples.
{"title":"An efficient scattered data approximation using multilevel B-splines based on quasi-interpolants","authors":"Byung-Gook Lee, Joon-Jae Lee, Jaechil Yoo","doi":"10.1109/3DIM.2005.18","DOIUrl":"https://doi.org/10.1109/3DIM.2005.18","url":null,"abstract":"In this paper, we propose an efficient approximation algorithm using multilevel B-splines based on quasi-interpolants. Multilevel technique uses a coarse to fine hierarchy to generate a sequence of bicubic B-spline functions whose sum approaches the desired interpolation function. To compute a set of control points, quasi-interpolants gives a procedure for deriving local spline approximation methods where a B-spline coefficient only depends on data points taken from the neighborhood of the support corresponding the B-spline. Experimental results show that the smooth surface reconstruction with high accuracy can be obtained from a selected set of scattered or dense irregular samples.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122514364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Devarakota, B. Mirbach, M. Castillo-Franco, B. Ottersten
This paper describes a 3D vision system, based on a new 3D sensor technology, for the detection and classification of occupants in a car. The new generation of so-called "smart airbags" requires information about the type and position of the occupant, which allows distinct control of the airbag inflation: in order to reduce the risk of injuries due to airbag deployment, the airbag can be suppressed completely when a rear-facing child seat is detected. We propose a 3D vision system based on a 3D optical time-of-flight (TOF) sensor for the detection and classification of the occupancy of the passenger seat. Geometrical shape features are extracted from the 3D image sequences, and a polynomial classifier is used for the classification task. A comparison with classification based on principal components (eigen-images) is presented. The paper also discusses the robustness of the features to variations in the data. Full-scale tests have been conducted on a wide range of realistic situations (adults, children, child seats, etc.) which may occur in a vehicle.
{"title":"3D vision technology for occupant detection and classification","authors":"P. Devarakota, B. Mirbach, M. Castillo-Franco, B. Ottersten","doi":"10.1109/3DIM.2005.1","DOIUrl":"https://doi.org/10.1109/3DIM.2005.1","url":null,"abstract":"This paper describes a 3D vision system based on a new 3D sensor technology for the detection and classification of occupants in a car. New generation of so-called \"smart airbags\" require the information about the occupancy type and position of the occupant. This information allows a distinct control of the airbag inflation. In order to reduce the risk of injuries due to airbag deployment, the airbag can be suppressed completely in case of a child seat oriented in reward direction. In this paper, we propose a 3D vision system based on a 3D optical time-of-flight (TOF) sensor, for the detection and classification of the occupancy on the passenger seat. Geometrical shape features are extracted from the 3D image sequences. Polynomial classifier is considered for the classification task. A comparison of classifier performance with principle components (eigen-images) is presented. This paper also discusses the robustness of the features with variation of the data. The full scale tests have been conducted on a wide range of realistic situations (adults/children/child seats etc.) which may occur in a vehicle.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117261418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Takeshi Oishi, A. Nakazawa, R. Kurazume, K. Ikeuchi
This paper describes a fast and easy-to-use method for the simultaneous alignment of multiple range images. The most time-consuming part of the alignment process is the search for corresponding points. Although the "inverse calibration" method finds corresponding points quickly, in O(n) complexity where n is the number of vertices, it requires look-up tables or precise sensor parameters. We therefore propose an easy-to-use method based on an "index image", which can be created rapidly using graphics hardware without precise sensor parameters. For fast computation of the rigid transformation matrices of a large number of range images, we use a linearized error function and apply the incomplete Cholesky conjugate gradient (ICCG) method to solve the resulting linear equations. Experimental results aligning a large number of range images measured with laser range sensors show the effectiveness of our method.
{"title":"Fast simultaneous alignment of multiple range images using index images","authors":"Takeshi Oishi, A. Nakazawa, R. Kurazume, K. Ikeuchi","doi":"10.1109/3DIM.2005.41","DOIUrl":"https://doi.org/10.1109/3DIM.2005.41","url":null,"abstract":"This paper describes a fast and easy-to-use simultaneous alignment method of multiple range images. The most time consuming part of alignment process is searching corresponding points. Although \"Inverse calibration\" method quickly searches corresponding points in complexity O(n), where n is the number of vertices, the method requires some look-up tables or precise sensors parameters. Then, we propose an easy-to-use method that uses \"Index Image\": \"Index image \" can be rapidly created using graphics hardware without precise sensor's parameters. For fast computation of rigid transformation matrices of a large number of range images, we utilized linearized error function and applied incomplete Cholesky conjugate gradient (ICCG) method for solving linear equations. Some experimental results that aligned a large number of range images measured with laser range sensors show the effectiveness of our method.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126768031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A registration method is proposed for 3D reconstruction of an indoor environment using a multi-view camera. In general, previous methods have high computational complexity and are not robust for 3D point clouds with low precision. Thus, a projection-based registration is presented. First, depth values are refined based on a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, the 3D point clouds acquired at two views are projected onto the same image plane, and a two-step integer mapping enables a modified KLT tracker to find correspondences. Then, fine registration is carried out by minimizing distance errors. Finally, a final color is evaluated using the colors of corresponding points, and the indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method reduces computational complexity by searching for correspondences within an image plane. It not only enables effective registration even for 3D point clouds with low precision, but also needs only a few views. The generated model can be adopted for interaction with, as well as navigation in, a virtual environment.
{"title":"Projection-based registration using a multi-view camera for indoor scene reconstruction","authors":"Sehwan Kim, Woontack Woo","doi":"10.1109/3DIM.2005.64","DOIUrl":"https://doi.org/10.1109/3DIM.2005.64","url":null,"abstract":"A registration method is proposed for 3D reconstruction of an indoor environment using a multi-view camera. In general, previous methods have a high computational complexity and are not robust for 3D point cloud with low precision. Thus, a projection-based registration is presented. First, depth are refined based on temporal property by excluding 3D points with a large variation, and spatial property by filling holes referring neighboring 3D points. Second, 3D point clouds acquired at two views are projected onto the same image plane, and two-step integer mapping enables the modified KLT to find correspondences. Then, fine registration is carried out by minimizing distance errors. Finally, a final color is evaluated using colors of corresponding points and an indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method reduces computational complexity by searching for correspondences within an image plane. It not only enables an effective registration even for 3D point cloud with low precision, but also need only a few views. The generated model can be adopted for interaction with as well as navigation in a virtual environment.","PeriodicalId":170883,"journal":{"name":"Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2005-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122329135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}