Interactive Mapping of Indoor Building Structures through Mobile Devices
G. Pintore, Marco Agus, E. Gobbetti
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.40

We present a practical system for mapping and reconstructing multi-room indoor structures using the sensors commonly available in commodity smartphones. Our approach combines and extends state-of-the-art results to automatically generate floor plans scaled to real-world metric dimensions and to reconstruct scenes not necessarily limited to the Manhattan World assumption. In contrast to previous works, we introduce an interactive method based on statistical indicators for refining wall orientations, together with a specialized merging algorithm for building the final room shapes. The low CPU cost of the method makes full execution on commodity smartphones possible, without the need to connect them to a compute server. We demonstrate the effectiveness of our technique on a variety of multi-room indoor scenes, achieving markedly better results than previous approaches.
Toward Automated Spatial Change Analysis of MEP Components Using 3D Point Clouds and As-Designed BIM Models
V. Kalasapudi, Y. Turkan, P. Tang
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.105

The architectural, engineering, construction, and facilities management (AEC-FM) industry is going through a transformative phase as it adopts new technologies and tools into its change management practices. The industry has embraced Building Information Modeling (BIM) and three-dimensional (3D) laser scanning for tracking changes across the whole lifecycle of building and infrastructure projects, from planning through design and construction to facilities management. One challenge of using these technologies in change management is the difficulty of reliably detecting changes in densely located objects, such as Mechanical, Electrical, and Plumbing (MEP) components in building systems. This paper presents a novel relational-graph-based framework for automated spatial change analysis of MEP components. The framework extracts objects and spatial relationships from 3D laser-scanned point clouds and uses the relational structures of the objects to fuse the 3D data with the as-designed BIM. The authors validated the proposed change analysis approach using data acquired from real building construction sites.
A Methodology for Creating Large Scale Reference Models with Known Uncertainty for Evaluating Imaging Solution
M. Drouin, J. Beraldin, L. Cournoyer, D. MacKinnon, G. Godin, J. Fournier
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.104

We propose a methodology for acquiring reference models with known uncertainty of complex building-sized objects. These can be used to quantitatively evaluate the performance of passive 3D reconstruction at large scale. The proposed methodology combines a time-of-flight scanner, a laser tracker, spherical artifacts, and contrast targets. To demonstrate the soundness of the approach, we built a reference model of the exterior walls and courtyards of a 130 m × 55 m × 20 m building. The expanded uncertainty and the spatial resolution of the 3D reference model were calculated.
3D Tracking of Multiple Objects with Identical Appearance Using RGB-D Input
C. Ren, V. Prisacariu, O. Kähler, I. Reid, D. W. Murray
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.39

Most current approaches to 3D object tracking rely on distinctive object appearances. While several such trackers can be instantiated to track multiple objects independently, this not only neglects the fact that objects should not occupy the same space in 3D, but also fails when objects have highly similar or identical appearances. In this paper we develop a probabilistic graphical model that accounts for similarity and proximity and leads to robust real-time tracking of multiple objects from RGB-D data, without recourse to bolt-on collision detection.
Structured Representation of Non-Rigid Surfaces from Single View 3D Point Tracks
Charles Malleson, M. Klaudiny, Jean-Yves Guillemaut, A. Hilton
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.13

This work considers the problem of building a structured representation of dynamic surfaces from incomplete 3D point tracks captured from a single viewpoint. The surface is segmented into a set of connected regions, each of which can be represented by a fixed intrinsic shape and a parametrised rigid/non-rigid motion trajectory. Neither the model parameters nor the point-to-model assignments are known up front. Motion and geometric shape parameters are estimated in alternation with a graph-cuts-based point-to-model assignment. This modelling process facilitates in-filling of missing data and de-noising of measurements through temporal integration, while adding meaningful structure to the geometry and reducing storage cost by an order of magnitude. Experiments on real and synthetic sequences validate the approach and show how a single tuning parameter can be used to trade modelling error against extrapolation level and storage cost.
Efficient Multi-view Performance Capture of Fine-Scale Surface Detail
Nadia Robertini, Edilson de Aguiar, Thomas Helten, C. Theobalt
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.46

We present a new and effective way to capture the performance of deforming meshes with fine-scale, time-varying surface detail from multi-view video. Our method builds on coarse 4D surface reconstructions, as obtained with commonly used template-based methods. Since these capture only coarse-to-medium-scale detail, fine-scale deformation detail is often recovered in a second pass using stereo constraints, features, or shading-based refinement. In this paper, we propose a new, effective, and stable solution to this second step. Our framework creates an implicit representation of the deformable mesh using a dense collection of 3D Gaussian functions on the surface, together with a set of 2D Gaussians for the images. The fine-scale deformation of all mesh vertices that maximizes photo-consistency can be found efficiently by densely optimizing a new model-to-image consistency energy over all vertex positions. A principal advantage is that our problem formulation yields a smooth, closed-form energy with implicit occlusion handling and analytic derivatives. Error-prone correspondence finding and discrete sampling of surface displacement values are not needed. We show several reconstructions of human subjects wearing loose clothing, and we demonstrate qualitatively and quantitatively that we robustly capture more detail than related methods.
Placeless Place-Recognition
Simon Lynen, M. Bosse, P. Furgale, R. Siegwart
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.36

Place recognition is a core competency for any visual simultaneous localization and mapping system. Identifying previously visited places enables the creation of globally accurate maps, robust relocalization, and multi-user mapping. To match one place to another, most state-of-the-art approaches must decide a priori what constitutes a place, often in terms of how many consecutive views should overlap or how many consecutive images should be considered together. Unfortunately, dependence on such thresholds limits generality across different types of scenes. In this paper, we present a placeless place-recognition algorithm using a novel vote-density estimation technique that avoids heuristically discretizing the space. Instead, our approach treats place recognition as continuous matching between image streams, automatically discovering regions of high vote density that represent overlapping trajectory segments. The resulting algorithm has a single free parameter, and all remaining thresholds are set automatically using well-studied statistical tests. We demonstrate the efficiency and accuracy of our methodology on three outdoor sequences: a comprehensive evaluation against ground truth from publicly available datasets shows that our approach outperforms several state-of-the-art place-recognition algorithms.
Efficient Colorization of Large-Scale Point Cloud Using Multi-pass Z-Ordering
Sunyoung Cho, Jizhou Yan, Y. Matsushita, H. Byun
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.33

We present an efficient colorization method for large-scale point clouds using multi-view images. To address the practical issues of noisy camera parameters and color inconsistencies across multi-view images, our method takes an optimization approach to achieving visually pleasing point cloud colorization. We introduce a multi-pass Z-ordering technique that efficiently imposes a graph structure on a large-scale, unordered set of 3D points, and we use this graph structure to optimize the point colors to be assigned. Our technique defines minimal but sufficient connectivity among 3D points, so that the optimization can exploit sparsity to solve the problem efficiently. We demonstrate the effectiveness of our method on synthetic datasets and large-scale real-world data, in comparison with other graph construction techniques.
Multistage SFM: Revisiting Incremental Structure from Motion
R. Shah, A. Deshpande, P J Narayanan
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.95

In this paper, we present a new multistage approach to SfM reconstruction of a single component. Our method begins by building a coarse 3D reconstruction using only the high-scale features of the given images. This step uses a fraction of the features and is fast. We then enrich the model in stages by localizing the remaining images to it and by matching and triangulating the remaining features. Unlike traditional incremental SfM, the localization and triangulation steps in our approach are made efficient and embarrassingly parallel by using the geometry of the coarse model. The coarse model lets us register the remaining images with direct localization techniques based on 3D-2D correspondences. We further exploit the coarse geometry to reduce the pairwise image-matching effort and to perform fast guided feature matching for the majority of features. Our method produces models of similar quality to incremental SfM methods while being notably faster and parallel: it can reconstruct a 1000-image dataset in 15 hours using a single core, in about 2 hours using 8 cores, and in a few minutes by exploiting the full parallelism of about 200 cores.
Kinect Deform: Enhanced 3D Reconstruction of Non-rigidly Deforming Objects
Hassan Afzal, Kassem Al Ismaeil, Djamila Aouada, F. Destelle, B. Mirbach, B. Ottersten
2014 2nd International Conference on 3D Vision. DOI: 10.1109/3DV.2014.114

In this work we propose KinectDeform, an algorithm targeting enhanced 3D reconstruction of scenes that contain non-rigidly deforming objects. It extends the existing class of algorithms, which either handle scenes with rigid objects only, allow for very limited non-rigid deformations, or rely on precomputed templates for tracking. KinectDeform combines a fast non-rigid scene-tracking algorithm, based on an octree data representation and hierarchical voxel associations, with a recursive data-filtering mechanism. We analyze its performance on both real and simulated data and show improved results in terms of smooth, feature-preserving 3D reconstructions with reduced noise.