{"title":"Large-Scale 3-D Building Reconstruction in LoD2 From ALS Point Clouds","authors":"Gefei Kong;Chaoquan Zhang;Hongchao Fan","doi":"10.1109/LGRS.2024.3514514","DOIUrl":null,"url":null,"abstract":"Large-scale 3-D building models are a fundamental data of many research and applications. The automatic reconstruction of these 3-D models in LoD2 garners much attention and many automatic methods have been proposed. However, most existing solutions require multiple and complicated substeps for reconstructing the structure of a single building. Meanwhile, most of them have not been applied to large-scale reconstruction to better support the practical applications. Furthermore, some of them rely on the input point clouds with building classification information, thereby affecting their generalization. To resolve these issues, in this letter, we propose a workflow to fully automatically reconstruct large-scale 3-D building models in LoD2. This workflow takes airborne laser scanning (ALS) point clouds as input and uses building footprints and digital terrain model (DTM) as assistance. LoD2 3-D building models are reconstructed by a three-module pipeline: 1) building and roof segmentation; 2) 3-D roof reconstruction; and 3) final top–down extrusion with terrain information. By proposing hybrid deep-learning-based and rule-based methods for the first two modules, we ensure the accurate structure output of reconstruction results as much as possible. The experimental results on point clouds covering the whole city of Trondheim, Norway, indicate that the proposed workflow can effectively reconstruct large-scale 3-D building models in LoD2 with the acceptable RMSE.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"22 ","pages":"1-5"},"PeriodicalIF":0.0000,"publicationDate":"2024-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10787123/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Large-scale 3-D building models are fundamental data for much research and many applications. The automatic reconstruction of these 3-D models in LoD2 has garnered much attention, and many automatic methods have been proposed. However, most existing solutions require multiple, complicated substeps to reconstruct the structure of a single building. Meanwhile, most of them have not been applied to large-scale reconstruction that would better support practical applications. Furthermore, some of them rely on input point clouds that already carry building classification information, which limits their generalization. To resolve these issues, in this letter, we propose a workflow to fully automatically reconstruct large-scale 3-D building models in LoD2. This workflow takes airborne laser scanning (ALS) point clouds as input and uses building footprints and a digital terrain model (DTM) as assistance. LoD2 3-D building models are reconstructed by a three-module pipeline: 1) building and roof segmentation; 2) 3-D roof reconstruction; and 3) final top-down extrusion with terrain information. By proposing hybrid deep-learning-based and rule-based methods for the first two modules, we ensure the accurate structure output of the reconstruction results as much as possible. The experimental results on point clouds covering the whole city of Trondheim, Norway, indicate that the proposed workflow can effectively reconstruct large-scale 3-D building models in LoD2 with acceptable RMSE.
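The following Python sketch only illustrates the shape of the three-module pipeline named in the abstract (segmentation, roof reconstruction, terrain-aware extrusion). It is a minimal, hypothetical stand-in: the function names, data layouts, and the crude footprint-clipping and flat-roof placeholders are assumptions for illustration and do not reproduce the paper's hybrid deep-learning-based and rule-based methods.

```python
# Hypothetical sketch of a footprint-assisted LoD2 pipeline, not the paper's method.
import numpy as np
from matplotlib.path import Path


def segment_building_points(points: np.ndarray, footprint: np.ndarray) -> np.ndarray:
    """Module 1 (simplified): keep ALS points whose XY position lies inside the footprint.

    The paper segments buildings and roof structures with hybrid deep-learning and
    rule-based methods; footprint clipping is only a crude placeholder for that step.
    """
    mask = Path(footprint).contains_points(points[:, :2])
    return points[mask]


def reconstruct_roof(points: np.ndarray) -> dict:
    """Module 2 (simplified): derive a single roof surface from the segmented points.

    A real LoD2 reconstruction fits multiple roof planes and intersects them; here we
    only report one roof height (the median Z) as a placeholder.
    """
    return {"roof_height": float(np.median(points[:, 2]))}


def extrude_building(footprint: np.ndarray, roof: dict, ground_height: float) -> dict:
    """Module 3 (simplified): top-down extrusion from the roof down to the DTM ground level."""
    return {
        "footprint": footprint,
        "base_height": ground_height,
        "roof_height": roof["roof_height"],
        "wall_height": roof["roof_height"] - ground_height,
    }


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic ALS points (x, y, z) and a square footprint; in the real workflow the
    # ground height would be sampled from the DTM at the footprint location.
    als_points = np.column_stack([rng.uniform(0, 20, 500),
                                  rng.uniform(0, 20, 500),
                                  rng.normal(12.0, 0.2, 500)])
    footprint = np.array([[5.0, 5.0], [15.0, 5.0], [15.0, 15.0], [5.0, 15.0]])
    ground_height = 3.0  # assumed DTM value for this building

    building_pts = segment_building_points(als_points, footprint)
    roof = reconstruct_roof(building_pts)
    model = extrude_building(footprint, roof, ground_height)
    print(model)
```

Running the sketch prints a simple block model (footprint, base height, roof height, wall height); the actual workflow instead outputs structured LoD2 roof geometry per building across the whole city.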