TRSP: Texture reconstruction algorithm driven by prior knowledge of ground object types

Zhendong Liu, Liang Zhai, Jie Yin, Xiaoli Liu, Shilong Zhang, Dongyang Wang, Abbas Rajabifard, Yiqun Chen

ISPRS Journal of Photogrammetry and Remote Sensing, Volume 223, Pages 221-243. Published 2025-03-19. DOI: 10.1016/j.isprsjprs.2025.03.015
Citations: 0
Abstract
Texture reconstruction algorithms use multiview images and 3D geometric surface models as data sources, establishing the mapping relationship and texture consistency constraints between 2D images and 3D geometric surfaces to produce a 3D surface model with realistic color. Existing algorithms still struggle with texture quality when faced with dynamic scenes containing complex outdoor features and varying lighting conditions. In this paper, a texture reconstruction algorithm driven by prior knowledge of ground object types is proposed. First, a multiscale, multifactor joint screening strategy is constructed to generate occlusion-aware sparse key scenes. Second, globally consistent 3D semantic mapping rules and semantic similarity measures are proposed; the multiview 2D image semantic segmentation results are refined, fused, and mapped into 3D semantic category information. Then, the 3D model's semantic information is introduced to construct an energy function encoding prior knowledge of the ground objects, and the color along texture block boundaries is adjusted. Experimental verification and analysis are conducted on public and real-world datasets. Compared with well-known algorithms such as Allene, Waechter, and OpenMVS, the proposed algorithm reduces the core texture quality indicators by 57.14 %, 53.24 %, and 50.69 %, respectively, and performs best in terms of clarity and contrast of texture details; the effective culling rate of moving objects is about 80 %–88.9 %, the texture mapping is cleaner, and redundant computation is significantly reduced.
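To make the pipeline described in the abstract more concrete, the sketch below illustrates (in simplified form, not the authors' implementation) two of the ideas: a multi-factor score that keeps only a sparse set of occlusion-free key views per face, and a data term that adds a semantic-prior penalty when the candidate image region is labeled as a dynamic object or disagrees with the face's fused 3D label. All weights, class names, and function names here are assumptions introduced for illustration.

```python
import numpy as np

# Hypothetical semantic classes treated as dynamic objects to be culled.
DYNAMIC_LABELS = {"car", "pedestrian", "cyclist"}

def view_score(face_normal, view_dir, resolution, occluded,
               w_angle=0.6, w_res=0.4):
    """Multi-factor score for one (face, view) pair; higher is better.
    Combines viewing-angle quality with image resolution and rejects
    occluded views outright."""
    if occluded:
        return -np.inf
    cos_theta = float(np.clip(-np.dot(face_normal, view_dir), 0.0, 1.0))
    return w_angle * cos_theta + w_res * resolution

def select_key_views(scores, keep=3):
    """Retain only a sparse subset of the best-scoring views per face."""
    order = np.argsort(scores)[::-1]
    return [int(i) for i in order[:keep] if np.isfinite(scores[i])]

def labeling_cost(photo_cost, view_label, face_label, dynamic_penalty=10.0):
    """Energy data term: photometric cost plus a semantic-prior penalty that
    discourages texturing a face from an image region whose 2D label is a
    dynamic object or disagrees with the face's fused 3D label."""
    cost = photo_cost
    if view_label in DYNAMIC_LABELS:
        cost += dynamic_penalty
    if view_label != face_label:
        cost += 1.0  # mild penalty for semantic disagreement
    return cost

def blend_seam_color(color_a, color_b):
    """Adjust texture-block boundary color by averaging the two patches that
    meet at the seam (a simple stand-in for the paper's color adjustment)."""
    return 0.5 * (np.asarray(color_a, float) + np.asarray(color_b, float))

if __name__ == "__main__":
    normal = np.array([0.0, 0.0, 1.0])            # upward-facing roof face
    view_dirs = [np.array([0.0, 0.0, -1.0]),      # nadir view
                 np.array([0.7, 0.0, -0.7]),      # oblique view
                 np.array([1.0, 0.0, 0.0])]       # grazing view, also occluded
    scores = np.array([view_score(normal, d, res, occ)
                       for d, res, occ in zip(view_dirs,
                                              [1.0, 0.8, 0.9],
                                              [False, False, True])])
    print("key views:", select_key_views(scores, keep=2))
    print("data term:", labeling_cost(0.2, "car", "road"))
    print("seam color:", blend_seam_color([200, 180, 160], [190, 185, 150]))
```

In this toy setup the occluded grazing view is discarded, the nadir and oblique views are kept as key views, and a candidate image region labeled "car" for a face fused as "road" receives a large penalty, mimicking how a semantic prior can steer texture selection away from moving objects.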
About the journal:
The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) serves as the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It acts as a platform for scientists and professionals worldwide who work in the various disciplines that use photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal aims to facilitate communication and dissemination of advancements in these disciplines, while also serving as a comprehensive reference source and archive.
P&RS endeavors to publish high-quality, peer-reviewed research papers that are preferably original and have not been published before. These papers can cover scientific/research, technological development, or application/practical aspects. Additionally, the journal welcomes papers that are based on presentations from ISPRS meetings, as long as they are considered significant contributions to the aforementioned fields.
In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.