Data quality analysis after hyperspectral LiDAR sequentially mapping trees
Shao Dong, Yi Lin
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-49-2024
Abstract. Light detection and ranging (LiDAR), as an innovative remote sensing tool, captures not only target reflectance but also its morphological parameters. Traditional single/multi-band LiDAR and multispectral LiDAR (MSL) are presently employed in applications such as 3D modeling and plant biochemical parameter inversion, albeit with limited effectiveness. In contrast, hyperspectral LiDAR (HSL), distinguished by its expanded array of spectral detection channels and enhanced spectral resolution, has proven more effective in meeting these requirements and also exhibits superior capabilities in both feature and land cover classification tasks. Nevertheless, point clouds acquired through HSL frequently exhibit quality deficiencies, including uneven density and excessive noise. Meanwhile, there is a notable absence of technical specifications and operational standards governing measurement protocols for HSL systems worldwide. To address this gap, this study constructs a systematic framework for analysing data quality in hyperspectral point clouds and qualitatively analyses 30 tree point clouds continuously scanned with the Finnish Geospatial Research Institute (FGI) 8-band hyperspectral laser scanner. Furthermore, this research validates the theoretical feasibility of employing the 8-band HSL system for inverting leaf chlorophyll content. Apart from detecting the time-varying patterns of reflectance within birch canopy point clouds, the results also pinpoint the band exhibiting the highest noise level of the HSL system, demonstrating the efficacy of the proposed quality analysis methodology. The work presented in this study can serve as a cornerstone for advancing hyperspectral LiDAR across a diverse array of related remote sensing and Earth observation applications.
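As a rough illustration of the kind of per-band noise screening described above, the sketch below estimates each band's relative noise from repeated scans of the same points and flags the noisiest one. The array layout, the coefficient-of-variation metric and all numbers are assumptions for demonstration, not the authors' framework.

```python
import numpy as np

def per_band_noise(reflectance_stack):
    """Estimate relative noise per spectral band from repeated scans.

    reflectance_stack: array of shape (n_scans, n_points, n_bands) holding
    reflectance of the same matched points across sequential scans
    (a hypothetical layout; the paper's actual data structure may differ).
    """
    # Temporal mean and standard deviation of each point in each band
    mean_refl = reflectance_stack.mean(axis=0)          # (n_points, n_bands)
    std_refl = reflectance_stack.std(axis=0, ddof=1)    # (n_points, n_bands)
    # Coefficient of variation, averaged over points, as a simple noise proxy
    cv = np.nanmean(std_refl / np.clip(mean_refl, 1e-6, None), axis=0)
    return cv                                           # one value per band

# Example: 30 scans, 10k matched points, 8 bands of synthetic data
stack = np.random.normal(0.4, 0.02, size=(30, 10000, 8))
stack[..., 5] += np.random.normal(0, 0.08, size=(30, 10000))  # noisier band
noise = per_band_noise(stack)
print("Band with highest relative noise:", int(np.argmax(noise)))
```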
Large-scale DSM registration via motion averaging
Ningli Xu, Rongjun Qin
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-275-2024
Abstract. Generating wide-area digital surface models (DSMs) requires registering a large number of individual, partially overlapping DSMs. This presents a challenging problem for a typical registration algorithm, since considering the large number of observations from these multiple DSMs simultaneously can easily cause memory overflow. Sequential registration algorithms, although they can significantly reduce the computation, are especially vulnerable to pairs with small overlap, leading to large error accumulation. In this work, we propose a novel solution that formulates the DSM registration task as a motion averaging problem: pair-wise DSMs are registered to build a scene graph, with edges representing relative poses between DSMs. Specifically, based on the grid structure of the large DSM, the pair-wise registration is performed using a novel nearest neighbor search method. We show that the scene graph can be optimized via an extremely fast motion averaging algorithm with O(N) complexity (N refers to the number of images). Evaluation on high-resolution satellite-derived DSMs demonstrates significant improvements in computation and accuracy.
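To make the motion-averaging idea concrete, here is a minimal translation-only sketch: pairwise offsets between overlapping DSMs form the edges of a scene graph, and a least-squares solve recovers a consistent absolute offset per DSM. It ignores rotation, uses a dense solver, and all function names and values are illustrative; the paper's algorithm is far more general and efficient.

```python
import numpy as np

def average_translations(n_dsms, pairwise):
    """Translation-only motion averaging sketch.

    pairwise: list of (i, j, t_ij) where t_ij is the measured 3-vector
    offset taking DSM i into DSM j's frame. DSM 0 is fixed as reference.
    Solves min sum ||t_j - t_i - t_ij||^2 with dense least squares
    (illustrative only; a real system would exploit sparsity).
    """
    rows, rhs = [], []
    for i, j, t_ij in pairwise:
        for axis in range(3):
            row = np.zeros(3 * n_dsms)
            row[3 * j + axis] = 1.0
            row[3 * i + axis] = -1.0
            rows.append(row)
            rhs.append(t_ij[axis])
    # Gauge constraint: pin DSM 0 at the origin
    for axis in range(3):
        row = np.zeros(3 * n_dsms)
        row[axis] = 1.0
        rows.append(row)
        rhs.append(0.0)
    A, b = np.asarray(rows), np.asarray(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_dsms, 3)   # absolute offset of every DSM

edges = [(0, 1, np.array([1.0, 0.0, 0.2])),
         (1, 2, np.array([0.9, 0.1, 0.0])),
         (0, 2, np.array([2.0, 0.1, 0.2]))]
print(average_translations(3, edges))
```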
A method for detecting photovoltaic panel faults using a drone equipped with a multispectral camera
Ran Duan, Zhenling Ma
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-59-2024
Abstract. Photovoltaic power stations utilizing solar energy have grown in scale, resulting in increased operation and maintenance requirements. Efficient inspection of components within these stations is crucial. However, the large area covered by photovoltaic power generation, coupled with a substantial number of photovoltaic panels and complex geographical environments, renders manual inspection methods highly inefficient and inadequate for modern photovoltaic power stations. To address this issue, this paper proposes a method and system for hot spot detection on photovoltaic panels using unmanned aerial vehicles (UAVs) equipped with multispectral cameras. The UAVs capture visible and infrared images of the photovoltaic power plant, which are then processed photogrammetrically to determine imaging position and attitude. The infrared images are stitched together using this information, forming a georeferenced overall image. Hot spot detection is performed on the infrared images, enabling the identification of faulty photovoltaic panels and facilitating efficient inspection and maintenance. Experimental trials were conducted at a photovoltaic power station in Qingyuan, Guangdong Province, China. The results demonstrate the effectiveness of the proposed method in accurately detecting panels with hot spot faults.
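A minimal sketch of the hot-spot step on a single thermal frame, assuming hot spots appear as connected groups of pixels well above the panel background temperature; the threshold rule, parameter values and `detect_hot_spots` helper are illustrative, since the abstract does not specify the detector.

```python
import numpy as np
from scipy import ndimage

def detect_hot_spots(thermal, k=3.0, min_pixels=5):
    """Flag hot-spot candidates in a thermal image of a PV array.

    A simple statistical threshold (mean + k * std) stands in for the
    paper's detector, which is not specified in the abstract.
    """
    threshold = thermal.mean() + k * thermal.std()
    mask = thermal > threshold
    labels, n = ndimage.label(mask)                 # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    centroids = ndimage.center_of_mass(mask, labels, keep) if keep else []
    return centroids                                 # (row, col) per hot spot

# Synthetic 100x100 thermal frame with one injected hot spot
frame = np.random.normal(35.0, 0.5, size=(100, 100))
frame[40:44, 60:64] += 12.0
print(detect_hot_spots(frame))
```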
A novel LiDAR-GNSS-INS Two-Phase Tightly Coupled integration scheme for precise navigation
Mengchi Ai, Ilyar Asl Sabbaghian Hokmabad, M. Elhabiby, N. El-Sheimy
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-1-2024
Abstract. Recent advances in precise navigation have extensively utilized the integration of the Global Navigation Satellite System (GNSS) and the Inertial Navigation System (INS), particularly in the domain of intelligent vehicles. However, the efficacy of such navigation systems is considerably compromised by reflection and multipath disruptions of non-line-of-sight (NLOS) signals. Light Detection and Ranging (LiDAR)-based odometry, built on an active perception sensor known for its precise 3D measurements, has become increasingly prevalent in augmenting navigation systems. Nonetheless, the assimilation of LiDAR odometry with GNSS/INS systems presents substantial challenges. Addressing these challenges, this study introduces a two-phase sensor fusion (TPSF) approach that combines GNSS positioning, LiDAR odometry, and IMU pre-integration through a dual-stage sensor fusion process. The initial stage employs an Extended Kalman Filter (EKF) to fuse the GNSS solution with IMU mechanization, facilitating the estimation of IMU biases and system initialization. Subsequently, the second stage integrates scan-to-map LiDAR odometry with IMU mechanization to support continuous LiDAR factor estimation. Factor graph optimization (FGO) is then utilized for the comprehensive fusion of LiDAR factors, IMU pre-integration, and GNSS solutions. The efficacy of the proposed methodology is corroborated through rigorous testing on a demanding trajectory from an urban open-source dataset, with the system demonstrating a notable enhancement in performance compared to state-of-the-art algorithms, achieving a translational standard deviation (STD) of 1.269 meters.
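For readers unfamiliar with the first fusion stage, the snippet below shows a bare-bones Kalman measurement update that fuses a GNSS position fix into a position/velocity state, which is the flavour of operation an EKF-based GNSS/IMU integration performs at each fix. The state layout, noise values and `ekf_gnss_update` helper are assumptions; the authors' tightly coupled two-phase system is substantially more involved.

```python
import numpy as np

def ekf_gnss_update(x, P, z_gnss, R):
    """Minimal Kalman measurement update fusing a GNSS position fix.

    x: state [px, py, pz, vx, vy, vz]; P: 6x6 covariance; z_gnss: 3-vector
    position; R: 3x3 GNSS noise covariance. A toy stand-in for the first
    (EKF) stage described in the abstract, not the authors' implementation.
    """
    H = np.hstack([np.eye(3), np.zeros((3, 3))])   # position-only observation
    y = z_gnss - H @ x                             # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

x = np.zeros(6)
P = np.eye(6) * 10.0
x, P = ekf_gnss_update(x, P, np.array([1.2, -0.4, 0.1]), np.eye(3) * 0.25)
print(x[:3])
```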
Preface: ISPRS Technical Commission I Midterm Symposium on “Intelligent Sensing and Remote Sensing Application”
Xinming Tang, A. Tommaselli, Tao Zhang, Junfeng Xie
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-321-2024
Abstract. The ISPRS Technical Commission I Midterm Symposium on "Intelligent Sensing and Remote Sensing Application" was held in Changsha, China, during May 13–17, 2024, aiming to provide a platform to share the latest research, advanced technologies and application experience, to discuss future developments, and to seek international cooperation in various forms. The Symposium received 229 full papers and abstracts; among them, 45 double-blind peer-reviewed full papers were published in the ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, and 165 papers accepted through abstract review were published in the International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. These papers are mostly dedicated to the topics of the 8 TC I Working Groups and 3 Inter-Commission Working Groups, including Satellite Missions and Constellations for Remote Sensing; Mobile Mapping Technology; Multispectral, Hyperspectral and Thermal Sensors; LiDAR, Laser Altimetry and Sensor Integration; Microwave and InSAR Technology for Earth Observation; Orientation, Calibration and Validation of Sensors; Data Quality and Benchmark of Sensors; Multi-sensor Modelling and Cross-modality Fusion; Robotics for Mapping and Machine Intelligence; Autonomous Sensing Systems and their Applications; Digital Construction: Reality Capture, Automated Inspection and Integration to BIM; Point Cloud Generation and Processing; Artificial Intelligence Technology Related to Sensor Systems; and Multi-sensor Remote Sensing Applications. These papers present the latest trends in sensor systems. The full papers and abstracts were reviewed by the members of the Symposium Scientific Committee, comprised of Working Group officers and invited experts. We would like to take this opportunity to express our great gratitude to the Scientific Committee, Local Organizing Committee, Sponsors, Exhibitors and all those who have contributed to this successful Symposium. We also want to thank the authors for their excellent papers and presentations.
Tang Xinming, Antonio Maria Garcia Tommaselli, Zhang Tao, Xie Junfeng
ISPRS Technical Commission I on Sensor Systems
May 2024, Changsha, China
Georeferencing of Satellite Images with Geocoded Image Features
Yating Zhang, Heyi Li, Jing Yu, Pengjie Tao
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-313-2024
Abstract. Currently, using a digital orthophoto map (DOM) and digital elevation model (DEM) as reference to achieve geometric positioning of newly acquired satellite images has become a popular photogrammetric approach. However, this method relies on DOM and DEM data, which require a lot of storage space in practical applications. In addition, for geometric positioning of satellite images, only sparse image feature points are needed as control points. Consequently, for the sake of convenience, the compression of control data emerges as a necessity with significant practical implications. This paper investigates a "cloud control" photogrammetry method based on geocoded image features. The method extracts SIFT feature points from DOMs and obtains their ground coordinates, then constructs a geocoded image feature library to replace DOM and DEM data as control, thus realizing the compression of control data. Experiments conducted on Tianhui-1, Ziyuan-3 and Gaofen-2 satellite images demonstrate that the proposed method can achieve high-precision geometric positioning of satellite images and greatly reduce the size of the control data. Specifically, with the reduction of the reference data from 180–1248 MB of 2 m DOM and 30 m DEM to 5–10 MB of geocoded image features, the geopositioning accuracies of the test Tianhui-1, Ziyuan-3 and Gaofen-2 images are improved from 3.12 pixels to 1.74 pixels, from 3.69 pixels to 1.09 pixels, and from 150.93 pixels to 2.67 pixels, respectively.
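The sketch below illustrates the core idea of a geocoded feature library: detect SIFT keypoints on a DOM, then attach ground coordinates from the DOM geotransform and a DEM sample. The simplified north-up geotransform, nearest-neighbour DEM sampling and `build_geocoded_feature_library` helper are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
import cv2  # requires opencv-python >= 4.4 for SIFT_create

def build_geocoded_feature_library(dom_gray, dem, geotransform):
    """Sketch of turning a DOM + DEM into a compact geocoded feature library.

    geotransform: (origin_x, pixel_size_x, origin_y, pixel_size_y), a
    simplified north-up transform assumed here for illustration.
    Returns descriptors plus the ground (X, Y, Z) of each keypoint.
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(dom_gray, None)
    ox, dx, oy, dy = geotransform
    ground = []
    for kp in keypoints:
        col, row = kp.pt
        X = ox + col * dx
        Y = oy - row * dy
        # Nearest-neighbour DEM sample for the height (bilinear would be finer)
        Z = dem[min(int(round(row)), dem.shape[0] - 1),
                min(int(round(col)), dem.shape[1] - 1)]
        ground.append((X, Y, Z))
    return descriptors, np.asarray(ground)

dom = (np.random.rand(256, 256) * 255).astype(np.uint8)
dem = np.full((256, 256), 50.0, dtype=np.float32)
desc, xyz = build_geocoded_feature_library(dom, dem, (500000.0, 2.0, 3300000.0, 2.0))
print(0 if desc is None else len(desc), "features geocoded")
```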
Structural Analysis of Glazed Tubular Tiles of Oriental Architectures Based on 3D Point Clouds for Cultural Heritage
Ting On Chan, Yibo Ling, Yuli Wang, Kin Sum Li, Jing Shen
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-19-2024
Abstract. Laser scanning, along with its resultant 3D point clouds, constitutes a prevalent method for the documentation of cultural heritage. This paper introduces a novel workflow for the structural analysis of the glazed tubular tiles that adorn the roofs of historical buildings in the Orient, utilizing 3D point clouds. The workflow integrates a robust segmentation algorithm based on the maximum principal curvature and normal vectors. Moreover, clustering algorithms, including DBSCAN, are incorporated to refine the clusters and thus increase segmentation accuracy. Structural analysis is enabled by cylindrical model fitting, which allows for the estimation of parameters and residuals. While the results exhibit commendable performance in individual tile segmentation, it is imperative to address the impact of substantial variations in scanning range and incidence angle before engaging in the structural fitting process. The experimental results demonstrate that, under significantly large scanning angles, the root mean square error (RMSE) for inadequately fitted tiles can reach 0.066 m, more than twice the RMSE observed for well-fitted tiles. The proposed workflow proves to be applicable and exhibits significant potential to advance practices in cultural heritage documentation.
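To give a feel for the clustering-plus-cylinder-fitting step, the sketch below separates tile points with DBSCAN and approximates each cluster's cylinder by taking the dominant principal direction as the axis and the spread of radial distances as the fit residual. This PCA-based surrogate, the parameter values and the synthetic tile are assumptions; the paper uses curvature-based segmentation and a proper cylindrical model fit.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cylinder_fit_rmse(points):
    """Approximate cylinder fit for one tile cluster.

    The axis is taken as the first principal direction of the points and the
    radius as the mean radial distance from that axis; the RMSE of the radial
    residuals is returned. A rough surrogate for least-squares cylinder fitting.
    """
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                                    # dominant direction
    radial = centered - np.outer(centered @ axis, axis)
    r = np.linalg.norm(radial, axis=1)
    radius = r.mean()
    return radius, float(np.sqrt(np.mean((r - radius) ** 2)))

def segment_and_fit(cloud, eps=0.05, min_samples=30):
    """Cluster a roof-tile point cloud with DBSCAN, then fit each cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(cloud)
    results = {}
    for lab in set(labels) - {-1}:                  # -1 marks noise points
        radius, rmse = cylinder_fit_rmse(cloud[labels == lab])
        results[lab] = (radius, rmse)
    return results

# Synthetic half-cylinder tile: radius 0.08 m, 0.4 m long, light noise
theta = np.random.uniform(0, np.pi, 2000)
z = np.random.uniform(0, 0.4, 2000)
cloud = np.c_[0.08 * np.cos(theta), 0.08 * np.sin(theta), z]
cloud += np.random.normal(0, 0.002, cloud.shape)
print(segment_and_fit(cloud))
```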
Sentinel 1a-2a Incorporating an Object-Based Image Analysis Method for Flood Mapping and Extent Assessment
Donya Azhand, S. Pirasteh, Masood Varshosaz, H. Shahabi, Salimeh Abdollahabadi, Hossein Teimouri, Mojtaba Pirnazar, Xiuqing Wang, Weilian Li
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-7-2024
Abstract. This study presents flood extent extraction and mapping from Sentinel images. We propose an algorithm for extracting flooded areas through object-based image analysis (OBIA) of Sentinel-1A and Sentinel-2A images, mapping and assessing the flood extent from the beginning of the event to one week after it. Multi-scale parameters were used in OBIA for image segmentation. First, we identified the flooded regions by applying the proposed algorithm to the Sentinel-1A data. Then, to evaluate the effects of the flood on each land-use/land-cover (LULC) class, the post-event Sentinel-2A images were classified using OBIA. In addition, we compared the proposed OBIA-based algorithm with a threshold method to assess its efficiency in computing the parameters for change detection and flood extent mapping. The findings revealed the best segmentation performance, with an Object Fitness Index (OFI) of 0.92, when a scale parameter of 60 was applied. The results also show that 2099.4 km2 of the study area was flooded at the beginning of the flood. Furthermore, we found that the most flooded LULC classes are agricultural land and orchards, with 695.28 km2 (32.4%) and 708.63 km2 (33.7%), respectively. The remaining flooded area, about 33.9%, occurred in the other classes (i.e., fish farms, built-up areas, bare land and water bodies). The resulting objects of each scale parameter were evaluated by the Object Pureness Index (OPI), Object Matching Index (OMI), and OFI. Finally, the Overall Accuracy (OA), assessed against field data collected with the Global Positioning System (GPS), is 93%, 90%, and 89% for the LULC classification, the flood map produced with the proposed algorithm, and the threshold method, respectively.
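For reference, the baseline threshold method mentioned above can be sketched in a few lines: open water appears dark in SAR backscatter, so pixels below a fixed dB threshold are flagged as flooded and converted to an area. The threshold value, pixel size and `threshold_flood_mask` helper are illustrative assumptions, not values from the study.

```python
import numpy as np

def threshold_flood_mask(sigma0_db, threshold_db=-16.0, pixel_size_m=10.0):
    """Baseline threshold flood extraction on Sentinel-1 backscatter.

    Pixels darker than a backscatter threshold (illustrative value) are
    flagged as flooded. Returns the mask and the flooded area in km^2.
    """
    mask = sigma0_db < threshold_db
    area_km2 = mask.sum() * (pixel_size_m ** 2) / 1e6
    return mask, area_km2

scene = np.random.normal(-10.0, 2.0, size=(1000, 1000))             # dry land
scene[:300, :400] = np.random.normal(-20.0, 1.5, size=(300, 400))   # flooded patch
mask, area = threshold_flood_mask(scene)
print(f"Flooded area: {area:.1f} km^2")
```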
On-Orbit Geometric Calibration of the HJ-2 A/B Satellites' Infrared Sensors
Hao Zhang, Wei Qin, Kaixin Wang, Qianying Wang, Pengjie Tao
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-305-2024
Abstract. With the advancement of China's satellite technology, the HuanJingJianZai-2 A/B (HJ-2 A/B) satellites, equipped with whisk-broom infrared sensors, represent a significant leap forward in environmental monitoring and Earth observation capabilities. This technological leap, however, introduces new challenges in calibration. The unique structure of the HJ-2 A/B infrared spectroradiometer (IRS) necessitates innovative calibration techniques, as traditional methods primarily focused on exterior orientation parameters (EOPs) and often overlooked the importance of interior orientation accuracy, which is essential for accurate multispectral band registration and color rendering. Addressing this gap, we introduce an innovative multi-focal-plane-array joint calibration method specifically designed for whisk-broom cameras. Our method involves selecting a master band from each focal plane array for accurate focal length calibration and deriving ground control points from image matching and altitude interpolation for comprehensive bundle adjustment. This adjustment refines EOPs and interior orientation parameters (IOPs), ensuring globally optimal EOPs and enhanced IOPs calibration stability. The application of our method to the HJ-2 A/B IRS yielded substantial improvements in georeferencing and band registration accuracies, surpassing traditional methods. This paper details the multi-focal-plane-array joint calibration method, describes the IRS and experimental setup, presents the experimental results, and concludes with the implications and potential applications of our findings.
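One simple way to check band-to-band registration of the kind reported here is phase correlation between two co-sized bands, as sketched below; the residual shift should be close to zero after a successful joint calibration. This is a generic check using OpenCV's `phaseCorrelate`, not the authors' evaluation protocol.

```python
import numpy as np
import cv2

def band_registration_offset(band_a, band_b):
    """Estimate residual mis-registration between two spectral bands by
    phase correlation (a generic check, not the paper's method).

    Inputs are co-sized single-band images; returns the (dx, dy) shift in
    pixels and the correlation response.
    """
    a = np.float32(band_a)
    b = np.float32(band_b)
    (dx, dy), response = cv2.phaseCorrelate(a, b)
    return dx, dy, response

base = np.random.rand(512, 512)
shifted = np.roll(base, shift=2, axis=1)          # simulate a 2-pixel offset
print(band_registration_offset(base, shifted))
```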
A Robust Camera Self-calibration Method Based on Circular Oblique Images
Zhiying Li, Sitong Li, Wei Qin, Pengjie Tao
Pub Date: 2024-05-09 | DOI: 10.5194/isprs-annals-x-1-2024-131-2024
Abstract. It is crucial to calibrate a camera's intrinsic orientation elements and distortion parameters to ensure photogrammetric accuracy. However, using nadir images to perform this task often leads to correlation between the intrinsic and extrinsic orientation elements, which results in different camera calibration outcomes under different self-calibration strategies. This in turn affects the follow-up processes and degrades product accuracy. To overcome this challenge, a robust camera calibration method based on circular oblique images was developed in this study. Firstly, circular oblique images with different viewing angles and camera distances were captured by an unmanned aerial vehicle following a specially designed circular flight path. Then the camera parameters were solved through self-calibration bundle adjustment based on the circular oblique images. Experiments were carried out to compare the robustness and accuracy of the nadir-image-based and circular-oblique-image-based methods. The standard deviation of the focal lengths solved by different self-calibration strategies was reduced from 12.99 pixels to 1.72 pixels, proving that the proposed method weakens the correlation between the intrinsic and extrinsic orientation elements and has strong robustness. The accuracy of aerial triangulation based on the camera parameters solved by the proposed method improved from 34.7 cm to 3.5 cm, illustrating the effectiveness of the proposed method.
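For context, self-calibration bundle adjustment estimates the focal length, principal point and Brown distortion coefficients that appear in the projection below; the sketch shows only the forward camera model, with all parameter values chosen arbitrarily, not the adjustment itself.

```python
import numpy as np

def project_with_distortion(X_cam, f, cx, cy, k1, k2, p1, p2):
    """Pinhole projection with the Brown radial/tangential distortion model
    commonly estimated in self-calibration (illustrative of the parameter
    set, not the authors' adjustment code).

    X_cam: 3D point already expressed in the camera frame (Z > 0).
    """
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]      # normalised coords
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.array([f * x_d + cx, f * y_d + cy])        # pixel coordinates

print(project_with_distortion(np.array([0.3, -0.2, 5.0]),
                              f=8000.0, cx=3000.0, cy=2000.0,
                              k1=-0.1, k2=0.0, p1=0.0, p2=0.0))
```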