Mapping seamless monthly XCO2 in East Asia: Utilizing OCO-2 data and machine learning
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104117
High spatial resolution XCO2 data are key to investigating the mechanisms of carbon sources and sinks. However, current carbon satellites have narrow swaths and unevenly distributed observation points, making it difficult to obtain seamless, full-coverage data. We propose a novel method combining extreme gradient boosting (XGBoost) with particle swarm optimization (PSO) to construct the relationship between OCO-2 XCO2 data and auxiliary data (i.e., vegetation, meteorological, anthropogenic emissions, and LST data), and to map seamless monthly XCO2 concentrations in East Asia from 2015 to 2020. Validation against TCCON ground station data demonstrates the high accuracy of the model, with an average R2 of 0.93, Root Mean Square Error (RMSE) of 1.33, and Mean Absolute Percentage Error (MAPE) of 0.24 % across five sites. The results show that the average atmospheric XCO2 concentration in East Asia increased continuously from 2015 to 2020, with an average annual growth rate of 2.21 ppm/yr. This trend is accompanied by clear seasonal variations, with the highest XCO2 concentration in winter and the lowest in summer. Additionally, anthropogenic activities contributed significantly to XCO2 concentrations, which were higher in urban areas. These findings highlight the dynamics of regional XCO2 concentrations over time and their association with human activities. This study provides a detailed examination of XCO2 distribution and trends in East Asia, enhancing our comprehension of atmospheric CO2 dynamics.
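To make the PSO–XGBoost coupling concrete, here is a minimal sketch of how a particle swarm can tune an XGBoost regressor that maps auxiliary predictors to XCO2. The hyperparameter search ranges, swarm settings, and predictor names are illustrative assumptions, not the paper's configuration.

```python
# Minimal PSO-tuned XGBoost regression sketch (assumed setup, not the paper's exact pipeline).
import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

def fitness(params, X, y):
    """Negative cross-validated RMSE for one particle's hyperparameters."""
    max_depth, learning_rate, subsample = params
    model = xgb.XGBRegressor(
        n_estimators=300,
        max_depth=int(round(max_depth)),
        learning_rate=float(learning_rate),
        subsample=float(subsample),
        objective="reg:squarederror",
    )
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_root_mean_squared_error").mean()

def pso_tune(X, y, n_particles=10, n_iter=20,
             lower=(3, 0.01, 0.5), upper=(10, 0.3, 1.0),
             w=0.7, c1=1.5, c2=1.5):
    lower, upper = np.array(lower), np.array(upper)
    pos = rng.uniform(lower, upper, size=(n_particles, 3))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lower, upper)
        vals = np.array([fitness(p, X, y) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest

# Usage (X: auxiliary predictors such as NDVI, meteorology, emissions, LST; y: OCO-2 XCO2):
# best = pso_tune(X, y)
# model = xgb.XGBRegressor(max_depth=int(round(best[0])),
#                          learning_rate=best[1], subsample=best[2]).fit(X, y)
```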
An exploratory tag map for attributes-in-space tasks
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104127
Geo-text data, which combine geographical locations with textual information (e.g., geo-tagged tweets), are typically visualized using tag maps. Since tags are rich in attribute information, tag maps are an intuitive way to visualize how the attribute domains carried by tags vary across space. However, users may be interested not only in the overall spatial distribution of tags but also in detailed attributes-in-space analyses, such as examining how a subclass of attribute domains is distributed globally or checking whether all attribute subclasses exhibit the same global distribution pattern. To date, it remains unclear how tags should be visually encoded (e.g., by size or color) to support such attributes-in-space tasks in exploratory analysis. In this work, we extended tag maps to support exploratory analysis by distinguishing spatial search into local and global spaces and attribute domains into within- and between-class comparisons, supporting four types of attributes-in-space tasks: global-within, local-within, global-between, and local-between. We evaluated our exploratory tag map through two case studies: investigating major disaster occurrences from 1981 to 2020 and examining the leading causes of death in 2000 and 2019 for Spain, France, Germany, and Italy. We used eye tracking and a questionnaire to evaluate and compare the two approaches. Both methods had similar self-reported usability scores in terms of aesthetics, density, layout, and legibility; however, our exploratory tag map was more effective and efficient and imposed a lower cognitive load.
Verification of the accuracy of Sentinel-1 for DEM extraction error analysis under complex terrain conditions
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104157
The successful launch of the Sentinel-1 satellite in 2014 has provided researchers with a large volume of free SAR images, and its applications in ocean monitoring, land use change, natural disaster monitoring, and emergency response are becoming increasingly mature and precise. The main applications of InSAR can be categorized into surface deformation monitoring and DEM generation. Sentinel-1 was initially designed for surface deformation monitoring; thus, relatively few studies have used Sentinel-1 data for DEM extraction. However, because Sentinel-1 is currently the only SAR satellite whose data are free, openly available, and continuously updated, it is highly important to study the error sources in its DEM generation process and the accuracy of the resulting products. In addition, Sentinel-1 SAR data offer high resolution and all-day, all-weather acquisition, providing a large data source for DEM production. Taking the Ankang area as an example, this paper analyzes the influence of the InSAR spatiotemporal baseline, ground cover, terrain factors, SAR imaging, and other factors on the accuracy of the Sentinel-1-derived DEM, using multisource ground observation data to validate its feasibility for mapping complex terrain. Finally, we discuss how to effectively improve the quality of Sentinel-1 DEM products, providing guidance and a reference for subsequent research on DEM extraction from Sentinel-1 SAR images and for the design of the Sentinel-1C satellite's parameters.
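As a concrete illustration of DEM validation against ground observations, the sketch below computes common accuracy metrics from checkpoint elevations. The column layout, file name, and sampling shortcut are illustrative assumptions; the paper's multisource validation workflow may differ.

```python
# Hedged sketch: validating an InSAR-derived DEM against ground checkpoints.
import numpy as np
import rasterio  # assumes the DEM is available as a GeoTIFF

def dem_accuracy(dem_path, checkpoints):
    """checkpoints: iterable of (x, y, elevation) tuples in the DEM's CRS."""
    errors = []
    with rasterio.open(dem_path) as src:
        xy = [(x, y) for x, y, _ in checkpoints]
        for (x, y, z_ref), sample in zip(checkpoints, src.sample(xy)):
            z_dem = float(sample[0])
            if z_dem != src.nodata:
                errors.append(z_dem - z_ref)
    errors = np.asarray(errors, dtype=float)
    return {
        "n": errors.size,
        "bias": errors.mean(),                      # systematic offset
        "rmse": np.sqrt(np.mean(errors ** 2)),      # overall accuracy
        "std": errors.std(ddof=1),                  # precision
        "le90": np.percentile(np.abs(errors), 90),  # 90th-percentile linear error
    }

# Hypothetical usage:
# stats = dem_accuracy("sentinel1_dem.tif", [(414200.0, 3621150.0, 512.3), ...])
```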
Reconstructing high-resolution DEMs from 3D terrain features using conditional generative adversarial networks
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104115
High-resolution Digital Elevation Models (DEMs) are essential for precise geographic analysis. However, obtaining high-resolution DEMs in regions with dense vegetation, complex terrain, or satellite imagery voids presents substantial challenges. This study introduces a deep learning approach that combines three-dimensional (3D) terrain features with Conditional Generative Adversarial Networks (CGANs) to reconstruct DEMs. The 3D terrain features, such as valley and ridge lines, capture topographic relief patterns and constrain the CGAN during DEM reconstruction. Experiments conducted on the Loess Plateau of Shaanxi confirmed the performance of the proposed method, demonstrating marked improvements in DEM reconstruction accuracy compared with models based on two-dimensional (2D) terrain features. The elevation error of the DEMs reconstructed by the proposed method is 5.30 m, a 71.96 % improvement over the 2D terrain feature method (18.90 m). Meanwhile, the proposed method improves elevation accuracy and slope accuracy by 15.78 % and 17.64 %, respectively, when reconstructing a 5 m high-resolution DEM from a 30 m low-resolution DEM. The proposed method can be flexibly used for reconstructing, repairing, and filling voids in DEM data.
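For orientation, below is a minimal conditional GAN skeleton for DEM super-resolution in which the generator is conditioned on a low-resolution DEM plus rasterized terrain-feature masks (e.g., valley and ridge lines). The channel counts, layer depths, and 6x upscaling factor (suggested by the 30 m to 5 m example) are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of a conditional GAN for terrain-feature-constrained DEM reconstruction.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, in_ch=3, base=64, scale=6):  # in_ch: LR DEM + valley mask + ridge mask
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(base, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 3, padding=1),        # 1-channel high-resolution DEM
        )

    def forward(self, lr_dem_and_features):
        return self.net(lr_dem_and_features)

class Discriminator(nn.Module):
    def __init__(self, in_ch=4, base=64):            # HR DEM + upsampled condition channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),   # PatchGAN-style real/fake map
        )

    def forward(self, hr_dem, condition):
        return self.net(torch.cat([hr_dem, condition], dim=1))

# Training would typically pair an adversarial loss on the discriminator's patch outputs
# with a pixel-wise L1 loss between generated and reference high-resolution DEMs.
```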
Point cloud semantic segmentation with adaptive spatial structure graph transformer
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104105
With the rapid development of LiDAR and artificial intelligence technologies, 3D point cloud semantic segmentation has become a prominent research topic. It can significantly enhance the capabilities of building information modeling, navigation, and environmental perception. However, current deep learning-based methods primarily rely on voxelization or multi-layer convolution for feature extraction and often struggle to differentiate between homogeneous objects or structurally adherent targets in complex real-world scenes. To this end, we propose a Graph Transformer point cloud semantic segmentation network (ASGFormer) tailored for structurally adherent objects. Firstly, ASGFormer combines Graph and Transformer to promote global correlation understanding in the graph. Secondly, spatial indices and position embeddings are constructed based on distance relationships and feature differences. Through a learnable mechanism, the structural weights between points are dynamically adjusted, achieving an adaptive spatial structure within the graph. Finally, dummy nodes are introduced to facilitate global information storage and transmission between layers, effectively addressing information loss at the terminal nodes of the graph. Comprehensive experiments conducted on various real-world 3D point cloud datasets analyze the effectiveness of the proposed ASGFormer through qualitative and quantitative evaluations. ASGFormer outperforms existing approaches on the S3DIS dataset with an OA of 91.3 %, mAcc of 78.0 %, and mIoU of 72.3 %. Moreover, ASGFormer achieves mIoU scores of 72.8 %, 45.5 %, 81.6 %, and 70.1 % on the ScanNet, City-Facade, Toronto 3D, and SemanticKITTI datasets, respectively. Notably, the proposed method effectively differentiates homogeneous, structurally adherent objects, further contributing to the intelligent perception and modeling of complex scenes.
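To illustrate the general idea of attention with a learnable spatial-structure bias, here is a generic single-head graph-attention layer over each point's k nearest neighbours, where the attention logits receive a bias computed from relative positions and feature differences. This is a simplified stand-in for the concept, not ASGFormer's actual layer design; the dimensions and neighbourhood size are assumptions.

```python
# Hedged sketch: k-NN graph attention with an adaptive spatial-structure bias.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveGraphAttention(nn.Module):
    def __init__(self, dim=64, k=16):
        super().__init__()
        self.k = k
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        # Bias network over [relative position (3), feature difference (dim)]
        self.pos_bias = nn.Sequential(nn.Linear(3 + dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, xyz, feats):
        # xyz: (N, 3) coordinates, feats: (N, dim) per-point features
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices       # (N, k) neighbours
        q = self.q(feats).unsqueeze(1)                                        # (N, 1, dim)
        k_feats, v_feats = self.kv(feats)[idx].chunk(2, dim=-1)               # (N, k, dim) each
        rel = torch.cat([xyz[idx] - xyz.unsqueeze(1),
                         feats[idx] - feats.unsqueeze(1)], dim=-1)            # (N, k, 3 + dim)
        logits = (q * k_feats).sum(-1) / feats.shape[-1] ** 0.5               # scaled dot product
        logits = logits + self.pos_bias(rel).squeeze(-1)                      # adaptive structure bias
        attn = F.softmax(logits, dim=-1)
        return (attn.unsqueeze(-1) * v_feats).sum(dim=1)                      # (N, dim)

# out = AdaptiveGraphAttention()(torch.rand(1024, 3), torch.rand(1024, 64))
```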
NR-IQA for UAV hyperspectral image based on distortion constructing, feature screening, and machine learning
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104130
Assessing the quality of UAV-HSIs (unmanned aerial vehicle hyperspectral images) is crucial for evaluating sensor performance, identifying distortion types, and measuring data inversion accuracy. Because reference images are absent, UAV-HSI quality assessment leans towards no-reference image quality assessment (NR-IQA), offering versatile applications. Machine learning-based NR-IQA methods for remote sensing images have emerged; however, NR-IQA methods for UAV-HSIs containing multiple types of distortion have not been developed. This paper introduces an NR-IQA method for UAV-HSIs that employs machine learning techniques. We summarize and simulate distortion types in UAV-HSIs, constructing a quality assessment dataset based on 23 original high-quality and 806 simulated degraded UAV-HSIs. Extracting 129 features encompassing texture, color, transform domain, structural, and statistical aspects, we form seven feature sets through random and filtered feature selection algorithms. Ten machine learning quality assessment models are trained using this dataset and these feature sets. The results showed that the model with the highest evaluation accuracy was extra trees (ET) (R2 = 0.928, RMSE = 0.326, RPD = 3.601), using feature set 1, which fuses Tamura texture, color, wavelet transform, and mean-subtracted contrast-normalized (MSCN) coefficient features (11 features in total); the PLCC and SROCC between its predicted and true quality scores reached 0.963 and 0.925, respectively. In addition, the random forest (RF), gradient boosting decision tree (GBDT), generalized regression neural network (GRNN), and extreme learning machine (ELM) models also had high evaluation accuracies (R2 > 0.9 and RPD > 2.5). These findings underscore the applicability of our proposed machine learning-based NR-IQA method for assessing the quality of UAV-HSIs containing noise, blur, stripe noise, and multiple distortions. Additionally, this study serves as a reference for selecting features and models for other hyperspectral image quality assessments.
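As a small illustration of the feature-plus-regressor pattern, the sketch below computes MSCN coefficients for one band and fits an extra-trees quality regressor on a pre-built feature table. The Gaussian window, stabilizing constant, and summary statistics are illustrative assumptions rather than the paper's exact feature definitions.

```python
# Hedged sketch: MSCN features and an extra-trees NR-IQA regressor.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_predict

def mscn(band, sigma=7 / 6, c=1.0):
    """Mean-subtracted contrast-normalized coefficients of a single image band."""
    band = band.astype(np.float64)
    mu = gaussian_filter(band, sigma)
    sigma_map = np.sqrt(np.abs(gaussian_filter(band ** 2, sigma) - mu ** 2))
    return (band - mu) / (sigma_map + c)

def mscn_features(band):
    """A few summary statistics (mean, std, skewness) of the MSCN map."""
    coeffs = mscn(band)
    return [coeffs.mean(), coeffs.std(),
            ((coeffs - coeffs.mean()) ** 3).mean() / coeffs.std() ** 3]

# X: (n_images, n_features) table of texture/color/transform/MSCN features, y: quality scores
# model = ExtraTreesRegressor(n_estimators=500, random_state=0)
# y_pred = cross_val_predict(model, X, y, cv=5)
```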
Spatiotemporal weighted neural network reveals surface seawater pCO2 distributions and underlying environmental mechanisms in the North Pacific Ocean
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104120
The North Pacific Ocean plays a pivotal role as a carbon sink within the global carbon cycle. However, a comprehensive understanding of the spatiotemporal dynamics of carbon dioxide concentration and its determinants in this region remains elusive because of its vast extent and the intricacies of the influencing factors, and previous research on carbon dioxide partial pressure in the North Pacific Ocean is relatively scarce. While prevalent machine learning methodologies have been extensively applied to predict the partial pressure of ocean carbon dioxide (pCO2), their limited interpretability has impeded substantial progress in elucidating the underlying mechanisms. This study introduces a gridded spatiotemporal neural network weighted regression (GSTNNWR) model to illuminate temporal and spatial relationships among relevant environmental variables and pCO2. The GSTNNWR model achieves high-precision, high-resolution forecasts of surface pCO2 in the North Pacific Ocean, demonstrating commendable performance (R2 = 0.863 and RMSE = 15.123 µatm). Simultaneously, we obtain a quantitative characterization of how various environmental factors influence pCO2 across different temporal and spatial scales. The results show a dominant positive effect of temperature on pCO2, with an average normalized coefficient of 0.28, and variability in the effects of chlorophyll and salinity on pCO2 across spatial and temporal locations and temperatures, with average normalized coefficients of −0.10 and −0.04, respectively. The findings of our study will provide insights into the mechanisms and interactions within the North Pacific carbon cycle, contributing to a better understanding of ocean carbon sink formation and the dynamic regulation of the North Pacific carbon cycle.
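To convey the core idea of spatiotemporally weighted regression, the sketch below computes Gaussian space-time weights around a prediction point and solves a weighted local fit. GSTNNWR learns the weighting with a neural network; this simplified kernel form, the bandwidths, and the predictor list are illustrative assumptions.

```python
# Hedged sketch: spatiotemporally weighted local regression at one query point.
import numpy as np

def st_weights(coords, times, q_coord, q_time, h_space=500.0, h_time=30.0):
    """coords in km, times in days; Gaussian weights combining space and time distances."""
    d_space = np.linalg.norm(coords - q_coord, axis=1)
    d_time = np.abs(times - q_time)
    d_st = np.sqrt((d_space / h_space) ** 2 + (d_time / h_time) ** 2)
    return np.exp(-0.5 * d_st ** 2)

def weighted_local_fit(X, y, w):
    """Weighted least squares: local coefficients linking environmental drivers to pCO2."""
    Xd = np.hstack([np.ones((X.shape[0], 1)), X])   # add intercept
    W = np.diag(w)
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

# Hypothetical usage:
# w = st_weights(train_coords, train_times, query_coord, query_time)
# beta = weighted_local_fit(train_X, train_y, w)    # train_X: SST, chlorophyll, salinity, ...
```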
Characterizing the livingness of geographic space across scales using global nighttime light data
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104136
The hierarchical structure of geographic or urban space can be well-characterized by the concept of living structure, a term coined by Christopher Alexander. All spaces, regardless of their size, possess certain degrees of livingness that can be mathematically quantified. While previous studies have successfully quantified the livingness of small spaces such as images or artworks, the livingness of geographic space has not yet been characterized in a recursive manner. Zipf’s law has been observed in urban systems and intra-urban structures. However, whether Zipf’s law is applicable to the hierarchical substructures of geographic space has rarely been investigated. In this study, we recursively extract the substructures of geographic space using global nighttime light imagery. We quantify the livingness of global cities considering both the number of substructures (S) and their inherent hierarchy (H). We further investigate the scaling properties of the extracted substructures across scales and the relationships between livingness and population for global cities. The results demonstrate that all substructures of global cities form a living structure that conforms to Zipf’s law. The degree of livingness captures population distribution better than nighttime light intensity values do for global cities. This study makes three contributions: First, it considers global cities as a whole to quantify spatial livingness. Second, it applies the concept of livingness to cities to better capture the spatial structure of the population using nighttime light data. Third, it introduces a novel method to recursively extract substructures from nighttime images, offering a valuable tool to investigate urban structures across multiple spatial scales.
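For intuition, here is a rough sketch of recursive substructure extraction from a nighttime-light raster using mean-based head/tail breaks, plus a rank-size check of Zipf's law. The recursion rule, the 40 % head threshold, and the stopping criteria are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: recursive substructure extraction and a Zipf's-law check.
import numpy as np
from scipy.ndimage import label

def substructures(image, min_pixels=4, max_depth=8):
    """Return pixel counts of nested bright regions above successive mean thresholds."""
    sizes = []

    def recurse(mask, depth):
        values = image[mask]
        if depth >= max_depth or values.size < min_pixels:
            return
        head = mask & (image > values.mean())            # head/tail break at the mean
        if head.sum() == 0 or head.sum() / values.size > 0.4:
            return                                        # head must stay a minority
        labeled, n = label(head)
        for i in range(1, n + 1):
            region = labeled == i
            if region.sum() >= min_pixels:
                sizes.append(int(region.sum()))
                recurse(region, depth + 1)

    recurse(np.ones_like(image, dtype=bool), 0)
    return np.array(sizes)

def zipf_exponent(sizes):
    """Slope of log(size) vs log(rank); a value near -1 indicates Zipf's law."""
    s = np.sort(sizes)[::-1].astype(float)
    ranks = np.arange(1, s.size + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(s), 1)
    return slope

# sizes = substructures(ntl_array)          # ntl_array: 2-D nighttime-light intensities
# print(sizes.size, zipf_exponent(sizes))   # S = number of substructures; slope near -1 -> Zipf
```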
A benchmark approach and dataset for large-scale lane mapping from MLS point clouds
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104139
Accurate lane maps with semantics are crucial for various applications, such as high-definition maps (HD Maps), intelligent transportation systems (ITS), and digital twins. Manual annotation of lanes is labor-intensive and costly, prompting researchers to explore automatic lane extraction methods. This paper presents an end-to-end large-scale lane mapping method that considers both lane geometry and semantics. The study represents lane markings as polylines with uniformly sampled points and associated semantics, allowing adaptation to varying lane shapes. Additionally, we propose an end-to-end network to extract lane polylines from mobile laser scanning (MLS) data, enabling the inference of vectorized lane instances without complex post-processing. The network consists of three components: a feature encoder, a column proposal generator, and a lane information decoder. The feature encoder encodes textural and structural information of lane markings to enhance the method’s robustness to data imperfections, such as varying lane intensity, uneven point density, and occlusion-induced incomplete data. The column proposal generator generates regions of interest for the subsequent decoder. Leveraging the embedded multi-scale features from the feature encoder, the lane decoder effectively predicts lane polylines and their associated semantics without requiring step-by-step conditional inference. Comprehensive experiments conducted on three lane datasets demonstrate the performance of the proposed method, even in the presence of incomplete data and complex lane topology. Furthermore, the datasets used in this work, including source ground points, generated bird’s eye view (BEV) images, and annotations, as well as the code, will be made publicly available with the publication of the paper.
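To show how the uniformly sampled polyline representation works in practice, here is a small sketch that resamples an ordered lane polyline to a fixed number of evenly spaced points. The point count and the attached semantic label are illustrative assumptions.

```python
# Hedged sketch: uniform arc-length resampling of a lane polyline.
import numpy as np

def resample_polyline(points, n_points=20):
    """points: (M, 2 or 3) ordered lane vertices -> (n_points, dim) evenly spaced samples."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])          # cumulative arc length
    targets = np.linspace(0.0, s[-1], n_points)
    return np.stack([np.interp(targets, s, points[:, d])
                     for d in range(points.shape[1])], axis=1)

# lane = {"points": resample_polyline(raw_vertices), "semantics": "dashed_white"}  # assumed label
```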
Estimating the expansion and reduction of agricultural extent in Egypt using Landsat time series
Pub Date: 2024-09-01 | DOI: 10.1016/j.jag.2024.104141
Increasing population and the consequent rise in the demand for food and water resources pose significant challenges for the future of agriculture in Egypt. Rapid large-scale agricultural expansion has occurred in the country to meet the growing demand, but agricultural loss from urban encroachment and field abandonment remains prevalent. Documenting the full spectrum of changes within Egypt’s agricultural systems is crucial for developing effective land-use policies that improve food security. Here we map and estimate the areal extent of multiple types of agricultural change in Egypt (i.e., agricultural gain, agricultural abandonment, and agricultural loss from urban growth) by applying the Landsat-based detection of trends in disturbance and recovery (LandTrendr) algorithm, a widely used temporal segmentation algorithm for time series. First, we used LandTrendr to identify areas of agricultural gain and loss throughout Egypt from 1987 to 2019. Second, we combined land-cover maps and the LandTrendr results to create a comprehensive land-cover change map. Lastly, we evaluated the accuracy of our findings and estimated per-class areas with quantified uncertainty using high-quality reference data. Our results reveal a notable expansion in Egypt’s agricultural land area. However, this growth is accompanied by the widespread loss of prime agricultural land, a consequence of urban development and agricultural abandonment. This study emphasizes the pressing need for the implementation of sustainable land-use policies in Egypt, particularly as climate change will exacerbate pressures on the agricultural sector in the future.
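As a concrete illustration of estimating per-class areas with quantified uncertainty from reference data, the sketch below applies the standard stratified estimator built on an error matrix (in the spirit of Olofsson et al., 2014). The sample counts and mapped areas in the usage comment are invented for illustration, not the paper's numbers.

```python
# Hedged sketch: stratified area estimation with standard errors from an error matrix.
import numpy as np

def stratified_area_estimate(error_matrix, mapped_area):
    """error_matrix[i, j]: reference samples of class j within map stratum i.
    mapped_area[i]: mapped area of stratum i (e.g., ha).
    Returns (adjusted area per class, standard error per class)."""
    error_matrix = np.asarray(error_matrix, dtype=float)
    mapped_area = np.asarray(mapped_area, dtype=float)
    n_i = error_matrix.sum(axis=1)                        # samples per map stratum
    w_i = mapped_area / mapped_area.sum()                 # stratum area weights
    p_ij = (w_i[:, None] * error_matrix) / n_i[:, None]   # estimated area proportions
    area = p_ij.sum(axis=0) * mapped_area.sum()           # adjusted area per reference class
    var = ((w_i[:, None] ** 2)
           * (error_matrix / n_i[:, None])
           * (1 - error_matrix / n_i[:, None])
           / (n_i[:, None] - 1)).sum(axis=0)
    se_area = np.sqrt(var) * mapped_area.sum()
    return area, se_area

# Example with three strata (gain, loss, stable) and invented counts:
# em = [[45, 3, 2], [4, 38, 8], [1, 5, 94]]
# area, se = stratified_area_estimate(em, mapped_area=[12000, 8000, 180000])
# print(area, 1.96 * se)   # area estimates with 95 % confidence-interval half-widths
```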