Pub Date: 2022-02-01 · DOI: 10.14358/pers.21-00021r2
Steven Spiegel, Casey Shanks, Jorge Chen
3D object recognition is one of the most popular areas of study in computer vision. Many of the more recent algorithms focus on indoor point clouds, the classification of 3D geometric objects, and the segmentation of outdoor 3D scenes. One of the challenges of the classification pipeline is finding adequate and accurate training data. Hence, this article evaluates the accuracy of a model trained on a synthetically generated data set called SynthCity, tested on two mobile laser-scan data sets. Varying levels of noise were applied to the training data to reflect the varying levels of noise in different scanners. The chosen deep-learning algorithm was Kernel Point Convolution, a convolutional neural network that uses kernel points in Euclidean space for convolution weights.
Title: Effectiveness of Deep Learning Trained on SynthCity Data for Urban Point-Cloud Classification
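The noise-augmentation step described above (applying varying levels of scanner-like noise to the synthetic training points) can be sketched as follows. This is a minimal NumPy illustration, not the authors' exact pipeline; the `jitter_point_cloud` helper and the sigma values are assumptions for illustration.

```python
import numpy as np

def jitter_point_cloud(points, sigma, seed=None):
    """Add zero-mean Gaussian noise with standard deviation `sigma`
    (in the cloud's linear units, e.g. metres) to an (N, 3) array
    of XYZ points, simulating scanner measurement noise."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)

# Example: simulate three scanner noise levels on synthetic training data
cloud = np.zeros((1000, 3))          # stand-in for SynthCity points
for sigma in (0.005, 0.01, 0.02):    # 5 mm, 1 cm, 2 cm (assumed values)
    noisy = jitter_point_cloud(cloud, sigma, seed=0)
```

Each noise level would then yield its own augmented copy of the training set before feeding the network.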
Pub Date: 2022-01-01 · DOI: 10.14358/pers.21-00039r2
Xinyu Ding, Qunming Wang
Recently, the method of spatiotemporal spectral unmixing (STSU) was developed to fully explore multi-scale temporal information (e.g., MODIS–Landsat image pairs) for spectral unmixing of coarse time series (e.g., MODIS data). To further enhance its application for timely monitoring, the real-time STSU (RSTSU) method was developed for real-time data. In RSTSU, we usually choose a spatially complete MODIS–Landsat image pair as auxiliary data. Due to cloud contamination, the temporal distance between the required effective auxiliary data and the real-time data to be unmixed can be large, causing great land cover changes and uncertainty in the extracted unchanged pixels (i.e., training samples). In this article, to extract more reliable training samples, we propose choosing the auxiliary MODIS–Landsat data temporally closest to the prediction time. To deal with the cloud contamination in the auxiliary data, we propose an augmented sample-based RSTSU (ARSTSU) method. ARSTSU selects and augments the training samples extracted from the valid (i.e., non-cloud) area to synthesize more training samples, and then trains an effective learning model to predict the proportions. ARSTSU was validated using two MODIS data sets in the experiments. ARSTSU expands the applicability of RSTSU by solving the problem of cloud contamination in temporal neighbors in actual situations.
Title: Augmented Sample-Based Real-Time Spatiotemporal Spectral Unmixing
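ARSTSU itself trains a learning model to predict class proportions; for context, the classic baseline that spectral unmixing builds on is the linear mixture model, which can be sketched per coarse pixel as below. The `linear_unmix` helper, the endmember matrix, and the clip-and-renormalize step are illustrative assumptions — a simplification of fully constrained unmixing, not the paper's method.

```python
import numpy as np

def linear_unmix(pixel, endmembers):
    """Estimate class proportions for one coarse-pixel spectrum.

    pixel      : (B,) observed spectrum (e.g. a MODIS pixel)
    endmembers : (B, C) matrix whose columns are pure-class spectra

    Solves the linear mixture model pixel ~= endmembers @ p by least
    squares, then clips to non-negative values and renormalizes so
    the proportions sum to 1 (a simplification of fully constrained
    unmixing)."""
    p, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    p = np.clip(p, 0.0, None)
    s = p.sum()
    return p / s if s > 0 else np.full(len(p), 1.0 / len(p))

# Two endmembers over three bands; the pixel is a 30/70 mixture
E = np.array([[0.1, 0.8],
              [0.2, 0.6],
              [0.3, 0.4]])
mix = E @ np.array([0.3, 0.7])
print(linear_unmix(mix, E))   # recovers proportions close to [0.3, 0.7]
```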
Pub Date: 2022-01-01 · DOI: 10.14358/pers.88.1.9
S. Hinz, A. Braun, M. Weinmann
Title: Object and Pattern Recognition in Remote Sensing
Pub Date: 2022-01-01 · DOI: 10.14358/pers.21-00024r2
C. Akumu, Ezechirinum Amadi
The mapping of southern yellow pines (loblolly, shortleaf, and Virginia pines) is important to supporting forest inventory and the management of forest resources. The overall aim of this study was to examine the integration of Landsat Operational Land Imager (OLI) optical data with Sentinel-1 microwave C-band satellite data and vegetation indices in mapping the canopy cover of southern yellow pines. Specifically, this study assessed the overall mapping accuracies of the canopy cover classification of southern yellow pines derived using four data-integration scenarios: 1) Landsat OLI alone; 2) Landsat OLI and Sentinel-1; 3) Landsat OLI with vegetation indices derived from satellite data (normalized difference vegetation index, soil-adjusted vegetation index, modified soil-adjusted vegetation index, transformed soil-adjusted vegetation index, and infrared percentage vegetation index); and 4) Landsat OLI with Sentinel-1 and vegetation indices. The results showed that the integration of Landsat OLI reflectance bands with Sentinel-1 backscattering coefficients and vegetation indices yielded the best overall classification accuracy, about 77%, and standalone Landsat OLI the weakest, approximately 67%. The findings in this study demonstrate that the addition of backscattering coefficients from Sentinel-1 and vegetation indices positively contributed to the mapping of southern yellow pines.
Title: Examining the Integration of Landsat Operational Land Imager with Sentinel-1 and Vegetation Indices in Mapping Southern Yellow Pines (Loblolly, Shortleaf, and Virginia Pines)
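The vegetation indices named in the abstract are standard band-ratio formulas. A small sketch of three of them (NDVI, SAVI, and the infrared percentage vegetation index) follows, using the textbook definitions rather than anything specific to this study; the soil-adjustment factor L = 0.5 is the conventional default.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - R) / (NIR + R)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index (Huete 1988):
    ((NIR - R) / (NIR + R + L)) * (1 + L), with soil factor L."""
    return (nir - red) * (1.0 + L) / (nir + red + L)

def ipvi(nir, red):
    """Infrared percentage vegetation index: NIR / (NIR + R),
    which equals (NDVI + 1) / 2."""
    return nir / (nir + red)

# Reflectances for a moderately vegetated pixel
nir, red = 0.45, 0.15
print(ndvi(nir, red))   # 0.5
```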
Pub Date: 2022-01-01 · DOI: 10.14358/pers.88.1.11
C. Mugnier
Title: Grids and Datums Update: This month we look at the Republic of Vanuatu
Pub Date: 2022-01-01 · DOI: 10.14358/pers.21-00030r2
Qing Ding, Z. Shao, Xiao Huang, O. Altan, Yewen Fan
Taking the Futian District as the research area, this study proposed an effective urban land cover mapping framework fusing optical and SAR data. To reduce model complexity and improve the mapping results, various feature selection methods were compared and evaluated. The results showed that feature selection can eliminate irrelevant features, slightly increase the mean correlation between features, and significantly improve classification accuracy and computational efficiency. The recursive feature elimination-support vector machine (RFE-SVM) model obtained the best results, with an overall accuracy of 89.17% and a kappa coefficient of 0.8695. In addition, this study showed that the fusion of optical and SAR data can effectively improve mapping and reduce the confusion between different land covers. The novelty of this study lies in its insight into the merits of multi-source data fusion and feature selection for land cover mapping over complex urban environments, and in its evaluation of the performance differences between feature selection methods.
Title: Improving Urban Land Cover Mapping with the Fusion of Optical and SAR Data Based on Feature Selection Strategy
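The overall accuracy and kappa coefficient reported above are standard measures computed from a confusion matrix; a brief sketch using their textbook definitions (not the authors' code):

```python
import numpy as np

def overall_accuracy(confusion):
    """Fraction of correctly classified samples (trace / total)."""
    cm = np.asarray(confusion, dtype=float)
    return np.trace(cm) / cm.sum()

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                           # observed agreement
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2   # chance agreement
    return (po - pe) / (1.0 - pe)

# Toy two-class confusion matrix (illustrative numbers)
m = [[50, 10],
     [5, 35]]
print(overall_accuracy(m))  # 0.85
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy in classification studies like this one.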
Pub Date: 2022-01-01 · DOI: 10.14358/pers.21-00025r2
Muhammad Nasar Ahmad, Z. Shao, O. Altan
This study identifies the locust outbreak that occurred in February 2020. It is not possible to conduct ground-based surveys to monitor such huge disasters in a timely and adequate manner. Therefore, we used a combination of automatic and manual remote sensing data processing techniques to assess the aftereffects of the locust attack effectively. We processed the MODIS normalized difference vegetation index (NDVI) manually in ENVI and the Landsat 8 NDVI on the Google Earth Engine (GEE) cloud computing platform. The results showed that (a) NDVI computation on GEE is more effective, prompt, and reliable than manual NDVI computation; (b) the locust disaster had a strong effect in northern Sindh, where Thul, Ghari Khairo, Garhi Yaseen, Jacobabad, and Ubauro are the most vulnerable areas; and (c) the NDVI decreased sharply in 2020, from 0.92 to 0.68 according to Landsat and from 0.81 to 0.65 according to MODIS satellite imagery. The results clearly indicate an abrupt decrease in vegetation in 2020 due to the locust disaster. This is a major threat to crop yield and food production, because agriculture provides a major portion of the food chain and gross domestic product of Sindh, Pakistan.
Title: Effect of Locust Invasion and Mitigation Using Remote Sensing Techniques: A Case Study of North Sindh Pakistan
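The NDVI drops reported in the abstract can be expressed as relative declines; a trivial check of the quoted figures (the `ndvi_decline` helper is an illustrative name, not from the study):

```python
def ndvi_decline(before, after):
    """Relative NDVI decrease between two dates, as a fraction of
    the earlier value."""
    return (before - after) / before

# Values reported in the abstract for the 2020 locust outbreak
print(round(ndvi_decline(0.92, 0.68), 3))  # Landsat: 0.261 (~26% drop)
print(round(ndvi_decline(0.81, 0.65), 3))  # MODIS:   0.198 (~20% drop)
```

Both sensors thus indicate a vegetation loss on the order of one-fifth to one-quarter of the pre-outbreak NDVI.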
Pub Date: 2022-01-01 · DOI: 10.14358/pers.21-00012r2
Raechel A. Portelli, P. Pope
Human experts are integral to the success of computational earth observation. They perform various visual decision-making tasks, from selecting data and training machine-learning algorithms to interpreting accuracy and credibility. Research concerning the various human factors which affect performance has a long history within the fields of earth observation and the military. Shifts in the analytical environment from analog to digital workspaces necessitate continued research, focusing on human-in-the-loop processing. This article reviews the history of human-factors research within the field of remote sensing and suggests a framework for refocusing the discipline's efforts to understand the role that humans play in earth observation.
Title: Sensing and Human Factors Research: A Review
Pub Date: 2022-01-01 · DOI: 10.14358/pers.21-00062r2
Wanxuan Geng, Weixun Zhou, Shuanggen Jin
Traditional urban scene-classification approaches focus on images taken either by satellite or in aerial view. Although single-view images are able to achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views is needed to further improve performance. Therefore, we present a complementary-information learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, the proposed CILM takes aerial and ground-level image pairs as input to learn view-specific features for later fusion to integrate the complementary information. To train CILM, a unified loss combining cross-entropy and contrastive losses is used to make the network more robust. Once CILM is trained, the features of each view are extracted via the two proposed feature-extraction scenarios and then fused to train a support vector machine classifier. The experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that it is an effective model for learning complementary information and thus improving urban scene classification.
Title: Multi-View Urban Scene Classification with a Complementary-Information Learning Model
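The abstract describes a unified loss combining cross-entropy and contrastive terms but does not give its exact formulation. The sketch below therefore uses the standard softmax cross-entropy and the classic pairwise contrastive loss of Hadsell et al. (2006), with `lam` as an assumed balancing weight; the real CILM loss may differ.

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single example (numerically stable)."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def contrastive(f_a, f_g, same_scene, margin=1.0):
    """Pairwise contrastive loss between an aerial feature f_a and a
    ground-level feature f_g: pull matching pairs together, push
    non-matching pairs apart up to `margin`."""
    d = np.linalg.norm(f_a - f_g)
    return d**2 if same_scene else max(0.0, margin - d)**2

def unified_loss(logits, label, f_a, f_g, same_scene, lam=1.0):
    """Weighted sum of the classification and contrastive terms;
    `lam` is an assumed balancing hyperparameter."""
    return cross_entropy(logits, label) + lam * contrastive(f_a, f_g, same_scene)
```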