Gaussian Mixture Model of Ground Filtering Based on Hierarchical Curvature Constraints for Airborne Lidar Point Clouds
Longjie Ye, Ka Zhang, W. Xiao, Y. Sheng, D. Su, Pengbo Wang, Shan Zhang, Na Zhao, Hui Chen
Pub Date: 2021-09-01 | DOI: 10.14358/pers.87.20-00080
This paper proposes a ground filtering method based on a Gaussian mixture model with hierarchical curvature constraints. First, the thin-plate spline function is iteratively applied to interpolate the reference surface. Second, gradually changing grid sizes and curvature thresholds are used to construct hierarchical constraints. Finally, an adaptive height-difference classifier based on the Gaussian mixture model is proposed: using the latent variables obtained by the expectation-maximization algorithm, the posterior probability of each point is computed, so that ground and object points can be labeled according to the computed probabilities. Fifteen data samples provided by the International Society for Photogrammetry and Remote Sensing are used to verify the proposed method, which is also compared with eight classical filtering algorithms. Experimental results demonstrate that the average total error and average Cohen's kappa coefficient of the proposed method are 6.91% and 80.9%, respectively. In general, it performs better in areas with terrain discontinuities and bridges.
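The classification step described in the abstract (latent variables from expectation-maximization, then a posterior probability per point) can be illustrated with a two-component, one-dimensional Gaussian mixture over point-to-surface height differences. The sketch below is a generic EM implementation on synthetic data, not the authors' code; the hierarchical curvature constraints and the thin-plate spline surface are omitted, and the initialization and component count are assumptions.

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Fit a 2-component 1-D Gaussian mixture to height differences x via EM
    and return the posterior probability that each point is ground (the
    component with the smaller mean height difference)."""
    mu = np.array([x.min(), x.max()])          # crude but asymmetric init
    var = np.array([np.var(x), np.var(x)]) + 1e-9
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities (the latent variables)
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    # final posterior under the converged parameters
    dens = pi / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
    resp = dens / dens.sum(axis=1, keepdims=True)
    return resp[:, np.argmin(mu)]

# synthetic residuals: ground near 0 m, objects (buildings, vegetation) near 3 m
rng = np.random.default_rng(0)
dz = np.concatenate([rng.normal(0.0, 0.1, 500), rng.normal(3.0, 0.5, 100)])
p_ground = em_gmm_1d(dz)
labels = p_ground > 0.5  # True = ground
```

The point of the adaptive classifier is that no fixed height-difference threshold is needed: the decision boundary follows from the fitted mixture.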
Double Adaptive Intensity-Threshold Method for Uneven Lidar Data to Extract Road Markings
C. Ye, Hongfu Li, Rui-long Wei, Lixuan Wang, Tianbo Sui, Wensen Bai, Pirasteh Saied
Pub Date: 2021-09-01 | DOI: 10.14358/pers.20-00099
Due to the large volume and high redundancy of point clouds, road-marking extraction algorithms face many difficulties, especially with uneven lidar point clouds. To extract road markings efficiently, this study presents a novel method that handles the uneven density distribution of point clouds and the high reflection intensity of road markings. The method first segments the point-cloud data into blocks perpendicular to the vehicle trajectory. It then applies a double adaptive intensity-threshold method to extract road markings from the road surface. Finally, it applies an adaptive spatial density filter, based on the density distribution of the point-cloud data, to remove false road-marking points. The average completeness, correctness, and F-measure of road-marking extraction are 0.827, 0.887, and 0.854, respectively, indicating that the proposed method is efficient and robust.
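The block-then-threshold pipeline can be sketched as follows. The abstract does not give the paper's exact threshold criterion, so this sketch substitutes Otsu's method as a stand-in per-block adaptive threshold; the block length, intensity model, and all data below are assumptions for illustration only.

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's method on 1-D intensities: pick the threshold maximizing
    between-class variance. A stand-in for the paper's adaptive criterion."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                       # class-0 weight per candidate cut
    w1 = 1 - w0
    cum_mu = np.cumsum(p * centers)
    mu_t = cum_mu[-1]
    mu0 = cum_mu / np.where(w0 > 0, w0, 1)
    mu1 = (mu_t - cum_mu) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

def extract_markings(intensity, along, block_len=2.0):
    """Split points into blocks along the trajectory coordinate `along`,
    then threshold intensity per block so the threshold adapts to local
    intensity statistics (range and density effects vary along the road)."""
    block = np.floor(along / block_len)
    marks = np.zeros(len(intensity), dtype=bool)
    for b in np.unique(block):
        idx = block == b
        marks[idx] = intensity[idx] > otsu_threshold(intensity[idx])
    return marks

# synthetic strip: low-intensity asphalt, high-intensity paint
rng = np.random.default_rng(1)
along = rng.uniform(0, 4, 500)
is_mark = rng.random(500) < 0.2
intensity = np.where(is_mark, rng.normal(0.8, 0.03, 500), rng.normal(0.2, 0.03, 500))
marks = extract_markings(intensity, along)
```

Per-block thresholding is what makes the method robust to uneven data: a single global threshold would over- or under-segment wherever the local intensity distribution shifts.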
SectorInsight.edu—Making a Difference in a Developing Country – One Student at a Time
Jennifer Murphy
Pub Date: 2021-09-01 | DOI: 10.14358/pers.87.9.606
Detecting Geo-Positional Bias in Imagery Collected Using Small UASs
J. Thayn, Aaron M. Paque, Megan C. Maher
Pub Date: 2021-09-01 | DOI: 10.14358/pers.20-00124
Statistical methods for detecting bias in global positioning system (GPS) error are presented and applied to imagery collected using three common unmanned aerial systems (UASs). Imagery processed without ground control points (GCPs) had horizontal errors of 1.0–2.5 m; moreover, the errors had unequal variances and significant directional bias, and did not conform to the expected statistical distribution, so they should be considered unreliable. When GCPs were used, horizontal errors decreased to less than 5 cm, and the errors had equal variances, directional uniformity, and conformed to the expected distribution. The analysis identified a longitudinal bias in some of the reference data, which were subsequently excluded from the analysis. Had these data been retained, the estimates of positional accuracy would have been unreliable and inaccurate. These results strongly suggest that examining GPS data for bias should be a much more common practice.
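The three diagnostics named in the abstract (nonzero mean error, unequal variances, directional bias) map onto standard tests. The sketch below shows common choices; the paper's exact tests are not stated in the abstract, so the specific tests here (one-sample t-test, Levene's test, large-sample Rayleigh test) are assumptions.

```python
import numpy as np
from scipy import stats

def bias_checks(dx, dy):
    """Diagnostics for bias in horizontal GPS error components (metres).
    Generic checks in the spirit of the abstract, not necessarily the
    paper's exact procedure."""
    n = len(dx)
    # 1. nonzero mean error in each axis (one-sample t-test against 0)
    p_mean_x = stats.ttest_1samp(dx, 0.0).pvalue
    p_mean_y = stats.ttest_1samp(dy, 0.0).pvalue
    # 2. unequal axis variances (Levene's test, robust to non-normality)
    p_equal_var = stats.levene(dx, dy).pvalue
    # 3. directional uniformity of error bearings (Rayleigh test,
    #    large-sample approximation p ~= exp(-n * R^2))
    theta = np.arctan2(dy, dx)
    r = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
    p_uniform_dir = np.exp(-n * r**2)
    return {"p_mean_x": p_mean_x, "p_mean_y": p_mean_y,
            "p_equal_var": p_equal_var, "p_uniform_dir": p_uniform_dir}

# an obviously biased error sample: mean offset east, anisotropic spread
rng = np.random.default_rng(2)
biased = bias_checks(rng.normal(0.5, 0.1, 200), rng.normal(0.0, 0.3, 200))
```

Small p-values flag the error field as biased, which is exactly the condition under which a single RMSE figure misrepresents positional accuracy.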
Photogrammetry and Electrical Resistivity Tomography for the Investigation of the Clandestine Graves in Colombia
J. Drake, Carlos Martín Molina, Edier F Avila, A. Baena
Pub Date: 2021-09-01 | DOI: 10.14358/pers.87.9.597
GIS Tips & Tricks — You don't have to accept the defaults in GlobalMapper
Brittany Capra, A. Karlin
Pub Date: 2021-08-01 | DOI: 10.14358/pers.87.8.541
Unsupervised Representation High-Resolution Remote Sensing Image Scene Classification via Contrastive Learning Convolutional Neural Network
Fengpeng Li, Jiabao Li, Wei Han, Ruyi Feng, Lizhe Wang
Pub Date: 2021-08-01 | DOI: 10.14358/pers.87.8.577
Inspired by the outstanding achievements of deep learning, supervised deep learning representation methods for high-spatial-resolution remote sensing image scene classification have obtained state-of-the-art performance. However, supervised methods need a considerable amount of labeled data to capture class-specific features, which limits their application when only a few labeled training samples are available. To address this issue, this work proposes an unsupervised deep learning representation method for high-resolution remote sensing image scene classification. The proposed method, based on contrastive learning, narrows the distance between positive view pairs (color channels belonging to the same image) and widens the gap between negative view pairs (color channels from different images) to obtain class-specific representations of the input data without any supervised information. The classifier uses the features extracted by the convolutional neural network (CNN)-based feature extractor, together with the label information of the training data, to model the space of each category, and then makes predictions on the test data using linear regression. Compared with existing unsupervised deep learning representation methods for high-resolution remote sensing image scene classification, the contrastive learning CNN achieves state-of-the-art performance on three benchmark data sets of different scales: the small-scale RSSCN7 data set, the midscale aerial image data set, and the large-scale NWPU-RESISC45 data set.
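The pull-positives-together, push-negatives-apart objective described above is the core of contrastive learning. Below is a numpy sketch of a standard NT-Xent (normalized temperature-scaled cross-entropy) loss over two aligned views, where row i of each view could be an embedding of a different color channel of image i; the paper's exact loss and temperature are not given in the abstract, so this formulation and its values are assumptions.

```python
import numpy as np

def nt_xent(z_a, z_b, tau=0.5):
    """NT-Xent contrastive loss: z_a[i] and z_b[i] are embeddings of two
    views of the same image (the positive pair); every other embedding in
    the batch acts as a negative. Lower loss = views of the same image are
    closer together than views of different images."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    n = len(z_a)
    z = np.concatenate([z_a, z_b])            # 2n unit embeddings
    sim = z @ z.T / tau                       # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity
    # the positive for index i is its counterpart in the other view
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(2 * n), pos].mean()

# sanity check: aligned views should score lower loss than random pairings
rng = np.random.default_rng(3)
z_a = rng.normal(size=(32, 16))
loss_matched = nt_xent(z_a, z_a + 0.01 * rng.normal(size=(32, 16)))
loss_random = nt_xent(z_a, rng.normal(size=(32, 16)))
```

Minimizing this loss over a CNN encoder yields class-specific features without any labels; labels enter only afterwards, when the lightweight classifier is fit on the frozen features.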
Enhanced Lunar Topographic Mapping Using Multiple Stereo Images Taken by Yutu-2 Rover with Changing Illumination Conditions
W. Wan, Jia Wang, K. Di, Jian Li, Zhaoqin Liu, Peng Man, Yexin Wang, Tzuyang Yu, Chuankai Liu, Lichun Li
Pub Date: 2021-08-01 | DOI: 10.14358/pers.87.8.567
In planetary-rover exploration missions, stereovision-based 3D reconstruction has been widely applied to topographic mapping of the planetary surface using stereo cameras onboard the rover. In this study, we propose an enhanced topographic mapping method based on multiple stereo images taken at the same rover location under changing illumination conditions. Key steps of the method include dense matching of stereo images, 3D point-cloud generation, point-cloud co-registration, and fusion. The final point cloud has more complete coverage and more terrain detail than one conventionally generated from a single stereo pair. The effectiveness of the proposed method is verified by experiments with the Yutu-2 rover, in which two data sets were acquired by the navigation cameras at two locations under changing illumination conditions. The method, which does not involve complex operations, has great potential for application in planetary-rover and lander missions.
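Of the pipeline steps listed, co-registration and fusion are the most self-contained to illustrate. The sketch below uses the Kabsch algorithm (least-squares rigid alignment given correspondences) for co-registration and simple voxel deduplication for fusion; the paper's actual registration and fusion algorithms are not specified in the abstract, so both choices here are illustrative stand-ins.

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch algorithm: least-squares rotation R and translation t with
    R @ src[i] + t ~= dst[i], given known point correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)         # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

def fuse(clouds, voxel=0.05):
    """Fuse co-registered clouds by keeping one point per occupied voxel."""
    pts = np.vstack(clouds)
    keys = np.round(pts / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(idx)]

# recover a known rigid transform from synthetic correspondences
rng = np.random.default_rng(4)
src = rng.normal(size=(100, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
fused_once = fuse([dst])
fused_twice = fuse([dst, dst])               # duplicates collapse after fusion
```

In practice, correspondences between clouds from different illumination conditions would come from feature matching or iterative closest point, with Kabsch solving the alignment at each step.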
Remote Sensing Time Series Image Processing. First Edition
Qihao Weng
Pub Date: 2021-08-01 | DOI: 10.14358/pers.87.8.545
In terms of current interest, the expression "change detection" signifies one of the premier applications of remote sensing. This book makes a momentous and substantial contribution showcasing the fundamentals of select processing techniques used with imagery time series at both coarse and fine spatiotemporal resolutions. In doing so, it also provides substantive examples of real-world applications for some of the algorithms. Often, as exemplified in this book, the existence of disparate sensor data at varying spectral, temporal, and spatial resolutions results in the creation of synthetic or fusion images and simulated time series.
Semi-Centennial of Landsat Observations & Pending Landsat 9 Launch
S. Goward, J. Masek, T. Loveland, J. Dwyer, Darrel L. Williams, T. Arvidson, L. Rocchio, J. Irons
Pub Date: 2021-08-01 | DOI: 10.14358/pers.87.8.533
The first Landsat was placed in orbit on 23 July 1972, followed by a series of missions that have provided nearly continuous, two-satellite, 8-day-repeat image coverage of the Earth's land areas for the last half-century. These observations have substantially enhanced our understanding of the Earth's terrestrial dynamics: a major element of the Earth's physical system, the primary home of humans, and the major source of the resources that support them. The history of Landsat is complex, reflective of the human systems that sustain it. Despite conflicting perspectives on the continuation of the program, Landsat has survived based on worldwide recognition of its critical contributions to understanding land dynamics, managing natural resources, and advancing Earth system science. Launch of Landsat 9 is anticipated in fall 2021, and planning for the next generation, Landsat Next, is well underway. The community of Landsat data users is looking forward to another 50 years of the Landsat program.