
Photogrammetric Engineering and Remote Sensing: Latest Publications

Effectiveness of Deep Learning Trained on SynthCity Data for Urban Point-Cloud Classification
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-02-01 DOI: 10.14358/pers.21-00021r2
Steven Spiegel, Casey Shanks, Jorge Chen
3D object recognition is one of the most popular areas of study in computer vision. Many of the more recent algorithms focus on indoor point clouds, classifying 3D geometric objects, and segmenting outdoor 3D scenes. One of the challenges of the classification pipeline is finding adequate and accurate training data. Hence, this article seeks to evaluate the accuracy of a synthetically generated data set called SynthCity, tested on two mobile laser-scan data sets. Varying levels of noise were applied to the training data to reflect varying levels of noise in different scanners. The chosen deep-learning algorithm was Kernel Point Convolution, a convolutional neural network that uses kernel points in Euclidean space for convolution weights.
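The kernel-point convolution operator at the core of the chosen network can be summarized compactly: each neighbor of an output point contributes features weighted by a distance-based influence ("linear correlation") to a set of kernel points placed in Euclidean space, and each kernel point carries its own weight matrix. The NumPy sketch below illustrates that idea under assumed shapes, kernel-point placement, and influence radius; it is not the authors' implementation.

```python
# Minimal sketch of the kernel point convolution idea; all dimensions,
# kernel-point positions, and the influence radius sigma are illustrative.
import numpy as np

def kpconv_point(center, neighbors, feats, kernel_pts, weights, sigma=0.3):
    """Convolve one point's neighborhood features with kernel points.

    center:     (3,) position of the output point
    neighbors:  (N, 3) positions of its N neighbors
    feats:      (N, C_in) input features of those neighbors
    kernel_pts: (K, 3) kernel point positions relative to the center
    weights:    (K, C_in, C_out) one weight matrix per kernel point
    sigma:      influence radius of each kernel point
    """
    rel = neighbors - center                                  # neighbor offsets, (N, 3)
    # Distance-based influence of each kernel point on each neighbor, (N, K)
    dist = np.linalg.norm(rel[:, None, :] - kernel_pts[None, :, :], axis=-1)
    influence = np.maximum(0.0, 1.0 - dist / sigma)
    # Weighted sum over neighbors and kernel points -> (C_out,)
    out = np.zeros(weights.shape[-1])
    for k in range(kernel_pts.shape[0]):
        out += (influence[:, k:k + 1] * feats).sum(axis=0) @ weights[k]
    return out

# Toy usage with random data
rng = np.random.default_rng(0)
center = np.zeros(3)
neighbors = rng.normal(scale=0.2, size=(16, 3))
feats = rng.normal(size=(16, 4))                  # C_in = 4
kernel_pts = rng.normal(scale=0.3, size=(8, 3))   # K = 8 kernel points
weights = rng.normal(size=(8, 4, 6))              # C_out = 6
print(kpconv_point(center, neighbors, feats, kernel_pts, weights).shape)  # (6,)
```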
{"title":"Effectiveness of Deep Learning Trained on SynthCity Data for Urban Point-Cloud Classification","authors":"Steven Spiegel, Casey Shanks, Jorge Chen","doi":"10.14358/pers.21-00021r2","DOIUrl":"https://doi.org/10.14358/pers.21-00021r2","url":null,"abstract":"3D object recognition is one of the most popular areas of study in computer vision. Many of the more recent algorithms focus on indoor point clouds, classifying 3D geometric objects, and segmenting outdoor 3D scenes. One of the challenges of the classification pipeline is finding adequate\u0000 and accurate training data. Hence, this article seeks to evaluate the accuracy of a synthetically generated data set called SynthCity, tested on two mobile laser-scan data sets. Varying levels of noise were applied to the training data to reflect varying levels of noise in different scanners.\u0000 The chosen deep-learning algorithm was Kernel Point Convolution, a convolutional neural network that uses kernel points in Euclidean space for convolution weights.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"1 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79270818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Augmented Sample-Based Real-Time Spatiotemporal Spectral Unmixing
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-01-01 DOI: 10.14358/pers.21-00039r2
Xinyu Ding, Qunming Wang
Recently, the method of spatiotemporal spectral unmixing (STSU) was developed to fully explore multi-scale temporal information (e.g., MODIS–Landsat image pairs) for spectral unmixing of coarse time series (e.g., MODIS data). To further enhance the application for timely monitoring, the real-time STSU (RSTSU) method was developed for real-time data. In RSTSU, we usually choose a spatially complete MODIS–Landsat image pair as auxiliary data. Due to cloud contamination, the temporal distance between the required effective auxiliary data and the real-time data to be unmixed can be large, causing great land cover changes and uncertainty in the extracted unchanged pixels (i.e., training samples). In this article, to extract more reliable training samples, we propose choosing the auxiliary MODIS–Landsat data temporally closest to the prediction time. To deal with the cloud contamination in the auxiliary data, we propose an augmented sample-based RSTSU (ARSTSU) method. ARSTSU selects and augments the training samples extracted from the valid (i.e., non-cloud) area to synthesize more training samples, and then trains an effective learning model to predict the proportions. ARSTSU was validated using two MODIS data sets in the experiments. ARSTSU expands the applicability of RSTSU by solving the problem of cloud contamination in temporal neighbors in actual situations.
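As described above, the training samples come from the cloud-free area of the auxiliary MODIS–Landsat pair and are augmented before a learning model is fit to predict class proportions. The sketch below illustrates that sample-construction step under simplifying assumptions: the regressor choice (a random forest), the Gaussian noise augmentation, and all array shapes are placeholders rather than the authors' configuration.

```python
# Sketch of extracting and augmenting proportion-training samples from the
# cloud-free part of an auxiliary coarse/fine image pair; shapes and the
# regressor are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def build_training_set(modis_spectra, landsat_proportions, cloud_free, n_aug=3, noise=0.01):
    """modis_spectra: (P, B) coarse-pixel spectra; landsat_proportions: (P, C)
    per-class proportions aggregated from the fine classification;
    cloud_free: (P,) boolean mask of valid coarse pixels."""
    X, y = modis_spectra[cloud_free], landsat_proportions[cloud_free]
    # Augment: replicate valid samples with small Gaussian spectral perturbations
    rng = np.random.default_rng(42)
    X_aug = np.concatenate([X] + [X + rng.normal(0, noise, X.shape) for _ in range(n_aug)])
    y_aug = np.concatenate([y] * (n_aug + 1))
    return X_aug, y_aug

# Toy example: 500 coarse pixels, 7 bands, 4 land-cover classes
rng = np.random.default_rng(0)
spectra = rng.random((500, 7))
props = rng.dirichlet(np.ones(4), size=500)   # class proportions sum to 1
valid = rng.random(500) > 0.2                 # ~80% of pixels cloud-free
X, y = build_training_set(spectra, props, valid)
model = RandomForestRegressor(n_estimators=100).fit(X, y)
print(model.predict(spectra[:5]).shape)       # (5, 4) predicted proportions
```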
{"title":"Augmented Sample-Based Real-Time Spatiotemporal Spectral Unmixing","authors":"Xinyu Ding, Qunming Wang","doi":"10.14358/pers.21-00039r2","DOIUrl":"https://doi.org/10.14358/pers.21-00039r2","url":null,"abstract":"Recently, the method of spatiotemporal spectral unmixing (STSU ) was developed to fully explore multi-scale temporal information (e.g., MODIS –Landsat image pairs) for spectral unmixing of coarse time series (e.g., MODIS data). To further enhance the application for timely monitoring,\u0000 the real-time STSU( RSTSU) method was developed for real-time data. In RSTSU, we usually choose a spatially complete MODIS–Landsat image pair as auxiliary data. Due to cloud contamination, the temporal distance between the required effective auxiliary data and the real-time data to be\u0000 unmixed can be large, causing great land cover changes and uncertainty in the extracted unchanged pixels (i.e., training samples). In this article, to extract more reliable training samples, we propose choosing the auxiliary MODIS–Landsat data temporally closest to the prediction time.\u0000 To deal with the cloud contamination in the auxiliary data, we propose an augmented sample-based RSTSU( ARSTSU) method. ARSTSU selects and augments the training samples extracted from the valid (i.e., non-cloud) area to synthesize more training samples, and then trains an effective learning\u0000 model to predict the proportions. ARSTSU was validated using two MODIS data sets in the experiments. ARSTSU expands the applicability of RSTSU by solving the problem of cloud contamination in temporal neighbors in actual situations.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"22 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86334604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Object and Pattern Recognition in Remote Sensing
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-01-01 DOI: 10.14358/pers.88.1.9
S. Hinz, A. Braun, M. Weinmann
{"title":"Object and Pattern Recognition in Remote Sensing","authors":"S. Hinz, A. Braun, M. Weinmann","doi":"10.14358/pers.88.1.9","DOIUrl":"https://doi.org/10.14358/pers.88.1.9","url":null,"abstract":"","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"74 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82568745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Examining the Integration of Landsat Operational Land Imager with Sentinel-1 and Vegetation Indices in Mapping Southern Yellow Pines (Loblolly, Shortleaf, and Virginia Pines)
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-01-01 DOI: 10.14358/pers.21-00024r2
C. Akumu, Ezechirinum Amadi
The mapping of southern yellow pines (loblolly, shortleaf, and Virginia pines) is important to supporting forest inventory and the management of forest resources. The overall aim of this study was to examine the integration of Landsat Operational Land Imager (OLI) optical data with Sentinel-1 microwave C-band satellite data and vegetation indices in mapping the canopy cover of southern yellow pines. Specifically, this study assessed the overall mapping accuracies of the canopy cover classification of southern yellow pines derived using four data-integration scenarios: (1) Landsat OLI alone; (2) Landsat OLI and Sentinel-1; (3) Landsat OLI with vegetation indices derived from satellite data (normalized difference vegetation index, soil-adjusted vegetation index, modified soil-adjusted vegetation index, transformed soil-adjusted vegetation index, and infrared percentage vegetation index); and (4) Landsat OLI with Sentinel-1 and vegetation indices. The results showed that the integration of Landsat OLI reflectance bands with Sentinel-1 backscattering coefficients and vegetation indices yielded the best overall classification accuracy, about 77%, and standalone Landsat OLI the weakest accuracy, approximately 67%. The findings in this study demonstrate that the addition of backscattering coefficients from Sentinel-1 and vegetation indices positively contributed to the mapping of southern yellow pines.
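The five vegetation indices listed above are simple functions of the red and near-infrared reflectance bands. The sketch below computes them from their standard published definitions; the SAVI adjustment factor and the TSAVI soil-line slope, intercept, and X parameter are common default values assumed here, not values taken from the study.

```python
# Standard red/NIR vegetation index formulas; soil-line parameters for TSAVI
# and the SAVI L factor are typical defaults, assumed for illustration.
import numpy as np

def vegetation_indices(red, nir, L=0.5, slope=1.2, intercept=0.04, X=0.08):
    """red, nir: arrays of surface reflectance in [0, 1]. Returns a dict of indices."""
    ndvi = (nir - red) / (nir + red)
    savi = (nir - red) * (1 + L) / (nir + red + L)
    msavi = (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2
    ipvi = nir / (nir + red)                      # equivalent to (NDVI + 1) / 2
    tsavi = (slope * (nir - slope * red - intercept)
             / (red + slope * (nir - intercept) + X * (1 + slope ** 2)))
    return {"NDVI": ndvi, "SAVI": savi, "MSAVI": msavi, "IPVI": ipvi, "TSAVI": tsavi}

# Toy reflectance values for a vegetated pixel and a bare-soil pixel
red = np.array([0.05, 0.20])
nir = np.array([0.45, 0.25])
for name, value in vegetation_indices(red, nir).items():
    print(name, np.round(value, 3))
```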
{"title":"Examining the Integration of Landsat Operational Land Imager with Sentinel-1 and Vegetation Indices in Mapping Southern Yellow Pines (Loblolly, Shortleaf, and Virginia Pines)","authors":"C. Akumu, Ezechirinum Amadi","doi":"10.14358/pers.21-00024r2","DOIUrl":"https://doi.org/10.14358/pers.21-00024r2","url":null,"abstract":"The mapping of southern yellow pines (loblolly, shortleaf, and Virginia pines) is important to supporting forest inventory and the management of forest resources. The overall aim of this study was to examine the integration of Landsat Operational Land Imager (OLI ) optical data with\u0000 Sentinel-1 microwave C-band satellite data and vegetation indices in mapping the canopy cover of southern yellow pines. Specifically, this study assessed the overall mapping accuracies of the canopy cover classification of southern yellow pines derived using four data-integration scenarios:\u0000 Landsat OLI alone; Landsat OLI and Sentinel-1; Landsat OLI with vegetation indices derived from satellite data—normalized difference vegetation index, soil-adjusted vegetation index, modified soil-adjusted vegetation index, transformed soil-adjusted vegetation index, and infrared\u0000 percentage vegetation index; and 4) Landsat OLI with Sentinel-1 and vegetation indices. The results showed that the integration of Landsat OLI reflectance bands with Sentinel-1 backscattering coefficients and vegetation indices yielded the best overall classification accuracy,\u0000 about 77%, and standalone Landsat OLI the weakest accuracy, approximately 67%. The findings in this study demonstrate that the addition of backscattering coefficients from Sentinel-1 and vegetation indices positively contributed to the mapping of southern yellow pines.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"6 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75443470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Grids and Datums Update: This month we look at the Republic of Vanuatu
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-01-01 DOI: 10.14358/pers.88.1.11
C. Mugnier
{"title":"Grids and Datums Update: This month we look at the Republic of Vanuatu","authors":"C. Mugnier","doi":"10.14358/pers.88.1.11","DOIUrl":"https://doi.org/10.14358/pers.88.1.11","url":null,"abstract":"","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"134 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88213326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving Urban Land Cover Mapping with the Fusion of Optical and SAR Data Based on Feature Selection Strategy
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-01-01 DOI: 10.14358/pers.21-00030r2
Qing Ding, Z. Shao, Xiao Huang, O. Altan, Yewen Fan
Taking the Futian District as the research area, this study proposed an effective urban land cover mapping framework fusing optical and SAR data. To simplify the model complexity and improve the mapping results, various feature selection methods were compared and evaluated. The results showed that feature selection can eliminate irrelevant features, increase the mean correlation between features slightly, and improve the classification accuracy and computational efficiency significantly. The recursive feature elimination-support vector machine (RFE-SVM) model obtained the best results, with an overall accuracy of 89.17% and a kappa coefficient of 0.8695. In addition, this study proved that the fusion of optical and SAR data can effectively improve mapping and reduce the confusion between different land covers. The novelty of this study lies in its insight into the merits of multi-source data fusion and feature selection in the land cover mapping process over complex urban environments, and in its evaluation of the performance differences between different feature selection methods.
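The RFE-SVM strategy evaluated above can be reproduced in outline with scikit-learn: a linear SVM supplies feature weights, recursive feature elimination drops the weakest fused optical/SAR features, and a classifier is trained on the retained subset. The sketch below uses synthetic data, an assumed number of retained features, and an assumed final classifier purely for illustration; it is not the authors' pipeline.

```python
# Minimal RFE-SVM feature-selection sketch on a synthetic fused feature table.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))       # 600 samples, 40 fused optical/SAR features (placeholder)
y = rng.integers(0, 5, size=600)     # 5 land-cover classes (placeholder labels)

# A linear SVM provides per-feature weights that RFE uses to drop the weakest features
selector = RFE(estimator=SVC(kernel="linear", C=1.0), n_features_to_select=15, step=2)
clf = make_pipeline(StandardScaler(), selector, SVC(kernel="rbf", C=10, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", scores.mean().round(3))   # near chance here, since the data are random
```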
{"title":"Improving Urban Land Cover Mapping with the Fusion of Optical and SAR Data Based on Feature Selection Strategy","authors":"Qing Ding, Z. Shao, Xiao Huang, O. Altan, Yewen Fan","doi":"10.14358/pers.21-00030r2","DOIUrl":"https://doi.org/10.14358/pers.21-00030r2","url":null,"abstract":"Taking the Futian District as the research area, this study proposed an effective urban land cover mapping framework fusing optical and SAR data. To simplify the model complexity and improve the mapping results, various feature selection methods were compared and evaluated. The results\u0000 showed that feature selection can eliminate irrelevant features, increase the mean correlation between features slightly, and improve the classification accuracy and computational efficiency significantly. The recursive feature elimination-support vector machine (RFE-SVM) model obtained the\u0000 best results, with an overall accuracy of 89.17% and a kappa coefficient of 0.8695, respectively. In addition, this study proved that the fusion of optical and SAR data can effectively improve mapping and reduce the confusion between different land covers. The novelty of this study is with\u0000 the insight into the merits of multi-source data fusion and feature selection in the land cover mapping process over complex urban environments, and to evaluate the performance differences between different feature selection methods.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"12 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81960703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Effect of Locust Invasion and Mitigation Using Remote Sensing Techniques: A Case Study of North Sindh Pakistan
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-01-01 DOI: 10.14358/pers.21-00025r2
Muhammad Nasar Ahmad, Z. Shao, O. Altan
This study comprises the identification of the locust outbreak that happened in February 2020. It is not possible to conduct ground-based surveys to monitor such huge disasters in a timely and adequate manner. Therefore, we used a combination of automatic and manual remote sensing data processing techniques to find out the aftereffects of the locust attack effectively. We processed the MODIS normalized difference vegetation index (NDVI) manually in ENVI and the Landsat 8 NDVI using the Google Earth Engine (GEE) cloud computing platform. We found from the results that (a) NDVI computation on GEE is more effective, prompt, and reliable compared with the results of manual NDVI computations; (b) there is a high effect of locust disasters in the northern part of Sindh, with Thul, Ghari Khairo, Garhi Yaseen, Jacobabad, and Ubauro being more vulnerable; and (c) the NDVI value suddenly decreased from 0.92 to 0.68 in 2020 using Landsat NDVI and from 0.81 to 0.65 using MODIS satellite imagery. Results clearly indicate an abrupt decrease in vegetation in 2020 due to the locust disaster. That is a big threat to crop yield and food production, because agriculture provides a major portion of the food chain and gross domestic product for Sindh, Pakistan.
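A GEE-based NDVI workflow of the kind described above typically filters a Landsat 8 surface-reflectance collection by area, date, and cloud cover, maps a normalized-difference band over it, and reduces the result over the study area. The Python API sketch below follows that pattern; the point geometry standing in for northern Sindh, the date window, the cloud-cover threshold, and the choice of collection are assumptions for illustration, not the authors' exact parameters.

```python
# Sketch of a Landsat 8 NDVI time-window summary in the Google Earth Engine
# Python API; region, dates, and thresholds are placeholders.
import ee

ee.Initialize()  # assumes prior `earthengine authenticate` and a configured project

aoi = ee.Geometry.Point([68.45, 28.28])  # stand-in location in northern Sindh

def add_ndvi(img):
    # Apply Collection 2 Level 2 surface-reflectance scaling, then NDVI from NIR (SR_B5) and red (SR_B4)
    sr = img.select(["SR_B5", "SR_B4"]).multiply(0.0000275).add(-0.2)
    return img.addBands(sr.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI"))

collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
              .filterBounds(aoi)
              .filterDate("2020-01-01", "2020-04-30")
              .filter(ee.Filter.lt("CLOUD_COVER", 20))
              .map(add_ndvi))

mean_ndvi = (collection.select("NDVI")
             .mean()
             .reduceRegion(reducer=ee.Reducer.mean(),
                           geometry=aoi.buffer(5000),
                           scale=30))
print(mean_ndvi.getInfo())
```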
{"title":"Effect of Locust Invasion and Mitigation Using Remote Sensing Techniques: A Case Study of North Sindh Pakistan","authors":"Muhammad Nasar Ahmad, Z. Shao, O. Altan","doi":"10.14358/pers.21-00025r2","DOIUrl":"https://doi.org/10.14358/pers.21-00025r2","url":null,"abstract":"This study comprises the identification of the locust outbreak that happened in February 2020. It is not possible to conduct ground-based surveys to monitor such huge disasters in a timely and adequate manner. Therefore, we used a combination of automatic and manual remote sensing data\u0000 processing techniques to find out the aftereffects of locust attack effectively. We processed MODIS -normalized difference vegetation index (NDVI ) manually on ENVI and Landsat 8 NDVI using the Google Earth Engine (GEE ) cloud computing platform. We found from the results that, (a) NDVI computation\u0000 on GEE is more effective, prompt, and reliable compared with the results of manual NDVI computations; (b) there is a high effect of locust disasters in the northern part of Sindh, Thul, Ghari Khairo, Garhi Yaseen, Jacobabad, and Ubauro, which are more vulnerable; and (c) NDVI value suddenly\u0000 decreased to 0.68 from 0.92 in 2020 using Landsat NDVI and from 0.81 to 0.65 using MODIS satellite imagery. Results clearly indicate an abrupt decrease in vegetation in 2020 due to a locust disaster. That is a big threat to crop yield and food production because it provides a major portion\u0000 of food chain and gross domestic product for Sindh, Pakistan.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"14 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80468727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
GIS Tips & Tricks—Disappearing Layers? – Here's a Quick Fix
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-01-01 DOI: 10.14358/pers.88.1.5
Alma M. Karlin
{"title":"GIS Tips & Tricks—Disappearing Layers? – Here's a Quick Fix","authors":"Alma M. Karlin","doi":"10.14358/pers.88.1.5","DOIUrl":"https://doi.org/10.14358/pers.88.1.5","url":null,"abstract":"","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"22 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80776717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Sensing and Human Factors Research: A Review
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-01-01 DOI: 10.14358/pers.21-00012r2
Raechel A. Portelli, P. Pope
Human experts are integral to the success of computational earth observation. They perform various visual decision-making tasks, from selecting data and training machine-learning algorithms to interpreting accuracy and credibility. Research concerning the various human factors which affect performance has a long history within the fields of earth observation and the military. Shifts in the analytical environment from analog to digital workspaces necessitate continued research, focusing on human-in-the-loop processing. This article reviews the history of human-factors research within the field of remote sensing and suggests a framework for refocusing the discipline's efforts to understand the role that humans play in earth observation.
{"title":"Sensing and Human Factors Research: A Review","authors":"Raechel A. Portelli, P. Pope","doi":"10.14358/pers.21-00012r2","DOIUrl":"https://doi.org/10.14358/pers.21-00012r2","url":null,"abstract":"Human experts are integral to the success of computational earth observation. They perform various visual decision-making tasks, from selecting data and training machine-learning algorithms to interpreting accuracy and credibility. Research concerning the various human factors which\u0000 affect performance has a long history within the fields of earth observation and the military. Shifts in the analytical environment from analog to digital workspaces necessitate continued research, focusing on human-in-the-loop processing. This article reviews the history of human-factors\u0000 research within the field of remote sensing and suggests a framework for refocusing the discipline's efforts to understand the role that humans play in earth observation.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"30 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75198542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-View Urban Scene Classification with a Complementary-Information Learning Model
IF 1.3 CAS Q4 Earth Science Q4 GEOGRAPHY, PHYSICAL Pub Date: 2022-01-01 DOI: 10.14358/pers.21-00062r2
Wanxuan Geng, Weixun Zhou, Shuanggen Jin
Traditional urban scene-classification approaches focus on images taken either by satellite or in aerial view. Although single-view images are able to achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views is needed to further improve performance. Therefore, we present a complementary information-learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, the proposed CILM takes aerial and ground-level image pairs as input to learn view-specific features for later fusion to integrate the complementary information. To train CILM, a unified loss consisting of cross entropy and contrastive losses is exploited to force the network to be more robust. Once CILM is trained, the features of each view are extracted via the two proposed feature-extraction scenarios and then fused to train the support vector machine classifier for classification. The experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that it is an effective model for learning complementary information and thus improving urban scene classification.
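The unified loss mentioned above combines a cross-entropy term on each view's prediction with a contrastive term over aerial/ground feature pairs. The PyTorch sketch below shows one plausible form of such a loss; the margin, weighting factor, network-free setup, and tensor shapes are illustrative assumptions rather than the CILM authors' exact formulation.

```python
# Sketch of a unified cross-entropy + contrastive loss for paired aerial and
# ground-level views; margin, alpha, and shapes are illustrative assumptions.
import torch
import torch.nn.functional as F

def unified_loss(feat_air, feat_gnd, logits_air, logits_gnd, labels, margin=1.0, alpha=0.5):
    """feat_*: (B, D) view-specific features; logits_*: (B, num_classes);
    labels: (B,) scene labels shared by each aerial/ground pair."""
    # Cross-entropy on each view's class prediction
    ce = F.cross_entropy(logits_air, labels) + F.cross_entropy(logits_gnd, labels)
    # Pairwise distances between aerial and ground features across the batch
    dist = torch.cdist(feat_air, feat_gnd)                  # (B, B)
    match = torch.eye(len(labels), device=dist.device)      # 1 on matching pairs
    # Contrastive term: matching pairs minimize distance, others respect the margin
    contrastive = (match * dist.pow(2)
                   + (1 - match) * F.relu(margin - dist).pow(2)).mean()
    return ce + alpha * contrastive

# Toy usage with random tensors: batch of 8 pairs, 128-D features, 10 classes
B, D, C = 8, 128, 10
loss = unified_loss(torch.randn(B, D), torch.randn(B, D),
                    torch.randn(B, C), torch.randn(B, C),
                    torch.randint(0, C, (B,)))
print(loss.item())
```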
{"title":"Multi-View Urban Scene Classification with a Complementary-Information Learning Model","authors":"Wanxuan Geng, Weixun Zhou, Shuanggen Jin","doi":"10.14358/pers.21-00062r2","DOIUrl":"https://doi.org/10.14358/pers.21-00062r2","url":null,"abstract":"Traditional urban scene-classification approaches focus on images taken either by satellite or in aerial view. Although single-view images are able to achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views\u0000 is needed to further improve performance. Therefore, we present a complementary information-learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, the proposed CILM takes aerial and ground-level image pairs as input to\u0000 learn view-specific features for later fusion to integrate the complementary information. To train CILM, a unified loss consisting of cross entropy and contrastive losses is exploited to force the network to be more robust. Once CILM is trained, the features of each view are\u0000 extracted via the two proposed feature-extraction scenarios and then fused to train the support vector machine classifier for classification. The experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that\u0000 it is an effective model for learning complementary information and thus improving urban scene classification.","PeriodicalId":49702,"journal":{"name":"Photogrammetric Engineering and Remote Sensing","volume":"50 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77610927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4