Multi-View Urban Scene Classification with a Complementary-Information Learning Model

IF 1.0 | JCR Q4 (Geography, Physical) | SCI Region 4 (Earth Sciences) | Photogrammetric Engineering and Remote Sensing | Published: 2022-01-01 | DOI: 10.14358/pers.21-00062r2
Wanxuan Geng, Weixun Zhou, Shuanggen Jin
Citations: 4

Abstract

Traditional urban scene-classification approaches focus on images taken from either satellite or aerial viewpoints. Although single-view images achieve satisfactory results for scene classification in most situations, the complementary information provided by other image views is needed to further improve performance. Therefore, we present a complementary-information learning model (CILM) to perform multi-view scene classification of aerial and ground-level images. Specifically, the proposed CILM takes aerial and ground-level image pairs as input to learn view-specific features that are later fused to integrate the complementary information. To train CILM, a unified loss consisting of cross-entropy and contrastive losses is exploited to make the network more robust. Once CILM is trained, the features of each view are extracted via the two proposed feature-extraction scenarios and then fused to train a support vector machine classifier. The experimental results on two publicly available benchmark data sets demonstrate that CILM achieves remarkable performance, indicating that it is an effective model for learning complementary information and thus improving urban scene classification.
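The abstract describes two ingredients that can be illustrated concretely: a unified training loss combining cross-entropy with a contrastive term on aerial/ground feature pairs, and feature-level fusion ahead of a downstream classifier. Below is a minimal NumPy sketch of how such a loss and fusion could look; the margin-based contrastive form, the weight `lam`, and concatenation fusion are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for a single sample (illustrative helper)."""
    shifted = logits - logits.max()  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[label]

def contrastive(f_a, f_g, same_scene, margin=1.0):
    """Margin-based contrastive loss on an aerial/ground feature pair.

    Pulls features of the same scene together; pushes features of
    different scenes apart until they are at least `margin` away.
    """
    d = np.linalg.norm(f_a - f_g)
    if same_scene:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

def unified_loss(logits_a, logits_g, label, f_a, f_g, same_scene, lam=0.5):
    """Unified loss: per-view cross-entropy plus a weighted contrastive term.

    `lam` is an assumed weighting hyperparameter, not taken from the paper.
    """
    ce = cross_entropy(logits_a, label) + cross_entropy(logits_g, label)
    return ce + lam * contrastive(f_a, f_g, same_scene)

def fuse(f_a, f_g):
    """Feature-level fusion by concatenation (one plausible fusion scheme)."""
    return np.concatenate([f_a, f_g])
```

The fused vectors from `fuse` would then serve as training inputs to an off-the-shelf SVM classifier, as the abstract indicates.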
Source Journal
Photogrammetric Engineering and Remote Sensing
Category: Geosciences, Imaging Science & Photographic Technology
CiteScore: 1.70
Self-citation rate: 15.40%
Articles published per year: 89
Review time: 9 months
About the Journal: Photogrammetric Engineering & Remote Sensing, commonly referred to as PE&RS, is the official journal of imaging and geospatial information science and technology. Included in the journal on a regular basis are highlight articles, such as the popular columns "Grids & Datums" and "Mapping Matters", and peer-reviewed technical papers. We publish thousands of documents, reports, codes, and informational articles in and about the industries relating to geospatial sciences, remote sensing, photogrammetry, and other imaging sciences.
Latest Articles from this Journal
A Powerful Correspondence Selection Method for Point Cloud Registration Based on Machine Learning
Identification of Critical Urban Clusters for Placating Urban Heat Island Effects over Fast-Growing Tropical City Regions: Estimating the Contribution of Different City Sizes in Escalating UHI Intensity
A Novel Object Detection Method for Solid Waste Incorporating a Weighted Deformable Convolution
GIS Tips & Tricks ‐ Relationships Count when Mapping?
An Integrated Approach for Wildfire Photography Telemetry using WRF Numerical Forecast Products