Remote Sensing Image Classification Via Vision Transformer and Transfer Learning

M. Khan, Muhammad Rajwana
{"title":"基于视觉变换和迁移学习的遥感图像分类","authors":"M. Khan, Muhammad Rajwana","doi":"10.15849/ijasca.220328.14","DOIUrl":null,"url":null,"abstract":"Abstract Aerial scene classification, which aims to automatically tag an aerial image with a specific semantic category, is a fundamental problem for understanding high-resolution remote sensing imagery. The classification of remote sensing image scenes can provide significant value, from forest fire monitoring to land use and land cover classification. From the first aerial photographs of the early 20th century to today's satellite imagery, the amount of remote sensing data has increased geometrically with higher resolution. The need to analyze this modern digital data has motivated research to accelerate the classification of remotely sensed images. Fortunately, the computer vision community has made great strides in classifying natural images. Transformers first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformers to computer vision tasks. In a variety of visual benchmarks, transformer-based models perform similar to or better than other types of networks such as convolutional and recurrent networks. Given its high performance and less need for vision-specific inductive bias, the transformer is receiving more and more attention from the computer vision community. In this paper, we provide a systematic review of the Transfer Learning and Transformer techniques for scene classification using AID datasets. Both approaches give an accuracy of 80% and 84%, for the AID dataset. Keywords: remote sensing, vision transformers, transfer learning, classification accuracy","PeriodicalId":38638,"journal":{"name":"International Journal of Advances in Soft Computing and its Applications","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Remote Sensing Image Classification Via Vision Transformer and Transfer Learning\",\"authors\":\"M. Khan, Muhammad Rajwana\",\"doi\":\"10.15849/ijasca.220328.14\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Aerial scene classification, which aims to automatically tag an aerial image with a specific semantic category, is a fundamental problem for understanding high-resolution remote sensing imagery. The classification of remote sensing image scenes can provide significant value, from forest fire monitoring to land use and land cover classification. From the first aerial photographs of the early 20th century to today's satellite imagery, the amount of remote sensing data has increased geometrically with higher resolution. The need to analyze this modern digital data has motivated research to accelerate the classification of remotely sensed images. Fortunately, the computer vision community has made great strides in classifying natural images. Transformers first applied to the field of natural language processing, is a type of deep neural network mainly based on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply transformers to computer vision tasks. 
In a variety of visual benchmarks, transformer-based models perform similar to or better than other types of networks such as convolutional and recurrent networks. Given its high performance and less need for vision-specific inductive bias, the transformer is receiving more and more attention from the computer vision community. In this paper, we provide a systematic review of the Transfer Learning and Transformer techniques for scene classification using AID datasets. Both approaches give an accuracy of 80% and 84%, for the AID dataset. Keywords: remote sensing, vision transformers, transfer learning, classification accuracy\",\"PeriodicalId\":38638,\"journal\":{\"name\":\"International Journal of Advances in Soft Computing and its Applications\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-03-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Advances in Soft Computing and its Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.15849/ijasca.220328.14\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"Computer Science\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Advances in Soft Computing and its Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.15849/ijasca.220328.14","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Computer Science","Score":null,"Total":0}
Citations: 0

Abstract

Aerial scene classification, which aims to automatically tag an aerial image with a specific semantic category, is a fundamental problem in understanding high-resolution remote sensing imagery. Classifying remote sensing image scenes provides significant value in applications ranging from forest fire monitoring to land use and land cover classification. From the first aerial photographs of the early 20th century to today's satellite imagery, the volume of remote sensing data has grown geometrically, at ever higher resolution. The need to analyze this modern digital data has motivated research into accelerating the classification of remotely sensed images. Fortunately, the computer vision community has made great strides in classifying natural images. The transformer, first applied in natural language processing, is a type of deep neural network based mainly on the self-attention mechanism. Thanks to its strong representation capability, researchers are exploring ways to apply transformers to computer vision tasks. On a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks, such as convolutional and recurrent networks. Given its high performance and reduced need for vision-specific inductive bias, the transformer is receiving growing attention from the computer vision community. In this paper, we provide a systematic review of transfer learning and transformer techniques for scene classification on the AID dataset. The two approaches achieve accuracies of 80% and 84%, respectively, on the AID dataset.

Keywords: remote sensing, vision transformers, transfer learning, classification accuracy
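The abstract compares a transfer-learning baseline against a vision-transformer model on the AID benchmark (80% and 84% accuracy for the two approaches). As a rough illustration of how such a fine-tuning pipeline can be set up, and not a reconstruction of the authors' implementation, the sketch below loads an ImageNet-pretrained ViT via the timm library and fine-tunes it on an AID-style folder of scene images. The model name, data layout, and hyperparameters are illustrative assumptions.

```python
# Minimal transfer-learning sketch: fine-tune an ImageNet-pretrained ViT on
# AID-style scene images. Paths, hyperparameters, and the timm model name are
# assumptions for illustration, not details taken from the paper.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import timm

NUM_CLASSES = 30          # AID contains 30 aerial scene categories
BATCH_SIZE = 32
EPOCHS = 5
DATA_DIR = "AID/train"    # assumed ImageFolder layout: AID/train/<class_name>/*.jpg

# Resize aerial scenes to the 224x224 input expected by the pretrained ViT.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder(DATA_DIR, transform=transform)
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

# Load a ViT pretrained on ImageNet; timm replaces the classification head
# with a fresh 30-way layer, which is what transfer learning amounts to here.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=NUM_CLASSES).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)

model.train()
for epoch in range(EPOCHS):
    correct, total, running_loss = 0, 0, 0.0
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch + 1}: loss={running_loss / total:.4f} "
          f"acc={correct / total:.4f}")
```

A convolutional backbone (e.g. a pretrained ResNet) could be fine-tuned in exactly the same way to reproduce a transfer-learning baseline for comparison with the transformer.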
Source journal
International Journal of Advances in Soft Computing and its Applications (Computer Science: Computer Science Applications)
CiteScore: 3.30
Self-citation rate: 0.00%
Articles per year: 31
Journal description: The aim of this journal is to provide a lively forum for the communication of original research papers and timely review articles on Advances in Soft Computing and Its Applications. IJASCA will publish only articles of the highest quality. Submissions will be evaluated on their originality and significance. IJASCA invites submissions in all areas of Soft Computing and Its Applications. The scope of the journal includes, but is not limited to:
√ Soft Computing Fundamentals and Optimization
√ Soft Computing for the Big Data Era
√ GPU Computing for Machine Learning
√ Soft Computing Modeling for Perception and Spiritual Intelligence
√ Soft Computing and Agents Technology
√ Soft Computing in Computer Graphics
√ Soft Computing and Pattern Recognition
√ Soft Computing in Biomimetic Pattern Recognition
√ Data Mining for Social Network Data
√ Spatial Data Mining & Information Retrieval
√ Intelligent Software Agent Systems and Architectures
√ Advanced Soft Computing and Multi-Objective Evolutionary Computation
√ Perception-Based Intelligent Decision Systems
√ Spiritual-Based Intelligent Systems
√ Soft Computing in Industry Applications
Other issues related to the advances of Soft Computing in various applications.