A large-scale remote sensing scene dataset construction for semantic segmentation
LeiLei Xu, Shanqiu Shi, Yujun Liu, Hao Zhang, Dan Wang, Lu Zhang, Wan Liang, Hao Chen
International Journal of Image and Data Fusion, Vol. 14, No. 1, pp. 299–323
Published: 2023-04-10
DOI: 10.1080/19479832.2023.2199005
Citations: 0
Abstract
Fuelled by advances in deep learning for computer vision, applications of this technology in other fields have grown rapidly. It is increasingly applied to the interpretation of remote sensing images, with high potential economic and societal significance, for example in automatic land-cover mapping. However, such models require a large number of training samples, and progress is currently hampered by the lack of a large-scale dataset. Moreover, labelling samples is time-consuming and laborious, and no complete land classification system suited to deep learning has yet been established. These limitations hinder the development and application of deep learning in remote sensing. To meet these data needs, this study develops JSsampleP, a large-scale dataset for semantic segmentation comprising 110,170 samples that cover various scene categories within Jiangsu Province, China. The existing Geographical Condition Dataset (GCD) and Basic Surveying and Mapping Dataset (BSMD) of Jiangsu were fully utilised, significantly reducing the cost of labelling samples. The samples were then subjected to a rigorous cleaning process to ensure data quality. Finally, the accuracy of the dataset is verified using the U-Net model, and future versions will be optimised continuously.
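The abstract does not include code, but the accuracy verification it describes typically compares predicted segmentation masks against labelled ground truth. As a rough illustration (not the authors' actual evaluation pipeline), the sketch below computes two standard segmentation metrics, pixel accuracy and per-class intersection-over-union, on toy masks:

```python
import numpy as np

def pixel_accuracy(pred, gt):
    # Fraction of pixels whose predicted class matches the ground truth.
    return (pred == gt).mean()

def per_class_iou(pred, gt, num_classes):
    # Intersection-over-union for each class label; NaN when a class
    # appears in neither mask.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union if union > 0 else float("nan"))
    return ious

# Toy 4x4 masks with 2 classes (illustrative only, not JSsampleP data).
gt = np.array([[0, 0, 1, 1]] * 4)
pred = np.array([[0, 0, 1, 0]] * 4)
print(pixel_accuracy(pred, gt))        # → 0.75
print(per_class_iou(pred, gt, 2))      # → [0.666..., 0.5]
```

In practice these metrics would be accumulated over all 110,170 samples; mean IoU across classes is the figure most commonly reported for U-Net-style models.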
About the Journal
International Journal of Image and Data Fusion provides a single source of information for all aspects of image and data fusion methodologies, developments, techniques and applications. Image and data fusion techniques are important for combining the many sources of satellite, airborne and ground based imaging systems, and integrating these with other related data sets for enhanced information extraction and decision making. Image and data fusion aims at the integration of multi-sensor, multi-temporal, multi-resolution and multi-platform image data, together with geospatial data, GIS, in-situ, and other statistical data sets for improved information extraction, as well as to increase the reliability of the information. This leads to more accurate information that provides for robust operational performance, i.e. increased confidence, reduced ambiguity and improved classification enabling evidence based management.

The journal welcomes original research papers, review papers, shorter letters, technical articles, book reviews and conference reports in all areas of image and data fusion including, but not limited to, the following aspects and topics:

• Automatic registration/geometric aspects of fusing images with different spatial, spectral, temporal resolutions; phase information; or acquired in different modes
• Pixel, feature and decision level fusion algorithms and methodologies
• Data Assimilation: fusing data with models
• Multi-source classification and information extraction
• Integration of satellite, airborne and terrestrial sensor systems
• Fusing temporal data sets for change detection studies (e.g. for Land Cover/Land Use Change studies)
• Image and data mining from multi-platform, multi-source, multi-scale, multi-temporal data sets (e.g. geometric information, topological information, statistical information, etc.)