Title: Unsupervised self-training method based on deep learning for soil moisture estimation using synergy of Sentinel-1 and Sentinel-2 images
Authors: A. Ben Abbes, N. Jarray
Journal: International Journal of Image and Data Fusion (JCR Q3, Remote Sensing)
DOI: 10.1080/19479832.2022.2106317
Published: 2022-07-31 (Journal Article)
Citations: 9
Abstract
Here, we present a novel unsupervised self-training method (USTM) for soil moisture (SM) estimation. First, a machine-learning model is trained on the labeled and unlabeled data. A second model then generates pseudo-labeled data by adding proxy-labeled samples. Finally, SM is estimated by a third model trained on the pseudo-labeled data generated by the second model together with the unlabeled data; the final SM estimate is obtained from this third model, which operates as an unsupervised learning model. In-situ measurements are then used to validate the method. Experiments were carried out at two sites in southern Tunisia using Sentinel-1A and Sentinel-2A data. The input data include the backscatter coefficient in two polarization modes, derived from Sentinel-1A, the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Infrared Index (NDII) from Sentinel-2A, and in-situ data. The USTM variant based on the (Random Forest (RF)-Convolutional Neural Network (CNN)-CNN) combination achieved the best performance and precision, compared with the (Artificial Neural Network (ANN)-CNN-CNN) and (eXtreme Gradient Boosting (XGBoost)-CNN-CNN) combinations.
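The multi-stage self-training idea described in the abstract (train an initial model, generate pseudo-labels for unlabeled data, then train a final estimator on the enlarged set) can be sketched generically. This is a minimal illustration of pseudo-labeling with scikit-learn, using Random Forest for both the teacher and student stages rather than the paper's RF-CNN-CNN combination; the synthetic four-feature inputs standing in for the Sentinel-1 backscatter and Sentinel-2 NDVI/NDII features are assumptions for demonstration only.

```python
# Hypothetical sketch of self-training for regression (not the authors' exact USTM).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: 4 features per sample (e.g. two backscatter
# polarizations plus NDVI and NDII) and a scalar soil-moisture target.
X_labeled = rng.normal(size=(50, 4))
y_labeled = X_labeled @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(scale=0.05, size=50)
X_unlabeled = rng.normal(size=(200, 4))

# Stage 1: train a first (teacher) model on the labeled data.
teacher = RandomForestRegressor(n_estimators=50, random_state=0)
teacher.fit(X_labeled, y_labeled)

# Stage 2: use the teacher to generate pseudo-labels for the unlabeled pool.
pseudo_y = teacher.predict(X_unlabeled)

# Stage 3: train a final (student) model on labeled + pseudo-labeled data
# and use it to produce the soil-moisture estimates.
X_all = np.vstack([X_labeled, X_unlabeled])
y_all = np.concatenate([y_labeled, pseudo_y])
student = RandomForestRegressor(n_estimators=50, random_state=0)
student.fit(X_all, y_all)

estimate = student.predict(X_unlabeled[:5])
```

In the paper the stages use different model families (RF or ANN/XGBoost for the first stage, CNNs for the later ones), but the data flow — labeled set, pseudo-labels, final estimator — follows the same pattern.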
Journal introduction:
International Journal of Image and Data Fusion provides a single source of information for all aspects of image and data fusion methodologies, developments, techniques and applications. Image and data fusion techniques are important for combining the many sources of satellite, airborne and ground-based imaging systems, and integrating these with other related data sets for enhanced information extraction and decision making. Image and data fusion aims at the integration of multi-sensor, multi-temporal, multi-resolution and multi-platform image data, together with geospatial data, GIS, in-situ, and other statistical data sets for improved information extraction, as well as to increase the reliability of the information. This leads to more accurate information that provides for robust operational performance, i.e. increased confidence, reduced ambiguity and improved classification enabling evidence-based management. The journal welcomes original research papers, review papers, shorter letters, technical articles, book reviews and conference reports in all areas of image and data fusion including, but not limited to, the following aspects and topics:
• Automatic registration/geometric aspects of fusing images with different spatial, spectral, temporal resolutions; phase information; or acquired in different modes
• Pixel, feature and decision level fusion algorithms and methodologies
• Data assimilation: fusing data with models
• Multi-source classification and information extraction
• Integration of satellite, airborne and terrestrial sensor systems
• Fusing temporal data sets for change detection studies (e.g. for Land Cover/Land Use Change studies)
• Image and data mining from multi-platform, multi-source, multi-scale, multi-temporal data sets (e.g. geometric information, topological information, statistical information, etc.)