A study of high-resolution remote sensing image landslide detection with optimized anchor boxes and edge enhancement
Pub Date: 2023-12-11 | DOI: 10.1080/22797254.2023.2289616
Kun Wang, Ling Han, Juan Liao
This paper takes landslides as its research object. To address the problems of landslide detection in remote sensing images, a deep learning and playback method is adopted. Using the You Only Look Once...
On-orbit geometric calibration and preliminary accuracy verification of GaoFen-14 (GF-14) optical two linear-array stereo camera
Pub Date: 2023-12-03 | DOI: 10.1080/22797254.2023.2289013
Bincai Cao, Wang Jianrong, Hu Yan, Lv Yuan, Yang Xiuce, Lu Xueliang, Li Gang, Wei Yongqiang, Liu Zhuang
The GaoFen-14 (GF-14) satellite is China’s most recent high-resolution earth observation satellite system. It is equipped with a two linear-array stereo camera and is intended for topographic mapping...
Future of urban remote sensing and new sensors
Pub Date: 2023-11-23 | DOI: 10.1080/22797254.2023.2281073
Tu Nguyen, Nam P. Nguyen, Claudio Savaglio, Ying Zhang, Braulio Dumba
Published in European Journal of Remote Sensing (Ahead of Print, 2023)
GAUSS: Guided encoder-decoder Architecture for hyperspectral Unmixing with Spatial Smoothness
Pub Date: 2023-11-18 | DOI: 10.1080/22797254.2023.2277213
H.M.K.D. Wickramathilaka, D. Fernando, D. Jayasundara, D. Wickramasinghe, D.Y.L. Ranasinghe, G.M.R.I. Godaliyadda, M.P.B. Ekanayake, H.M.V.R. Herath, L. Ramanayake, N. Senarath, H.M.H.K. Weerasooriya
This study introduces GAUSS (Guided encoder-decoder Architecture for hyperspectral Unmixing with Spatial Smoothness), a novel autoencoder-based architecture for hyperspectral unmixing (HU). GAUSS c...
Cloud climatology of northwestern Mexico based on MODIS data
Pub Date: 2023-11-15 | DOI: 10.1080/22797254.2023.2278066
A. Karen Ramírez-López, Noel Carbajal, Luis F. Pineda-Martínez, José Tuxpan-Vargas
The geographical regions of northwestern Mexico consisting of the Pacific Ocean, the Baja California Peninsula with its mountain range along it, the Gulf of California, and the coastal zone with it...
A New ground open water detection scheme using Sentinel-1 SAR images
Pub Date: 2023-11-15 | DOI: 10.1080/22797254.2023.2278743
Songxin Tan
The detection of groundwater is essential not only for scientific research but also for agricultural purposes. This research aims to improve the accuracy and reliability of detecting ground standin...
Modelling in-ground wood decay using time-series retrievals from the 5th European climate reanalysis (ERA5-Land)
Pub Date: 2023-11-07 | DOI: 10.1080/22797254.2023.2264473
Brendan N. Marais, Marian Schönauer, Philip Bester van Niekerk, Jonas Niklewski, Christian Brischke
This article presents models to predict the time until mechanical failure of in-ground wooden test specimens resulting from fungal decay. Historical records of decay ratings were modelled using remotely sensed data from ERA5-Land. In total, 2,570 test specimens of 16 different wood species were exposed at 21 different test sites, representing three continents and climatic conditions from sub-polar to tropical, spanning a period from 1980 until 2022. To obtain specimen decay ratings over their exposure time, inspections were conducted at mostly annual and sometimes bi-annual intervals. For each specimen’s exposure period, a laboratory-developed dose–response model was populated using remotely sensed soil moisture and temperature data retrieved from ERA5-Land. Wood specimens were grouped according to natural durability rankings to reduce the variability of in-ground wood decay rates between wood species. Non-linear, sigmoid-shaped models were then constructed to describe wood decay progression as a function of daily accumulated exposure to soil moisture and temperature conditions (dose). Dose, a mechanistic weighting of daily exposure conditions over time, generally performed better than exposure time alone as a predictor of in-ground wood decay progression. The open-access availability of remotely sensed soil-state data in combination with wood specimen data proved promising for in-ground wood decay predictions.
Tree species classification on images from airborne mobile mapping using ML.NET
Pub Date: 2023-11-07 | DOI: 10.1080/22797254.2023.2271651
Maja Michałowska, Jacek Rapiński, Joanna Janicka
Deep learning is a powerful tool for automating the process of recognizing and classifying objects in images. In this study, we used ML.NET, a popular open-source machine learning framework, to develop a model for identifying tree species in images obtained from airborne mobile mapping. These high-resolution images can be used to create detailed maps of the landscape. They can also be analyzed and processed to extract information about visual features, including tree species recognition. The deep learning model was trained using ML.NET to classify two tree species based on the combination of airborne mobile mapping images. Our approach yielded impressive results, with a maximum classification accuracy of 93.9%. This demonstrates the effectiveness of combining imagery sources with deep learning tools in ML.NET for efficient and accurate tree species classification. This study highlights the potential of the ML.NET framework for automating object classification and can provide valuable insights and information for forestry management and conservation efforts. The primary objective of this research was to evaluate the effectiveness of an approach for identifying tree species through a model generated using a combination of ortho and oblique images captured by a mobile mapping system.
Wind field reconstruction based on dual-polarized synthetic aperture radar during a tropical cyclone
Pub Date: 2023-11-01 | DOI: 10.1080/22797254.2023.2273867
Zhengzhong Lai, Mengyu Hao, Weizeng Shao, Wei Shen, Yuyi Hu, Xingwei Jiang
A wind field reconstruction method for dual-polarized (vertical-vertical [VV] and vertical-horizontal [VH]) Sentinel-1 (S-1) synthetic aperture radar (SAR) images collected during tropical cyclones (TCs) that does not require external information is proposed. Forty S-1 images acquired in interferometric-wide (IW) and extra-wide (EW) modes during the Satellite Hurricane Observation Campaign in 2015–2022 were collected. Stepped-frequency microwave radiometer (SFMR) observations made onboard the National Oceanic and Atmospheric Administration’s hurricane aircraft are available for 13 of these images. The geophysical model functions, namely the VV-polarized C-SARMOD and the cross-polarized S-1 IW/EW mode wind speed retrieval models after noise removal (S1IW.NR/S1EW.NR), were employed to invert the wind fields from the collected images. TC wind fields were reconstructed from the SAR-derived winds, enhancing the representation of TC intensity in the VV-polarized SAR retrievals and minimizing the error of the VH-polarized SAR retrievals at the sub-swath edge. The wind speeds retrieved from the SAR IW images were validated against remote-sensing products from the Soil Moisture Active Passive (SMAP) radiometer, yielding a root mean squared error (RMSE) of approximately 4.3 m s⁻¹, slightly smaller than the RMSE (4.4 m s⁻¹) of the operational CyclObs wind product provided by the French Research Institute for Exploitation of the Sea (IFREMER). However, the CyclObs wind product performs better than the approach proposed in this paper for the S-1 EW mode. Moreover, the RMSE of the wind speed between the SAR-derived winds obtained using the proposed approach and the CyclObs wind product is within 3 m s⁻¹ in all flow directions clockwise relative to north centered on the TC’s eye. This study provides an alternative method for TC wind retrieval from dual-polarized S-1 images that does not suffer from the saturation problem or require external information; however, the pattern of the wind field around the TC’s eye needs further improvement, especially at the front and back of the TC’s eye.
Deep convolutional transformer network for hyperspectral unmixing
Pub Date: 2023-10-30 | DOI: 10.1080/22797254.2023.2268820
Fazal Hadi, Jingxiang Yang, Ghulam Farooque, Liang Xiao
Hyperspectral unmixing (HU) is considered one of the most important ways to improve hyperspectral image analysis. HU aims to break down a mixed pixel into a set of spectral signatures, commonly referred to as endmembers, and to determine the fractional abundance of those endmembers. Deep learning (DL) approaches have recently received great attention for HU. In particular, convolutional neural network (CNN)-based methods have performed exceptionally well in such tasks. However, the ability of CNNs to learn deep semantic features is limited, and their computing cost increases dramatically with the number of layers. The transformer addresses these issues by effectively representing high-level semantic features. In this article, we present a novel approach for HU that utilizes a deep convolutional transformer network. Firstly, a CNN-based autoencoder (AE) is used to extract low-level features from the input image. Secondly, the concept of a tokenizer is applied for feature transformation. Thirdly, a transformer module is used to capture the deep semantic features derived from the tokenizer. Finally, a convolutional decoder is utilized to reconstruct the input image. Experimental results on synthetic and real datasets demonstrate the effectiveness and superiority of the proposed method compared with other unmixing methods.