Pub Date: 2024-10-11 | DOI: 10.1016/j.jag.2024.104212
Montane cloud forests (MCFs) feature frequent, wind-driven cloud bands (fog and low stratus [FLS]) that provide crucial moisture to these ecosystems. Elevated temperatures may displace FLS, impacting MCFs significantly, so quantifying FLS occurrence is vital for evaluating the consequences. In this study, we employed “RANdom forest GEneRator” (Ranger), an advanced machine learning algorithm, to detect diurnal (07:00–17:00) FLS (dFLS) occurrence from 2018 to 2021 in MCFs in northeast Taiwan using 31 variables, including the visible and infrared bands of the Advanced Himawari Imager onboard Himawari-8, pixel solar azimuth and zenith angles, band differences, the Normalized Difference Vegetation Index (NDVI) and topographic attributes. We applied a simple model (lumping all data) and a three-mode model (sunrise/sunset, cloudy and clear sky) to predict dFLS occurrence. We randomly selected 80 % of the data for model development and the rest for validation against four ground dFLS observation stations across an elevation range of 1151–1811 m a.s.l. with 53,358 diurnal time-lapse photographs. Both the simple and three-mode models detected dFLS occurrence in MCFs regardless of weather conditions (F1 ≥ 0.864, accuracy ≥ 0.905 and Matthews correlation coefficient ≥ 0.786); the simple model performed slightly better. The NDVI was more important than any other variable in both models. This study demonstrates that Ranger may be able to detect dFLS in MCFs solely from a comprehensive array of satellite features insensitive to varying atmospheric conditions and terrain effects, permitting systematic monitoring of dFLS over vast regions.
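The reported scores combine three standard binary-classification metrics. As a sketch of how they are computed from a 2x2 confusion matrix (the detection counts below are hypothetical, not the paper's validation data):

```python
import math

def binary_metrics(tp, fp, fn, tn):
    """Accuracy, F1 and Matthews correlation coefficient (MCC)
    from binary confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    # MCC balances all four cells, which is why it is often reported
    # alongside accuracy for imbalanced detection problems.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom
    return accuracy, f1, mcc

# Hypothetical counts for an FLS detector of roughly the reported quality
acc, f1, mcc = binary_metrics(tp=450, fp=40, fn=30, tn=480)
```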
Title: Quantification of the spatiotemporal dynamics of diurnal fog and low stratus occurrence in subtropical montane cloud forests using Himawari-8 imagery and topographic attributes (International Journal of Applied Earth Observation and Geoinformation, IF 7.6)
Pub Date: 2024-10-11 | DOI: 10.1016/j.jag.2024.104210
Forest fires pose a significant threat to ecosystems, biodiversity, and human settlements, necessitating accurate and timely detection of burned areas for post-fire management. This study focused on the immediate assessment of a major forest fire that occurred on March 15, 2024, in southwestern China. We utilized high-temporal-resolution MODIS and Black Marble nighttime light images to monitor the fire’s development and introduced a novel method for detecting burned forest areas using a new Shadow-Enhanced Vegetation Index (SEVI) coupled with a machine learning technique. The SEVI enhances vegetation index (VI) values on shaded slopes and hence reduces the VI disparity between shaded and sunlit areas, which is critical for accurately extracting fire scars in such terrain. While the SEVI primarily identifies burned forest areas, the Random Forest (RF) technique detects all burned areas, including both forested and non-forested regions. The total burned area of the Yajiang forest fire was estimated at 23,588 ha, of which 19,266 ha was burned forest. Together, the SEVI and RF algorithms provide a comprehensive and efficient tool for identifying burned areas. Additionally, we employed the Remote Sensing-based Ecological Index (RSEI) to assess the ecological impact of the fire, uncovering an immediate 15 % decline in regional ecological conditions following the fire; the RSEI thus offers a quantitative view of ecological responses to the fire. These findings underscore the significance of precise burned-area extraction techniques for improving forest fire management and ecosystem recovery strategies, while also highlighting the broader ecological implications of such events.
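The paper's SEVI-plus-RF pipeline is not reproduced here, but the underlying idea of mapping burned area from a pre/post-fire drop in a vegetation index can be sketched on synthetic rasters (the change threshold and the 500 m pixel size are illustrative assumptions, not values from the study):

```python
import numpy as np

# Synthetic pre- and post-fire vegetation index rasters (values in [0, 1]);
# in the paper the index is the shadow-corrected SEVI, here a generic VI stands in.
rng = np.random.default_rng(0)
pre_vi = rng.uniform(0.6, 0.9, size=(100, 100))
post_vi = pre_vi.copy()
post_vi[20:60, 30:80] -= 0.5            # simulate a burn scar: strong VI drop

drop = pre_vi - post_vi
burned = drop > 0.3                      # hypothetical change threshold
pixel_area_ha = (500 * 500) / 10_000     # e.g. one 500 m MODIS pixel = 25 ha
burned_area_ha = burned.sum() * pixel_area_ha
```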
Title: Immediate assessment of forest fire using a novel vegetation index and machine learning based on multi-platform, high temporal resolution remote sensing images
Pub Date: 2024-10-11 | DOI: 10.1016/j.jag.2024.104194
Understanding the characteristics of the growth zones of live corals and competitive algae, including turf algae and macroalgae, is crucial for assessing the degradation of coral reef ecosystems. However, identifying live corals and competitive algae in multispectral satellite images is challenging because different objects can have similar spectra. To address this, we used two satellite images acquired at different times (Landsat Thematic Mapper (TM), Landsat Operational Land Imager (OLI), or Sentinel-2 Multi-Spectral Instrument (MSI)) to assess the growth zone characteristics of live corals and competitive algae. This assessment leveraged the seasonal dieback of competitive algae and the relative stability of live-coral growth zones over a short period. Specifically, we developed a normalized red–green difference index (NRGI) to segment live-coral-or-competitive-alga growth zones in satellite images. By comparing the segmentation results from an image captured during a period with few competitive algae and another image captured during a period with lush competitive algae, we estimated the growth zone areas of the live corals and competitive algae. Finally, we calculated the ratio of the competitive-alga growth zone area to the live-coral growth zone area (RCL).
Experiments on eight typical coral islands and reefs in the South China Sea (SCS) from 1995 to 2022 revealed that: (1) the identification accuracies of live-coral-or-competitive-alga growth zones reached 80.3 % and 92.6 % during periods with few competitive algae (January to March) and lush competitive algae (April to October), respectively; (2) the RCL was well correlated with the coral–macroalgae encounter rate (an ecological index indicating the pressure of competitive algae on live corals) (r = 0.79, P < 0.05); (3) the trends in the growth zones of competitive algae and live corals, along with the RCL, were consistent with major ecological events in the SCS, such as coral bleaching, outbreaks of Acanthaster planci, and black band disease; and (4) a time-lagged correlation was observed between heat stress and the RCL. In summary, the proposed approach is simple, effective, and feasible. The RCL is a valuable indicator of the status of coral reef ecosystems, highlighting the pressure of competitive algae on live corals and the degradation of coral reef ecosystems. This method introduces a novel application of multispectral satellite images for assessing coral reef ecosystems and has significant potential for future coral reef monitoring.
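The abstract does not spell out the NRGI formula; assuming the usual normalized-difference form over the red and green bands (an assumption, including the sign convention), the index and the RCL ratio reduce to:

```python
import numpy as np

def nrgi(red, green):
    """Assumed normalized red-green difference form: (green - red) / (green + red).
    The paper defines NRGI precisely; this shape and sign are an assumption here."""
    red = np.asarray(red, dtype=float)
    green = np.asarray(green, dtype=float)
    return (green - red) / (green + red)

def rcl(alga_area, coral_area):
    """RCL: ratio of the competitive-alga growth-zone area
    to the live-coral growth-zone area (as defined in the abstract)."""
    return alga_area / coral_area
```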
Title: Development of a coral and competitive alga-related index using historical multi-spectral satellite imagery to assess ecological status of coral reefs
Pub Date: 2024-10-07 | DOI: 10.1016/j.jag.2024.104196
Rice, a crucial global food crop, necessitates accurate mapping for food security assessment. China is a major rice producer and consumer, and Jiangsu Province is one of its significant rice production regions; the Hongzehu (HZH) area in Jiangsu contributes substantially to rice supply, supporting food security locally and province-wide. Sentinel-1 SAR data, particularly Single Look Complex (SLC) products, hold promise for precise crop mapping: the additional phase and polarization information enhances sensitivity to rice growth changes through analysis of rice surface feature information. However, challenges persist, especially climate impacts and inconsistent rice planting times across fields. To overcome these, our study proposes a progressive feature screening and fusion method using multi-temporal SAR images. We introduce fuzzy coarse screening based on statistical distribution characteristics and refine it with Gaussian fitting. A model incorporating time-series sample separation and polarization-decomposition feature fusion based on rice growth height enhances the expression of rice growth. For more precise results, we adopt a multi-temporal feature fusion approach that feeds optimized sample features to a BiLSTM network to characterize rice growth and ground features. Experimental results demonstrate the method’s efficacy in two cities with a limited number of sampling points: the progressive feature fusion (DF) method outperforms classical classification methods using a single feature (SF) or combined features (CF). The proposed strategy thus proves effective for rice mapping and provides a promising approach for leveraging Sentinel-1 SLC SAR data. In conclusion, our study improves accuracy in identifying rice fields and characterizing rice growth, contributing to better food security assessments despite the challenges of rainy seasons and varying planting times.
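The "fuzzy coarse screening refined with Gaussian fitting" step can be illustrated with a toy version: fit a Gaussian to the coarsely screened backscatter values (moment estimates stand in for the paper's curve fitting) and keep only samples near the fitted mode. All values below are synthetic:

```python
import numpy as np

# Synthetic VH backscatter (dB) for coarsely screened "rice" candidates:
# one Gaussian cluster of true rice pixels plus misclassified bright outliers.
rng = np.random.default_rng(1)
rice = rng.normal(-17.0, 1.0, 900)
outliers = rng.uniform(-8.0, -4.0, 100)
candidates = np.concatenate([rice, outliers])

# Refine the fuzzy coarse screening with a Gaussian fit; moment estimates
# stand in for the paper's fitting procedure. Keep samples within 2 sigma.
mu, sigma = candidates.mean(), candidates.std()
refined = candidates[np.abs(candidates - mu) < 2.0 * sigma]
```

With the fixed seed, the 2-sigma window retains essentially only the rice cluster and discards the bright outliers.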
Title: Rice recognition from Sentinel-1 SLC SAR data based on progressive feature screening and fusion
Pub Date: 2024-10-07 | DOI: 10.1016/j.jag.2024.104189
Stereo matching is essential for establishing pixel-level correspondences and estimating depth in scene reconstruction. However, applying stereo matching networks to UAV scenarios presents unique challenges due to varying altitudes, angles, and rapidly changing conditions, unlike the controlled settings in autonomous driving or the uniform scenes in satellite imagery. To address these UAV-specific challenges, we propose the CSStereo network (Contrastive Learning and Feature Selection Stereo Matching Network), which integrates contrastive learning and feature selection modules. The contrastive learning module enhances feature representation by comparing similarities and differences between samples, thereby improving discrimination among features in UAV scenarios. The feature selection module enhances robustness and generalization across different UAV scenarios by selecting relevant and informative features. Extensive experimental evaluations demonstrate the effectiveness of CSStereo in UAV scenarios, and show superior performance in both qualitative and quantitative assessments.
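The contrastive learning module is described only at a high level; a generic InfoNCE-style loss (a common contrastive objective, not necessarily CSStereo's exact formulation) illustrates how matching feature pairs are pulled together against in-batch negatives:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss: each anchor should match its own
    positive against all other positives in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on the diagonal

feats = np.eye(4)                        # perfectly discriminative features
loss_good = info_nce(feats, feats)       # near zero: positives dominate
loss_bad = info_nce(feats, np.ones((4, 4)))  # uninformative positives: log(4)
```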
Title: CSStereo: A UAV scenarios stereo matching network enhanced with contrastive learning and feature selection
Pub Date: 2024-10-07 | DOI: 10.1016/j.jag.2024.104195
The wave spectrum describes the distribution of wave energy across frequency and direction. Obtaining accurate wave spectrum information is of great value for oceanographic research and for disaster prevention and reduction. Currently, wave spectral data can be obtained from remote sensing observations, global meteorological and climate reanalysis products, and in-situ observations, which exhibit different advantages and limitations in terms of spatio-temporal resolution, accuracy, and data coverage. Fusing these diverse spectral data so that their complementary strengths improve wave-spectrum accuracy is therefore promising; however, no simple and effective method for such fusion yet exists. In this study, a multi-source spectral fusion method is developed based on BU-NET, which integrates ERA5 spectra and SWIM spectra with buoy spectra as the reference. A systematic evaluation indicates that the fusion spectra alleviate parasitic peaks, mitigate the overestimated mean energy, and compensate for the energy loss due to the cutoff frequency in the SWIM spectra. The fusion spectra also alleviate the energy underestimation of the ERA5 spectra during high sea states. Furthermore, the accuracy of the significant wave height, mean wave period, dominant wave period, and dominant wave direction obtained from the fusion spectra is improved: the root mean square errors between these parameters from the fusion spectra and those from buoy spectra are 0.217 m, 0.378 s, 1.599 s, and 33.094°, respectively.
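The evaluated parameters follow from the spectral moments of a wave spectrum. A minimal sketch under one common convention (Hs = 4·sqrt(m0), mean period Tm01 = m0/m1; the paper may use different period definitions):

```python
import numpy as np

def wave_parameters(freq, spectrum):
    """Significant wave height and mean period from a 1-D frequency
    spectrum S(f) via the moments m_n = integral of f**n * S(f) df."""
    df = freq[1] - freq[0]               # uniform frequency grid assumed
    m0 = np.sum(spectrum) * df           # total variance (zeroth moment)
    m1 = np.sum(freq * spectrum) * df    # first moment
    return 4.0 * np.sqrt(m0), m0 / m1    # Hs, Tm01

# Synthetic narrow swell spectrum peaked at 0.1 Hz (10 s waves)
freq = np.linspace(0.01, 0.5, 981)
spectrum = np.exp(-0.5 * ((freq - 0.1) / 0.01) ** 2)
hs, tm01 = wave_parameters(freq, spectrum)
```

For a narrow spectrum peaked at 0.1 Hz the mean period comes out near 10 s, as expected.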
Title: Fusion of multi-source wave spectra based on BU-NET
Pub Date: 2024-10-07 | DOI: 10.1016/j.jag.2024.104197
Mangroves form one of the most important marine ecosystems globally, and their spatial distribution is crucial for promoting mangrove ecosystem conservation, restoration, and sustainable management. This study proposed a novel Unet-Multi-Scale High-Resolution Vision Transformer (UHRViT) model for classifying mangrove species using unmanned aerial vehicle (UAV-RGB), UAV-LiDAR, and Gaofen-3 Synthetic Aperture Radar (GF-3 SAR) images. The UHRViT uses a multi-scale high-resolution vision Transformer as its backbone network and is designed as a multi-branch U-shaped network that extracts features at different scales layer by layer and facilitates the interaction of high- and low-level semantic information. We further verified the classification performance of the UHRViT model by comparing it to the HRViT and HRNetV2 algorithms, and systematically investigated the effects of active-passive image combination ratios on mangrove community mapping. The results revealed that: UAV-RGB images yielded better classification accuracy (mean F1-score > 95 %) for mangrove species than UAV-LiDAR and GF-3 SAR images; the classification performance and stability of UHRViT across the fifteen datasets outperformed HRViT and HRNetV2; and combining UAV-RGB with either GF-3 SAR or UAV-LiDAR images achieved better classifications than any single data source. Based on the UHRViT algorithm, the combination of UAV-RGB and UAV-LiDAR achieved the highest classification accuracy (Iou = 0.944, MIou = 50.2 %) for Avicennia corniculatum (AC). When the combination ratio of UAV-RGB to GF-3 SAR or UAV-LiDAR was 3:1, Avicennia marina and AC both obtained optimal classification accuracy, with average F1-scores of 98.19 % and 97.3 %, respectively. Our work reveals how the classification accuracy of mangrove communities changes with multi-sensor image combination ratios and demonstrates that the model can effectively improve the classification accuracy of mangrove communities.
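The reported Iou and MIou scores are standard segmentation metrics; a minimal sketch of per-class intersection-over-union on toy label maps:

```python
import numpy as np

def iou_per_class(y_true, y_pred, n_classes):
    """Per-class intersection-over-union from flattened segmentation labels."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        ious.append(inter / union if union else np.nan)  # skip absent classes
    return np.array(ious)

# Toy ground truth and prediction for three classes
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 1, 2, 2, 2])
ious = iou_per_class(y_true, y_pred, 3)
miou = np.nanmean(ious)   # mean IoU over classes present in the union
```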
Title: Exploring the effects of different combination ratios of multi-source remote sensing images on mangrove communities classification
Pub Date : 2024-10-07DOI: 10.1016/j.jag.2024.104198
Hyperspectral imaging of solar-induced chlorophyll fluorescence (SIF) is required for plant phenotyping and stress detection. However, the most accurate instruments for SIF quantification, such as sub-nanometer (≤1-nm full-width at half-maximum, FWHM) airborne hyperspectral imagers, are expensive and uncommon. Previous studies have demonstrated that standard narrow-band hyperspectral imagers (i.e., 4–6-nm FWHM) are more cost-effective and can provide far-red SIF quantified at 760 nm (SIF760), which correlates strongly with precise sub-nanometer resolution measurements. Nevertheless, narrow-band SIF760 quantifications are subject to systematic overestimation owing to the influence of the spectral resolution (SR). In this study, we propose a modelling approach based on the Soil Canopy Observation, Photochemistry and Energy Fluxes (SCOPE) model with the objective of enhancing the accuracy of absolute SIF760 levels derived from standard airborne hyperspectral imagers in practical settings. The performance of the proposed method was evaluated using airborne imagery acquired from two airborne hyperspectral imagers (FWHM ≤ 0.2-nm and 5.8-nm) flown in tandem on board an aircraft that collected data from two different wheat and maize phenotyping trials. Leaf biophysical and biochemical traits were first estimated from airborne narrow-band reflectance imagery and subsequently used as SCOPE model inputs to simulate a range of top-of-canopy (TOC) radiance and SIF spectra at 1-nm FWHM. The SCOPE simulated radiance spectra were then convolved to match the spectral configuration of the narrow-band imager to compute the 5.8-nm FWHM SIF760. A site-specific model was constructed by employing the convolved 5.8-nm SR SIF760 as the independent variable and the 1-nm SR SIF760 directly simulated by SCOPE as the dependent variable. 
When applied to the airborne dataset, the estimated SIF760 at 1-nm SR from the standard narrow-band hyperspectral imager matched the reference sub-nanometer quantified SIF760 with root mean square error (RMSE) less than 0.5 mW/m²/nm/sr, yielding R² = 0.93–0.95 from the two experiments. These results suggest that the proposed modelling approach enables the interpretation of SIF760 quantified using standard hyperspectral imagers of 4–6 nm FWHM for stress detection and plant physiological condition assessment.
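The spectral degradation step described above (convolving 1-nm SCOPE spectra to match a coarser imager's band response) can be sketched as follows. A Gaussian spectral response function is a common simplifying assumption; the actual instrument SRF used in the study may differ:

```python
import math

def gaussian_srf(fwhm, grid_step=1.0, half_width=3.0):
    """Discrete Gaussian spectral response function sampled on the
    simulation grid (1 nm here), normalised to sum to 1.
    sigma = FWHM / (2 * sqrt(2 * ln 2))."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    n = int(half_width * sigma / grid_step)
    weights = [math.exp(-0.5 * ((i * grid_step) / sigma) ** 2)
               for i in range(-n, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def convolve_spectrum(spectrum, srf):
    """Convolve a 1-nm-sampled spectrum with the SRF (edges truncated
    and renormalised), mimicking the coarser imager's band response."""
    half = len(srf) // 2
    out = []
    for i in range(len(spectrum)):
        acc, wsum = 0.0, 0.0
        for k, w in enumerate(srf):
            j = i + k - half
            if 0 <= j < len(spectrum):
                acc += w * spectrum[j]
                wsum += w
        out.append(acc / wsum)
    return out
```

The site-specific correction is then just a regression fitted between the convolved 5.8-nm SIF760 (independent variable) and the directly simulated 1-nm SIF760 (dependent variable), applied to the airborne retrievals.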
{"title":"Improving the accuracy of SIF quantified from moderate spectral resolution airborne hyperspectral imager using SCOPE: assessment with sub-nanometer imagery","authors":"","doi":"10.1016/j.jag.2024.104198","DOIUrl":"10.1016/j.jag.2024.104198","url":null,"abstract":"<div><div>Hyperspectral imaging of solar-induced chlorophyll fluorescence (SIF) is required for plant phenotyping and stress detection. However, the most accurate instruments for SIF quantification, such as sub-nanometer (≤1-nm full-width at half-maximum, FWHM) airborne hyperspectral imagers, are expensive and uncommon. Previous studies have demonstrated that standard narrow-band hyperspectral imagers (i.e., 4–6-nm FWHM) are more cost-effective and can provide far-red SIF quantified at 760 nm (SIF<sub>760</sub>), which correlates strongly with precise sub-nanometer resolution measurements. Nevertheless, narrow-band SIF<sub>760</sub> quantifications are subject to systematic overestimation owing to the influence of the spectral resolution (SR). In this study, we propose a modelling approach based on the Soil Canopy Observation, Photochemistry and Energy Fluxes (SCOPE) model with the objective of enhancing the accuracy of absolute SIF<sub>760</sub> levels derived from standard airborne hyperspectral imagers in practical settings. The performance of the proposed method was evaluated using airborne imagery acquired from two airborne hyperspectral imagers (FWHM ≤ 0.2-nm and 5.8-nm) flown in tandem on board an aircraft that collected data from two different wheat and maize phenotyping trials. Leaf biophysical and biochemical traits were first estimated from airborne narrow-band reflectance imagery and subsequently used as SCOPE model inputs to simulate a range of top-of-canopy (TOC) radiance and SIF spectra at 1-nm FWHM. The SCOPE simulated radiance spectra were then convolved to match the spectral configuration of the narrow-band imager to compute the 5.8-nm FWHM SIF<sub>760</sub>. 
A site-specific model was constructed by employing the convolved 5.8-nm SR SIF<sub>760</sub> as the independent variable and the 1-nm SR SIF<sub>760</sub> directly simulated by SCOPE as the dependent variable. When applied to the airborne dataset, the estimated SIF<sub>760</sub> at 1-nm SR from the standard narrow-band hyperspectral imager matched the reference sub-nanometer quantified SIF<sub>760</sub> with root mean square error (RMSE) less than 0.5 mW/m<sup>2</sup>/nm/sr, yielding R<sup>2</sup> = 0.93–0.95 from the two experiments. These results suggest that the proposed modelling approach enables the interpretation of SIF<sub>760</sub> quantified using standard hyperspectral imagers of 4–6 nm FWHM for stress detection and plant physiological condition assessment.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142423001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-07DOI: 10.1016/j.jag.2024.104192
There is a need to help farmers make decisions to maximize crop yields. Many studies have emerged in recent years using deep learning on remotely sensed images to detect plant diseases, which can be caused by multiple factors such as environmental conditions, genetics or pathogens. This problem can be considered an anomaly detection task. However, these approaches are often limited by the availability of annotated data or prior knowledge of the existence of an anomaly. In many cases, it is not possible to obtain this information. In this work, we propose an approach that can detect plant anomalies without prior knowledge of their existence, thus overcoming these limitations. To this end, we train a model on an auxiliary prediction task using a dataset composed of samples of normal and abnormal plants. Our proposed method studies the distribution of heatmaps retrieved from an explainability model. Based on the assumption that the model trained on the auxiliary task is able to extract important plant characteristics, we propose to study how closely the heatmap of a new observation follows the heatmap distribution of a normal dataset. Through the proposed a contrario approach, we derive a score indicating potential anomalies.
Experiments show that our approach outperforms reference approaches such as f-AnoGAN and OCSVM on the GrowliFlower and PlantDoc datasets and achieves competitive performance on the PlantVillage dataset, while not requiring prior knowledge of the existence of anomalies.
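The core idea of scoring how closely a new heatmap follows the heatmap distribution of a normal dataset can be illustrated with a deliberately simplified sketch: fit a per-pixel normal model and count pixels that deviate beyond a threshold. This is an assumption-laden stand-in for the authors' a contrario formulation, not their actual method:

```python
import math

def fit_normal_heatmaps(heatmaps):
    """Per-pixel mean and standard deviation over a set of 'normal'
    heatmaps, each given as a flat list of equal length."""
    n = len(heatmaps)
    d = len(heatmaps[0])
    mean = [sum(h[i] for h in heatmaps) / n for i in range(d)]
    std = [math.sqrt(sum((h[i] - mean[i]) ** 2 for h in heatmaps) / n) or 1e-9
           for i in range(d)]
    return mean, std

def anomaly_score(heatmap, mean, std, z_thresh=3.0):
    """Count pixels whose deviation from the normal-heatmap model exceeds
    z_thresh standard deviations; higher means more anomalous."""
    return sum(1 for x, m, s in zip(heatmap, mean, std)
               if abs(x - m) / s > z_thresh)
```

An a contrario method would go further and bound the expected number of such exceedances under the background model, flagging an observation only when that number is improbably large.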
{"title":"Can we detect plant diseases without prior knowledge of their existence?","authors":"","doi":"10.1016/j.jag.2024.104192","DOIUrl":"10.1016/j.jag.2024.104192","url":null,"abstract":"<div><div>There is a need to help farmers make decisions to maximize crop yields. Many studies have emerged in recent years using deep learning on remotely sensed images to detect plant diseases, which can be caused by multiple factors such as environmental conditions, genetics or pathogens. This problem can be considered as an anomaly detection task. However, these approaches are often limited by the availability of annotated data or prior knowledge of the existence of an anomaly. In many cases, it is not possible to obtain this information. In this work, we propose an approach that can detect plant anomalies without prior knowledge of their existence, thus overcoming these limitations. To this end, we train a model on an auxiliary prediction task using a dataset composed of samples of normal and abnormal plants. Our proposed method studies the distribution of heatmaps retrieved from an explainability model. Based on the assumptions that the model trained on the auxiliary task is able to extract important plant characteristics, we propose to study how closely the heatmap of a new observation follows the heatmap distribution of a normal dataset. 
Through the proposed <em>a contrario</em> approach, we derive a score indicating potential anomalies.</div><div>Experiments show that our approach outperforms reference approaches such as f-AnoGAN and OCSVM on the GrowliFlower and PlantDoc datasets and has competitive performances on the PlantVillage dataset, while not requiring the prior knowledge on the existence of anomalies.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142422528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-10-06DOI: 10.1016/j.jag.2024.104205
Increasing exposure to heatwaves threatens public health and will challenge various socioeconomic sectors in the coming decades. Prior studies mostly concentrated on heatwaves occurring in specific regions by examining temperature durations, ignoring the fact that heatwaves typically sweep across a large area. To comprehensively assess the effects of heatwaves, we jointly analyzed public attention to heatwaves using a dataset of over 10 million geo-located Weibo tweets across 321 cities in China. Accounting for spatial disparities, two city-level measures of public attention, namely the number of heat-related tweets (NHTs) and the ratio of heat-related tweets (RHTs), were designed to indicate the severity and location of heatwave impacts, respectively. Heat cumulative intensity was used as a proxy for heatwaves and exhibited more significant correlations with RHTs than with NHTs. A multiscale geographically weighted regression (MGWR) model was employed to investigate the spatiotemporal variations of environmental, demographic and socioeconomic factors. Six city groups were clustered from the MGWR coefficients, consistent with the seven geographic subregions of China. This research provides a new perspective and methodology for studying public attention to heatwaves using geo-located social sensing data and highlights the need for actions to mitigate future heatwave stress in sensitive cities.
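The two attention measures are straightforward per-city aggregates, sketched below on hypothetical tweet records (the field names `city` and `heat_related` are illustrative; the study's actual tweet-classification step is not shown):

```python
from collections import defaultdict

def city_attention(tweets):
    """Per-city NHT (count of heat-related tweets) and RHT (heat-related
    share of all geo-located tweets). Each tweet is a dict with a 'city'
    key and a boolean 'heat_related' flag (hypothetical schema)."""
    total = defaultdict(int)
    heat = defaultdict(int)
    for t in tweets:
        total[t["city"]] += 1
        if t["heat_related"]:
            heat[t["city"]] += 1
    return {c: {"NHT": heat[c], "RHT": heat[c] / total[c]} for c in total}
```

NHT scales with a city's overall tweeting volume while RHT normalises it away, which is one plausible reason RHT tracks heat cumulative intensity more closely across cities of very different sizes.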
{"title":"Public responses to heatwaves in Chinese cities: A social media-based geospatial modelling approach","authors":"","doi":"10.1016/j.jag.2024.104205","DOIUrl":"10.1016/j.jag.2024.104205","url":null,"abstract":"<div><div>Increasing exposure to heatwaves threatens public health, challenging various socioeconomic sectors in the coming decades. Prior studies mostly concentrated on the heatwaves occurring in specific regions by examining temperature durations, ignoring the fact that heatwaves typically swept across a large area. To comprehensively assess the effects of heatwaves, we jointly analyzed public attention to heatwaves using a dataset of over 10 million geo-located Weibo tweets across 321 cities in China. By considering spatial disparities, two kinds of public attention at city level, namely the number of heat-related tweets (NHTs) and the ratio of heat-related tweets (RHTs), were designed to indicate the severity and location of heatwave impacts, respectively. The heat cumulative intensity was used as a proxy for heatwaves, which exhibited more significant correlations with RHTs than NHTs. The multiscale geographically weighted regression (MGWR) model was employed to investigate the spatiotemporal variations of environment, demographic, and economic-social factors. Six city groups were clustered with MGWR coefficients that were consistent with the seven geographic subregions of China. 
This research provides a new perspective and methodology for public attention to heatwaves using geo-located social sensing data and highlights the need for actions to mitigate future heatwave stress in sensitive cities.</div></div>","PeriodicalId":73423,"journal":{"name":"International journal of applied earth observation and geoinformation : ITC journal","volume":null,"pages":null},"PeriodicalIF":7.6,"publicationDate":"2024-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142422527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}