The atmosphere is a complex nonlinear system whose temperature, water vapor, pressure, and cloud fields are crucial to remote-sensing data analysis. Intricate interactions, such as convection, radiation, and humidity exchange, couple these internal components. Atmospheric phenomena span multiple spatial and temporal scales, from small-scale thunderstorms to large-scale events such as El Niño. The dynamic interactions across scales, together with external disturbances to the atmospheric system, such as variations in solar radiation and Earth surface conditions, contribute to the chaotic nature of the atmosphere and make long-term prediction challenging. Grasping these intrinsic chaotic dynamics is essential for advancing atmospheric analysis, with profound implications for enhancing meteorological forecasts, mitigating disaster risks, and safeguarding ecological systems. To validate the chaotic nature of the atmosphere, this paper reviews the definitions and main features of chaotic systems, elucidates phase-space reconstruction centered on Takens’ theorem, and categorizes the qualitative and quantitative methods for determining whether a time series is chaotic. Among the quantitative methods, the Wolf method is used to calculate the largest Lyapunov exponents, and the G–P method is used to calculate correlation dimensions. A new method, the Improved Saturated Correlation Dimension method, is proposed to address the subjectivity and noise sensitivity inherent in the traditional G–P method. The largest Lyapunov exponents and saturated correlation dimensions are then used for a quantitative analysis of FY-4A and Himawari-8 remote-sensing infrared observations and ERA5 reanalysis data.
For both the short-term remote-sensing data and the long-term reanalysis data, the results show that more than 99.91% of regional points have sequences with positive largest Lyapunov exponents, and all regional points have correlation dimensions that saturate at values greater than 1 as the embedding dimension increases, demonstrating that the atmospheric system exhibits chaotic properties on both short and long temporal scales, with extreme sensitivity to initial conditions. This conclusion provides a theoretical foundation for the short-term prediction of atmospheric infrared radiation field variables and for the detection of weak, time-sensitive signals in complex atmospheric environments.
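The quantitative toolkit named above — delay embedding per Takens’ theorem and the G–P correlation sum — can be sketched in a few lines. This is a minimal illustration of the classical procedure, not the paper’s Improved Saturated Correlation Dimension method; the function names, parameter choices, and example series are ours.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Takens phase-space reconstruction: m-dimensional delay vectors with lag tau."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

def correlation_sum(X, r):
    """Grassberger-Procaccia correlation sum C(r): fraction of distinct
    point pairs in the reconstructed attractor closer than radius r."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    iu = np.triu_indices(len(X), k=1)  # each pair counted once
    return np.mean(dist[iu] < r)

def correlation_dimension(x, m, tau, radii):
    """Estimate the correlation dimension as the slope of log C(r) vs log r
    over the supplied radii (the scaling region)."""
    X = delay_embed(x, m, tau)
    C = np.array([correlation_sum(X, r) for r in radii])
    mask = C > 0
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(C[mask]), 1)
    return slope
```

For a chaotic series, the estimated dimension saturates at a non-integer value as the embedding dimension m grows; the paper’s improvement targets the subjective choice of the scaling region and its sensitivity to noise in exactly this slope fit.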
Zhong Wang, Shengli Sun, Wenjun Xu, Rui Chen, Yijun Ma, Gaorui Liu. "Research on Multiscale Atmospheric Chaos Based on Infrared Remote-Sensing and Reanalysis Data." Remote Sensing, 2024-09-11. doi:10.3390/rs16183376
Angelly de Jesus Pugliese Viloria, Andrea Folini, Daniela Carrion, Maria Antonia Brovelli
With the increase in climate-change-related hazardous events alongside population concentration in urban centres, it is important to provide resilient cities with tools for understanding and eventually preparing for such events. Machine learning (ML) and deep learning (DL) techniques have increasingly been employed to model susceptibility of hazardous events. This study consists of a systematic review of the ML/DL techniques applied to model the susceptibility of air pollution, urban heat islands, floods, and landslides, with the aim of providing a comprehensive source of reference both for techniques and modelling approaches. A total of 1454 articles published between 2020 and 2023 were systematically selected from the Scopus and Web of Science search engines based on search queries and selection criteria. ML/DL techniques were extracted from the selected articles and categorised using ad hoc classification. Consequently, a general approach for modelling the susceptibility of hazardous events was consolidated, covering the data preprocessing, feature selection, modelling, model interpretation, and susceptibility map validation, along with examples of related global/continental data. The most frequently employed techniques across various hazards include random forest, artificial neural networks, and support vector machines. This review also provides, per hazard, the definition, data requirements, and insights into the ML/DL techniques used, including examples of both state-of-the-art and novel modelling approaches.
Angelly de Jesus Pugliese Viloria, Andrea Folini, Daniela Carrion, Maria Antonia Brovelli. "Hazard Susceptibility Mapping with Machine and Deep Learning: A Literature Review." Remote Sensing, 2024-09-11. doi:10.3390/rs16183374
Human settlement areas significantly impact the environment, leading to changes in both natural and built environments. Comprehensive information on human settlements, particularly in urban areas, is crucial for effective sustainable development planning. However, urban land use investigations are often limited to two-dimensional building footprint maps, neglecting the three-dimensional aspect of building structures. This paper addresses this issue to contribute to Sustainable Development Goal 11, which focuses on making human settlements inclusive, safe, and sustainable. In this study, Sentinel-1 data are used as the primary source to estimate building heights. One challenge addressed is the issue of multiple backscattering in Sentinel-1’s signal, particularly in densely populated areas with high-rise buildings. To mitigate this, firstly, Sentinel-1 data from different directions, orbit paths, and polarizations are utilized. Combining ascending and descending orbits significantly improves estimation accuracy, and incorporating a higher number of paths provides additional information. However, Sentinel-1 data alone are not sufficiently rich at a global scale across different orbits and polarizations. Secondly, to enhance the accuracy further, Sentinel-1 data are corrected using nighttime light data as additional information, which shows promising results in addressing multiple backscattering issues. Thirdly, a deep learning model is trained to generate building height maps using these features, achieving a mean absolute error of around 2 m and a mean square error of approximately 13. The generalizability of this method is demonstrated in several cities with diverse built-up structures, including London, Berlin, and others. Finally, a building height map of Iran is generated and evaluated against surveyed buildings, showcasing its large-scale mapping capability.
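The feature-stacking idea behind the height estimation can be illustrated with a toy regression: per-pixel backscatter from ascending and descending orbits plus nighttime light intensity are stacked as features and fitted against reference heights. Everything here is a synthetic stand-in, and a plain least-squares fit replaces the paper’s deep learning model; only the multi-feature regression setup is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical per-pixel features (synthetic stand-ins for real rasters):
asc = rng.uniform(0.0, 1.0, n)   # normalized ascending-orbit backscatter
desc = rng.uniform(0.0, 1.0, n)  # normalized descending-orbit backscatter
ntl = rng.uniform(0.0, 1.0, n)   # normalized nighttime light radiance

# Synthetic "reference" heights correlated with the features (illustrative only).
height = 5.0 + 20.0 * asc + 15.0 * desc + 30.0 * ntl + rng.normal(0.0, 2.0, n)

# Stack features and fit a linear model by least squares.
A = np.column_stack([np.ones(n), asc, desc, ntl])
coef, *_ = np.linalg.lstsq(A, height, rcond=None)
pred = A @ coef

mae = np.mean(np.abs(pred - height))
print(f"MAE: {mae:.2f} m")
```

The actual pipeline corrects the SAR features with nighttime light before feeding a deep model, and it is that combination which reaches the ~2 m mean absolute error reported above.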
Mohammad Kakooei, Yasser Baleghi. "Mapping Building Heights at Large Scales Using Sentinel-1 Radar Imagery and Nighttime Light Data." Remote Sensing, 2024-09-11. doi:10.3390/rs16183371
Zongqing Cao, Bing Liu, Jianchao Yang, Ke Tan, Zheng Dai, Xingyu Lu, Hong Gu
Interrupted and multi-source track segment association (TSA) are two key challenges in target trajectory research within radar data processing. Traditional methods often rely on simplistic assumptions about target motion and statistical techniques for track association, leading to problems such as unrealistic assumptions, susceptibility to noise, and suboptimal performance limits. This study proposes a unified framework to address the challenges of associating interrupted and multi-source track segments by measuring trajectory similarity. We present TSA-cTFER, a novel network utilizing contrastive learning and a TransFormer Encoder to accurately assess trajectory similarity through learned Representations by computing distances between high-dimensional feature vectors. Additionally, we tackle dynamic association scenarios with a two-stage online algorithm designed to manage tracks that appear or disappear at any time. This algorithm categorizes track pairs into easy and hard groups, employing tailored association strategies to achieve precise and robust associations in dynamic environments. Experimental results on real-world datasets demonstrate that our proposed TSA-cTFER network with the two-stage online algorithm outperforms existing methods, achieving 94.59% accuracy in interrupted track segment association tasks and 94.83% in multi-source track segment association tasks.
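The association-by-similarity idea can be sketched without the learned encoder: each track segment is mapped to a feature vector, segments are compared by distance in that space, and pairs below a threshold are greedily associated. In this sketch a hand-crafted descriptor (endpoint plus mean velocity) stands in for the trained TSA-cTFER embedding, and greedy one-to-one matching stands in for the paper’s two-stage online algorithm; all names and thresholds are illustrative.

```python
import numpy as np

def embed(segment):
    """Hand-crafted stand-in for a learned track embedding:
    last position plus mean per-step velocity of a (T, 2) segment."""
    seg = np.asarray(segment, dtype=float)
    vel = np.diff(seg, axis=0).mean(axis=0)
    return np.concatenate([seg[-1], vel])

def head_descriptor(segment):
    """Start position plus mean per-step velocity of a candidate segment."""
    seg = np.asarray(segment, dtype=float)
    vel = np.diff(seg, axis=0).mean(axis=0)
    return np.concatenate([seg[0], vel])

def associate(old_tracks, new_tracks, threshold):
    """Greedy one-to-one association: extrapolate each interrupted track one
    step ahead and pair it with the closest new segment under the threshold."""
    scored = []
    for i, a in enumerate(old_tracks):
        ea = embed(a)
        pred = np.concatenate([ea[:2] + ea[2:], ea[2:]])  # predicted position + velocity
        for j, b in enumerate(new_tracks):
            d = np.linalg.norm(pred - head_descriptor(b))
            scored.append((d, i, j))
    pairs, used_old, used_new = [], set(), set()
    for d, i, j in sorted(scored):
        if d < threshold and i not in used_old and j not in used_new:
            pairs.append((i, j))
            used_old.add(i)
            used_new.add(j)
    return pairs
```

The paper’s contribution is precisely to replace the hand-crafted `embed` with a contrastively trained Transformer encoder, so that distance in the learned space reflects whether two segments belong to the same target.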
Zongqing Cao, Bing Liu, Jianchao Yang, Ke Tan, Zheng Dai, Xingyu Lu, Hong Gu. "Contrastive Transformer Network for Track Segment Association with Two-Stage Online Method." Remote Sensing, 2024-09-11. doi:10.3390/rs16183380
Visible and infrared image fusion is a strategy that effectively extracts and fuses information from different sources. However, most existing methods largely neglect the issue of lighting imbalance, which makes the same fusion model inapplicable across different scenes. Several methods obtain low-level features from visible and infrared images at the input or shallow feature-extraction stage, but they do not explore how low-level features provide a foundation for recognizing and utilizing the complementarity and common information between the two types of images. As a result, this complementarity and common information are not fully analyzed and exploited. To address these issues, we propose a Self-Attention Progressive Network for the fusion of infrared and visible images in this paper. Firstly, we construct a Lighting-Aware Sub-Network to analyze the lighting distribution and introduce an intensity loss to measure the probability of scene illumination. This approach enhances the model’s adaptability to lighting conditions. Secondly, we introduce self-attention learning to design a multi-state joint feature extraction module (MSJFEM) that fully utilizes the contextual information among input keys. It guides the learning of a dynamic attention matrix to strengthen the capacity for visual representation. Finally, we design a Difference-Aware Propagation Module (DAPM) to extract and integrate edge details from the source images while supplementing differential information. The experiments across three benchmark datasets reveal that the proposed approach exhibits satisfactory performance compared to existing methods.
Shuying Li, Muyi Han, Yuemei Qin, Qiang Li. "Self-Attention Progressive Network for Infrared and Visible Image Fusion." Remote Sensing, 2024-09-11. doi:10.3390/rs16183370
Jing Ning, Yunjun Yao, Joshua B. Fisher, Yufu Li, Xiaotong Zhang, Bo Jiang, Jia Xu, Ruiyang Yu, Lu Liu, Xueyi Zhang, Zijing Xie, Jiahui Fan, Luna Zhang
As a major agricultural hazard, drought frequently occurs when reduced precipitation results in a continuously propagating soil moisture (SM) deficit. Assessment of a high-spatial-resolution SM-derived drought index is crucial for monitoring agricultural drought. In this study, we generated a downscaled random forest SM dataset (RF-SM) and calculated the soil water deficit index (RF-SM-SWDI) at 30 m for agricultural drought monitoring. The results showed that the RF-SM dataset exhibited better consistency with in situ SM observations in the detection of extremes than did the SM products, including SMAP, SMOS, NCA-LDAS, and ESA CCI, for different land cover types in the U.S. and yielded a satisfactory performance, with the lowest root mean square error (RMSE, below 0.055 m³/m³) and the highest coefficient of determination (R², above 0.8) for most observation networks, based on the number of sites. A vegetation health index (VHI), derived from a Landsat 8 optical remote-sensing dataset, was also generated for comparison. The results illustrated that the RF-SM-SWDI and VHI exhibited high correlations (R ≥ 0.5) at approximately 70% of the stations. Furthermore, we mapped spatiotemporal drought-monitoring indices in California. The RF-SM-SWDI provided drought conditions with more detailed spatial information than did the short-term drought blend (STDB) released by the U.S. Drought Monitor, which demonstrated the expected response of seasonal drought trends, while differences from the VHI were observed mainly in forest areas. Therefore, downscaled SM and SWDI, with a spatial resolution of 30 m, are promising for monitoring agricultural field drought within different contexts, and additional reliable factors could be incorporated to better guide agricultural management practices.
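The soil water deficit index itself has a simple closed form. In the common formulation (Martínez-Fernández et al.), SWDI compares soil moisture θ with field capacity θ_FC and wilting point θ_WP as SWDI = 10 (θ − θ_FC)/(θ_FC − θ_WP); the paper’s exact parameterization of RF-SM-SWDI may differ, so treat this as the standard definition rather than the authors’ implementation.

```python
def swdi(theta, theta_fc, theta_wp):
    """Soil Water Deficit Index: 0 at field capacity, -10 at wilting point.
    Negative values indicate water deficit; positive values, excess water.
    All three arguments are volumetric soil moisture in m³/m³."""
    return 10.0 * (theta - theta_fc) / (theta_fc - theta_wp)
```

Applied per pixel to the 30 m RF-SM grids with soil-property maps supplying θ_FC and θ_WP, this yields the drought maps compared against the VHI and STDB above.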
Jing Ning, Yunjun Yao, Joshua B. Fisher, Yufu Li, Xiaotong Zhang, Bo Jiang, Jia Xu, Ruiyang Yu, Lu Liu, Xueyi Zhang, Zijing Xie, Jiahui Fan, Luna Zhang. "Soil Moisture-Derived SWDI at 30 m Based on Multiple Satellite Datasets for Agricultural Drought Monitoring." Remote Sensing, 2024-09-11. doi:10.3390/rs16183372
Jianye Yuan, Haofei Wang, Minghao Li, Xiaohan Wang, Weiwei Song, Song Li, Wei Gong
Fire detection is crucial due to the exorbitant annual toll on both human lives and the economy resulting from fire-related incidents. To enhance forest fire detection in complex environments, we propose a new algorithm called FD-Net. Firstly, to improve detection performance, we introduce a Fire Attention (FA) mechanism that utilizes the position information from feature maps. Secondly, to prevent geometric distortion during image cropping, we propose a Three-Scale Pooling (TSP) module. Lastly, we fine-tune the YOLOv5 network and incorporate a new Fire Fusion (FF) module to enhance the network’s precision in identifying fire targets. Through qualitative and quantitative comparisons, we found that FD-Net outperforms current state-of-the-art algorithms on both fire and fire-and-smoke datasets. This further demonstrates FD-Net’s effectiveness for application in fire detection.
Jianye Yuan, Haofei Wang, Minghao Li, Xiaohan Wang, Weiwei Song, Song Li, Wei Gong. "FD-Net: A Single-Stage Fire Detection Framework for Remote Sensing in Complex Environments." Remote Sensing, 2024-09-11. doi:10.3390/rs16183382
Pallavi Govekar, Christopher Griffin, Owen Embury, Jonathan Mittaz, Helen Mary Beggs, Christopher J. Merchant
As a contribution to the Integrated Marine Observing System (IMOS), the Bureau of Meteorology introduces new reprocessed Himawari-8 satellite-derived Sea Surface Temperature (SST) products. A radiative transfer model and a Bayesian cloud-clearing method are used to retrieve SSTs every 10 min from the geostationary satellite Himawari-8. An empirical Sensor Specific Error Statistics (SSES) model, introduced herein, is applied to calculate bias and standard deviation for the retrieved SSTs. The SST retrieval and compositing method, along with validation results, are discussed. The monthly statistics for comparisons of Himawari-8 Level 2 Product (L2P) skin SST against in situ SST quality monitoring (iQuam) in situ SST datasets, adjusted for thermal stratification, showed a mean bias of −0.2/−0.1 K and a standard deviation of 0.4–0.7 K for daytime/night-time after bias correction, where satellite zenith angles were less than 60° and the quality level was greater than 2. For ease of use, these native-resolution SST data have been composited, using a method introduced herein that retains retrieved measurements, into hourly, 4-hourly, and daily SST products projected onto the rectangular IMOS 0.02 degree grid. On average, 4-hourly products cover ≈10% more of the IMOS domain, while one-night composites cover ≈25% more of the IMOS domain than a typical 1 h composite. All available Himawari-8 data have been reprocessed for the September 2015–December 2022 period. The 10 min temporal resolution of the newly developed Himawari-8 SST data enables a daily composite with enhanced spatial coverage, effectively filling in SST gaps caused by transient cloud occlusion. Anticipated benefits of the new Himawari-8 products include enhanced data quality for applications like IMOS OceanCurrent and investigations into marine thermal stress, marine heatwaves, and ocean upwelling in near-coastal regions.
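A compositing step that "retains retrieved measurements" (rather than averaging them) can be illustrated with a simple per-pixel selection rule: for each grid cell, keep the 10-min retrieval with the highest quality level, breaking ties in favour of the most recent observation. This is an illustrative sketch under that assumption, not the Bureau’s documented algorithm.

```python
import numpy as np

def composite(sst_stack, quality_stack):
    """Per-pixel composite of a (T, H, W) stack of 10-min SST retrievals:
    select the retrieval with the highest quality level; on ties, the most
    recent one wins. NaN marks missing retrievals (e.g. cloud)."""
    # missing retrievals can never be selected
    q = np.where(np.isnan(sst_stack), -np.inf, quality_stack.astype(float))
    # a tiny time-increasing bonus makes later retrievals win quality ties
    t = np.arange(q.shape[0], dtype=float).reshape(-1, 1, 1)
    best = np.argmax(q + 1e-6 * t, axis=0)
    return np.take_along_axis(sst_stack, best[None], axis=0)[0]
```

Because a selected value is always an actual retrieval, the composite preserves the native SSES bias and standard deviation attached to each measurement, which an averaged product would not.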
"Himawari-8 Sea Surface Temperature Products from the Australian Bureau of Meteorology". Remote Sensing, 2024-09-11, doi: 10.3390/rs16183381.
Wang Li, Fangsong Yang, Jiayi Yang, Renzhong Zhang, Juan Lin, Dongsheng Zhao, Craig M. Hancock
The atmospheric gravity waves (AGWs) generated by severe typhoons can facilitate the transfer of energy from the troposphere to the ionosphere, resulting in medium-scale traveling ionospheric disturbances (MSTIDs). However, the complex three-dimensional nature of MSTIDs over oceanic regions presents challenges for detection using ground-based Global Navigation Satellite System (GNSS) networks. This study employs a hybrid approach combining space-based and ground-based techniques to investigate the spatiotemporal characteristics of ionospheric perturbations during Typhoon Doksuri. Plane maps depict significant plasma fluctuations extending outward from the typhoon’s gale wind zone on 24 July, reaching distances of up to 1800 km from the typhoon’s center, while space weather conditions remained relatively calm. These ionospheric perturbations propagated at velocities between 173 m/s and 337 m/s, consistent with AGW features and associated propagation speeds. Vertical mapping reveals that energy originating from Typhoon Doksuri propagated upward through a 500 km layer, resulting in substantial enhancements of plasma density and temperature in the topside ionosphere. Notably, the topside horizontal density gradient was 1.5 to 2 times greater than that observed in the bottom-side ionosphere. Both modeling and observational data convincingly demonstrate that the weak background winds favored the generation of AGWs associated with Typhoon Doksuri, influencing the development of distinct MSTIDs.
"Morphological Features of Severe Ionospheric Weather Associated with Typhoon Doksuri in 2023". Remote Sensing, 2024-09-11, doi: 10.3390/rs16183375.
Kunbo Liu, Shuai Liu, Kai Tan, Mingbo Yin, Pengjie Tao
Salt marshes provide diverse habitats for a wide range of creatures and play a key defensive and buffering role in protecting coastal communities from extreme marine hazards. Accurately obtaining the terrains of salt marshes is crucial for the comprehensive management and conservation of coastal resources and ecology. However, dense vegetation coverage, periodic tide inundation, and pervasive ditch distribution create challenges for measuring or estimating salt marsh terrains. These environmental factors make most existing techniques and methods ineffective in terms of data acquisition resolution, accuracy, and efficiency. Drone multi-line light detection and ranging (LiDAR) offers a fresh perspective on 3D point cloud data acquisition and shows great potential for accurately deriving salt marsh terrains. The prerequisite for terrain characterization from drone multi-line LiDAR data is point cloud filtering, which means that ground points must be discriminated from non-ground points. Existing filtering methods typically rely on either LiDAR geometric or intensity features. These methods may not perform well in salt marshes with dense, diverse, and complex vegetation. This study proposes a new filtering method for drone multi-line LiDAR point clouds in salt marshes based on the artificial neural network (ANN) machine learning model. First, a series of spatial–spectral features at the individual (e.g., elevation, distance, and intensity) and neighborhood (e.g., eigenvalues, linearity, and sphericity) scales are derived from the original data. Then, the derived spatial–spectral features are screened to remove correlated and redundant ones, optimizing the performance of the ANN model. Finally, the retained features are integrated as input variables in the ANN model to characterize their nonlinear relationships with the point categories (ground or non-ground) from different perspectives.
A case study of two typical salt marshes at the mouth of the Yangtze River, using a drone 6-line LiDAR, demonstrates the effectiveness and generalizability of the proposed filtering method. The average G-mean and AUC achieved were 0.9441 and 0.9450, respectively, outperforming traditional geometric-information-based methods, other advanced machine learning methods, and the deep learning model RandLA-Net. Additionally, integrating spatial–spectral features at both the individual and neighborhood scales yields better filtering outcomes than using either single-type or single-scale features. The proposed method offers an innovative strategy for drone LiDAR point cloud filtering and salt marsh terrain derivation through the deep integration of geometric and radiometric data.
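The pipeline described above, per-point features plus eigenvalue-based neighborhood shape features feeding an ANN ground/non-ground classifier, can be sketched as follows. This is a minimal illustration under stated assumptions (scikit-learn's MLPClassifier and SciPy's KD-tree stand in for the paper's ANN and neighborhood search; the feature subset, neighborhood size, and network architecture are illustrative, not the authors' configuration).

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def neighborhood_features(points, k=10):
    """Eigenvalue-based shape features per point, from the 3D covariance
    of each point's k nearest neighbours: linearity (l1-l2)/l1 and
    sphericity l3/l1, with eigenvalues sorted l1 >= l2 >= l3."""
    tree = cKDTree(points[:, :3])
    _, idx = tree.query(points[:, :3], k=k)
    feats = np.empty((len(points), 2))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb, :3].T)
        w = np.maximum(np.sort(np.linalg.eigvalsh(cov))[::-1], 1e-12)
        feats[i] = [(w[0] - w[1]) / w[0], w[2] / w[0]]
    return feats

def train_ground_filter(points, intensity, labels, k=10):
    """Fit an MLP separating ground (0) from non-ground (1) points using
    individual features (elevation, intensity) plus neighborhood shape
    features, after standardizing the feature columns."""
    X = np.column_stack([points[:, 2], intensity,
                         neighborhood_features(points, k)])
    clf = make_pipeline(
        StandardScaler(),
        MLPClassifier(hidden_layer_sizes=(32, 16),
                      max_iter=500, random_state=0))
    clf.fit(X, labels)
    return clf
```

In practice the filtered ground points would then be interpolated into a digital terrain model; the feature-screening step from the abstract would sit between feature derivation and `fit`, dropping columns that are strongly correlated with ones already kept.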
"ANN-Based Filtering of Drone LiDAR in Coastal Salt Marshes Using Spatial–Spectral Features". Remote Sensing, 2024-09-11, doi: 10.3390/rs16183373.