Justyna Górniak-Zimroz, K. Romańczukiewicz, Magdalena Sitarska, A. Szrek
Light pollution significantly interferes with animal and human life and should, therefore, be included among the factors that threaten ecosystems. The main aim of this research is to develop a methodology for monitoring environmental and social elements subjected to light pollution in anthropogenic areas. This research is based on yearly and monthly composite images acquired from the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-Orbiting Partnership (Suomi NPP) satellite; land cover data from the CORINE Land Cover (CLC) program; and environmental data from the European Environment Agency (EEA) and the World Database on Protected Areas (WDPA). The processing of input data for further analyses, the testing of the methodology, and the interpretation of the final results were performed in GIS software (ArcGIS Pro). Light pollution in the investigated area was analyzed with the use of maps generated for the years 2014 and 2019. The environmental and social elements were spatially identified in five light pollution classes. The research results demonstrate that the proposed methodology allows for the identification of environmental and social elements that emit light, as well as those that are subjected to light pollution. The methodology also allows changes in light pollution (decreases or increases in intensity) to be observed over time. Owing to the use of publicly available data, the methodology can be applied to light pollution monitoring as part of spatial planning in anthropogenic areas. The proposed methodology makes it possible to cover the area exposed to light pollution and to observe, in near real time, the environmental and social changes resulting from reductions in light emitted by anthropogenic areas.
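As a rough illustration of the classification step, the sketch below bins VIIRS radiance values into five light-pollution classes; the thresholds are hypothetical placeholders, not the class boundaries used in the paper.

```python
import numpy as np

# Hypothetical radiance thresholds (nW/cm^2/sr) separating five
# light-pollution classes; the paper's actual boundaries may differ.
THRESHOLDS = [0.5, 2.0, 10.0, 30.0]

def classify_light_pollution(radiance):
    """Assign each VIIRS radiance pixel to one of five classes (1..5)."""
    radiance = np.asarray(radiance, dtype=float)
    # np.digitize returns 0..4 for the five intervals; shift to 1..5
    return np.digitize(radiance, THRESHOLDS) + 1

# Example: a tiny 2x2 radiance grid spanning dark sky to bright urban core
grid = np.array([[0.1, 1.5], [12.0, 45.0]])
print(classify_light_pollution(grid))  # [[1 2] [4 5]]
```

Per-pixel class maps like this can then be intersected with CLC land cover polygons and WDPA protected-area boundaries in a GIS to identify the exposed elements.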
{"title":"Light-Pollution-Monitoring Method for Selected Environmental and Social Elements","authors":"Justyna Górniak-Zimroz, K. Romańczukiewicz, Magdalena Sitarska, A. Szrek","doi":"10.3390/rs16050774","DOIUrl":"https://doi.org/10.3390/rs16050774","url":null,"abstract":"Light pollution significantly interferes with animal and human life and should, therefore, be included in the factors that threaten ecosystems. The main aim of this research is to develop a methodology for monitoring environmental and social elements subjected to light pollution in anthropogenic areas. This research is based on yearly and monthly photographs acquired from the Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-Orbiting Partnership (Suomi NPP) satellite; land cover data from the CORINE Land Cover (CLC) program; and environmental data from the European Environment Agency (EEA) and the World Database on Protected Areas (WDPA). The processing of input data for further analyses, the testing of the methodology and the interpretation of the final results were performed in GIS-type software (ArcGIS Pro). Light pollution in the investigated area was analyzed with the use of maps generated for the years 2014 and 2019. The environmental and social elements were spatially identified in five light pollution classes. The research results demonstrate that the proposed methodology allows for the identification of environmental and social elements that emit light, as well as those that are subjected to light pollution. The methodology used in this work allows us to observe changes resulting from light pollution (decreasing or increasing the intensity). Owing to the use of publicly available data, the methodology can be applied to light pollution monitoring as part of spatial planning in anthropogenic areas. 
The proposed methodology makes it possible to cover the area exposed to light pollution and to observe (almost online) the environmental and social changes resulting from reductions in light emitted by anthropogenic areas.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"43 4","pages":"774"},"PeriodicalIF":0.0,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140440629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Muhammad A. A. Abdelgawad, Ray C. C. Cheung, Hong Yan
Hyperspectral imaging captures detailed spectral data for remote sensing. However, due to the limited spatial resolution of hyperspectral sensors, each pixel of a hyperspectral image (HSI) may contain information from multiple materials. Although the hyperspectral unmixing (HU) process involves estimating the number of endmembers, identifying pure spectral components, and estimating pixel abundances, existing algorithms mostly focus on just one or two of these tasks. Blind source separation (BSS) algorithms based on nonnegative matrix factorization (NMF) identify endmembers and their abundances at each pixel of an HSI simultaneously. Although they perform well, their factorization results are unstable, computationally costly, and difficult to interpret in terms of the original HSI. CUR matrix decomposition selects specific columns and rows from a dataset to represent it as a product of three small submatrices, resulting in an interpretable low-rank factorization. In this paper, we propose a new blind HU framework based on CUR factorization, called CUR-HU, that performs the entire HU process by exploiting the low-rank structure of given HSIs. CUR-HU incorporates several techniques to perform the HU process with a performance comparable to state-of-the-art methods but with higher computational efficiency. We adopt a deterministic sampling method to select the most informative pixels and spectral components in HSIs. We use an incremental QR decomposition method to reduce computational complexity and estimate the number of endmembers. Various experiments on synthetic and real HSIs are conducted to evaluate the performance of CUR-HU. CUR-HU performs comparably to state-of-the-art methods in estimating the number of endmembers and the abundance maps, but it outperforms other methods in endmember estimation and computational efficiency, achieving a 9.4- to 249.5-fold speedup over different methods on different real HSIs.
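The core CUR idea can be sketched in a few lines; here column/row norms stand in for the paper's deterministic sampling scheme, and `U` is computed from pseudoinverses so that `C @ U @ R` approximates the data matrix.

```python
import numpy as np

def cur_decompose(A, k):
    """Sketch of CUR factorization: keep the k columns and k rows with
    the largest norms (a simple deterministic stand-in for the paper's
    sampling method), then solve for the small core matrix U."""
    col_idx = np.argsort(np.linalg.norm(A, axis=0))[-k:]
    row_idx = np.argsort(np.linalg.norm(A, axis=1))[-k:]
    C = A[:, col_idx]   # actual columns of A (interpretable spectra)
    R = A[row_idx, :]   # actual rows of A
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

# For an exactly rank-3 matrix, k = 3 recovers it almost perfectly.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 40))
C, U, R = cur_decompose(A, 3)
err = np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A)
```

Because `C` and `R` are actual columns and rows of the data, the factors remain interpretable as real spectra and pixels, which is the advantage over NMF-based BSS noted above.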
{"title":"Efficient Blind Hyperspectral Unmixing Framework Based on CUR Decomposition (CUR-HU)","authors":"Muhammad A. A. Abdelgawad, Ray C. C. Cheung, Hong Yan","doi":"10.3390/rs16050766","DOIUrl":"https://doi.org/10.3390/rs16050766","url":null,"abstract":"Hyperspectral imaging captures detailed spectral data for remote sensing. However, due to the limited spatial resolution of hyperspectral sensors, each pixel of a hyperspectral image (HSI) may contain information from multiple materials. Although the hyperspectral unmixing (HU) process involves estimating endmembers, identifying pure spectral components, and estimating pixel abundances, existing algorithms mostly focus on just one or two tasks. Blind source separation (BSS) based on nonnegative matrix factorization (NMF) algorithms identify endmembers and their abundances at each pixel of HSI simultaneously. Although they perform well, the factorization results are unstable, require high computational costs, and are difficult to interpret from the original HSI. CUR matrix decomposition selects specific columns and rows from a dataset to represent it as a product of three small submatrices, resulting in interpretable low-rank factorization. In this paper, we propose a new blind HU framework based on CUR factorization called CUR-HU that performs the entire HU process by exploiting the low-rank structure of given HSIs. CUR-HU incorporates several techniques to perform the HU process with a performance comparable to state-of-the-art methods but with higher computational efficiency. We adopt a deterministic sampling method to select the most informative pixels and spectrum components in HSIs. We use an incremental QR decomposition method to reduce computation complexity and estimate the number of endmembers. Various experiments on synthetic and real HSIs are conducted to evaluate the performance of CUR-HU. 
CUR-HU performs comparably to state-of-the-art methods for estimating the number of endmembers and abundance maps, but it outperforms other methods for estimating the endmembers and the computational efficiency. It has a 9.4 to 249.5 times speedup over different methods for different real HSIs.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"3 8","pages":"766"},"PeriodicalIF":0.0,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140440725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Qi Zhang, Wenjin Sun, Huaihai Guo, Changming Dong, Hong Zheng
In recent decades, satellites have played a pivotal role in observing ocean dynamics, providing diverse datasets with varying spatial resolutions. Notably, within these datasets, sea surface height (SSH) data typically exhibit low resolution, while sea surface temperature (SST) data have significantly higher resolution. This study introduces a Transfer Learning-enhanced Generative Adversarial Network (TLGAN) for reconstructing high-resolution SSH fields through the fusion of heterogeneous SST data. In contrast to alternative deep learning approaches that directly stack SSH and SST data as input channels in neural networks, our methodology utilizes bifurcated blocks comprising a Residual Dense Module and a Residual Feature Distillation Module to extract features from SSH and SST data, respectively. A pixelshuffle module-based upscaling block is then concatenated to map these features into a common latent space. Employing a hybrid strategy involving adversarial training and transfer learning, we overcome the limitation that SST and SSH data must share the same time dimension and achieve significant resolution enhancement in SSH reconstruction. Experimental results demonstrate that, compared to interpolation methods, TLGAN effectively reduces reconstruction errors, and that fusing SST data significantly helps generate more realistic and physically plausible results.
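The pixelshuffle upscaling step mentioned above rearranges channels into spatial resolution; a minimal NumPy version (matching the usual sub-pixel convolution layout) is:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) feature map into (C, H*r, W*r),
    as in sub-pixel convolution upscaling: each group of r^2 channels
    fills an r-by-r spatial block."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)    # interleave: (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

# 4 channels of a 1x1 map become one 2x2 map
out = pixel_shuffle(np.arange(4).reshape(4, 1, 1), 2)
print(out)  # [[[0 1] [2 3]]]
```

In the TLGAN architecture this operation lets the network emit low-resolution features with many channels and trade them for spatial resolution in the SSH output.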
{"title":"A Transfer Learning-Enhanced Generative Adversarial Network for Downscaling Sea Surface Height through Heterogeneous Data Fusion","authors":"Qi Zhang, Wenjin Sun, Huaihai Guo, Changming Dong, Hong Zheng","doi":"10.3390/rs16050763","DOIUrl":"https://doi.org/10.3390/rs16050763","url":null,"abstract":"In recent decades, satellites have played a pivotal role in observing ocean dynamics, providing diverse datasets with varying spatial resolutions. Notably, within these datasets, sea surface height (SSH) data typically exhibit low resolution, while sea surface temperature (SST) data have significantly higher resolution. This study introduces a Transfer Learning-enhanced Generative Adversarial Network (TLGAN) for reconstructing high-resolution SSH fields through the fusion of heterogeneous SST data. In contrast to alternative deep learning approaches that involve directly stacking SSH and SST data as input channels in neural networks, our methodology utilizes bifurcated blocks comprising Residual Dense Module and Residual Feature Distillation Module to extract features from SSH and SST data, respectively. A pixelshuffle module-based upscaling block is then concatenated to map these features into a common latent space. Employing a hybrid strategy involving adversarial training and transfer learning, we overcome the limitation that SST and SSH data should share the same time dimension and achieve significant resolution enhancement in SSH reconstruction. Experimental results demonstrate that, when compared to interpolation method, TLGAN effectively reduces reconstruction errors and fusing SST data could significantly enhance in generating more realistic and physically plausible results.","PeriodicalId":20944,"journal":{"name":"Remote. 
Sens.","volume":"24 4","pages":"763"},"PeriodicalIF":0.0,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140441288","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Changjing Wang, Hongmin Zhou, Guodong Zhang, Jianguo Duan, Moxiao Lin
Owing to advancements in satellite remote sensing technology, the acquisition of global land surface parameters, notably the leaf area index (LAI), has become increasingly accessible. The Sentinel-2 (S2) satellite plays an important role in the monitoring of ecological environments and resource management. The prevalent use of the 20 m spatial resolution bands in S2-based inversion models imposes significant limitations on the applicability of S2 data in applications requiring finer spatial resolution. Furthermore, although a substantial body of research on LAI retrieval using S2 data concentrates on agricultural landscapes, studies dedicated to forest ecosystems, although increasing, remain relatively less prevalent. This study aims to establish a viable methodology for retrieving 10 m resolution LAI data in forested regions. An empirical model based on the soil-adjusted vegetation index (SAVI), a back-propagation neural network optimized by simulated annealing (SA-BP), and a variational heteroscedastic Gaussian process regression (VHGPR) model are established in this experiment based on measured LAI data and the corresponding 10 m spatial resolution S2 surface reflectance data in the Saihanba Forestry Center (SFC). The LAI retrieval performance of the three models is then validated using field data, and the error sources of the best-performing model, VHGPR (R2 of 0.8696 and RMSE of 0.5078), are further analyzed. Moreover, the VHGPR model stands out for its capacity to quantify the uncertainty in LAI estimation, presenting a notable advantage in assessing the significance of input data, eliminating redundant bands, and being well suited for uncertainty estimation. This feature is particularly valuable in generating accurate LAI products, especially in regions characterized by diverse forest compositions.
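For reference, the standard SAVI formula underlying the empirical model is easy to state in code; the soil-brightness factor L = 0.5 is the conventional default, and the paper's actual LAI fit would be calibrated on top of such an index.

```python
def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index:
    SAVI = (NIR - Red) / (NIR + Red + L) * (1 + L).
    L = 0.5 is the conventional soil-brightness correction."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

# Typical vegetated-pixel reflectances: NIR = 0.45, Red = 0.15
print(round(savi(0.45, 0.15), 4))  # 0.4091
```

An empirical LAI model would then regress field-measured LAI against SAVI values computed from the 10 m S2 bands.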
{"title":"High Spatial Resolution Leaf Area Index Estimation for Woodland in Saihanba Forestry Center, China","authors":"Changjing Wang, Hongmin Zhou, Guodong Zhang, Jianguo Duan, Moxiao Lin","doi":"10.3390/rs16050764","DOIUrl":"https://doi.org/10.3390/rs16050764","url":null,"abstract":"Owing to advancements in satellite remote sensing technology, the acquisition of global land surface parameters, notably, the leaf area index (LAI), has become increasingly accessible. The Sentinel-2 (S2) satellite plays an important role in the monitoring of ecological environments and resource management. The prevalent use of the 20 m spatial resolution band in S2-based inversion models imposes significant limitations on the applicability of S2 data in applications requiring finer spatial resolution. Furthermore, although a substantial body of research on LAI retrieval using S2 data concentrates on agricultural landscapes, studies dedicated to forest ecosystems, although increasing, remain relatively less prevalent. This study aims to establish a viable methodology for retrieving 10 m resolution LAI data in forested regions. The empirical model of the soil adjusted vegetation index (SAVI), the backpack neural network based on simulated annealing (SA-BP) algorithm, and the variational heteroscedastic Gaussian process regression (VHGPR) model are established in this experiment based on the LAI data measured and the corresponding 10 m spatial resolution S2 satellite surface reflectance data in the Saihanba Forestry Center (SFC). The LAI retrieval performance of the three models is then validated using field data, and the error sources of the best performing VHGPR models (R2 of 0.8696 and RMSE of 0.5078) are further analyzed. Moreover, the VHGPR model stands out for its capacity to quantify the uncertainty in LAI estimation, presenting a notable advantage in assessing the significance of input data, eliminating redundant bands, and being well suited for uncertainty estimation. 
This feature is particularly valuable in generating accurate LAI products, especially in regions characterized by diverse forest compositions.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"10 14","pages":"764"},"PeriodicalIF":0.0,"publicationDate":"2024-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140441561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the increasing demand for unmanned aerial vehicles (UAVs), the number of UAVs in the airspace and the risk of mid-air collisions caused by UAVs are increasing. Therefore, detect and avoid (DAA) technology for UAVs has become a crucial element for mid-air collision avoidance. This study presents a collision avoidance approach for UAVs equipped with a monocular camera to detect small fixed-wing intruders. The proposed system can detect a UAV of any size over a long range. The development process consists of three phases: long-distance object detection, object region estimation, and collision risk assessment and collision avoidance. For long-distance object detection, an optical flow-based background subtraction method is utilized to detect an intruder far away from the host. A mask region-based convolutional neural network (Mask R-CNN) model is trained to estimate the region of the intruder in the image. Finally, the collision risk assessment adopts the area expansion rate and bearing angle of the intruder in the images to conduct mid-air collision avoidance based on visual flight rules (VFRs) and conflict areas. The proposed collision avoidance approach is verified by both simulations and experiments. The results show that the system can successfully detect fixed-wing intruders of different sizes, estimate their regions, and assess the risk of collision at least 10 s before the expected collision.
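One common way to turn an area expansion rate into a collision-risk cue is a time-to-collision estimate; the helper below is a hypothetical sketch assuming constant closing speed (apparent area scales with 1/distance²), not the paper's exact risk formula.

```python
import math

def time_to_collision(area_prev, area_curr, dt):
    """Estimate time-to-collision from the growth of the intruder's
    image area between two frames taken dt seconds apart.
    Apparent linear size scales with 1/distance, so
    sqrt(area_curr/area_prev) = d_prev/d_curr; for constant closing
    speed this gives TTC = dt / (sqrt(area_curr/area_prev) - 1).
    Hypothetical helper for illustration only."""
    ratio = math.sqrt(area_curr / area_prev)
    if ratio <= 1.0:
        return math.inf  # intruder not growing: not on a closing course
    return dt / (ratio - 1.0)

# Intruder's image area quadruples in 1 s -> distance halved -> TTC = 1 s
print(time_to_collision(100.0, 400.0, 1.0))  # 1.0
```

Combining such an estimate with the intruder's bearing angle would then let the avoidance logic choose a VFR-style evasive turn.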
{"title":"Vision-Based Mid-Air Object Detection and Avoidance Approach for Small Unmanned Aerial Vehicles with Deep Learning and Risk Assessment","authors":"Ying-Chih Lai, Tzu-Yun Lin","doi":"10.3390/rs16050756","DOIUrl":"https://doi.org/10.3390/rs16050756","url":null,"abstract":"With the increasing demand for unmanned aerial vehicles (UAVs), the number of UAVs in the airspace and the risk of mid-air collisions caused by UAVs are increasing. Therefore, detect and avoid (DAA) technology for UAVs has become a crucial element for mid-air collision avoidance. This study presents a collision avoidance approach for UAVs equipped with a monocular camera to detect small fixed-wing intruders. The proposed system can detect any size of UAV over a long range. The development process consists of three phases: long-distance object detection, object region estimation, and collision risk assessment and collision avoidance. For long-distance object detection, an optical flow-based background subtraction method is utilized to detect an intruder far away from the host. A mask region-based convolutional neural network (Mask R-CNN) model is trained to estimate the region of the intruder in the image. Finally, the collision risk assessment adopts the area expansion rate and bearing angle of the intruder in the images to conduct mid-air collision avoidance based on visual flight rules (VFRs) and conflict areas. The proposed collision avoidance approach is verified by both simulations and experiments. The results show that the system can successfully detect different sizes of fixed-wing intruders, estimate their regions, and assess the risk of collision at least 10 s in advance before the expected collision would happen.","PeriodicalId":20944,"journal":{"name":"Remote. 
Sens.","volume":"8 2","pages":"756"},"PeriodicalIF":0.0,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140442584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we obtained the intensity and orbital angular momentum (OAM) spectral distribution of the scattering fields of vortex electromagnetic beams illuminating electrically large targets composed of different materials. We used the angular spectral decomposition method to decompose a vortex beam into plane waves in the spectral domain at different elevations and azimuths. We combined this method with the physical optics algorithm to calculate the scattering field distribution. The OAM spectra of the scattering field along different observation radii were analyzed using the spiral spectrum expansion method. The numerical results indicate that for beams with different parameters (such as polarization, topological charge, half-cone angle, and frequency) and targets with different characteristics (such as composition), the scattering field intensity distribution and OAM spectral characteristics varied considerably. When the beam parameters change, the results of scattering from different materials show similar changing trends. Compared with beams scattered by uncoated metal and dielectric targets, the scattering field of the coating target can better maintain the shape and OAM mode of beams from the incident field. The scattering characteristics of metal targets were the most sensitive to beam-parameter changes. The relationship between the beam parameters, target parameters, the scattering field intensity, and the OAM spectra of the scattering field was constructed, confirming that the spiral spectrum of the scattering field carries the target information. These findings can be used in remote sensing engineering to supplement existing radar imaging, laying the foundation for further identification of beam or target parameters.
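The spiral spectrum expansion amounts to projecting the field sampled on a ring onto exp(ilφ) modes; a minimal numerical version, with uniform azimuthal sampling assumed, is:

```python
import numpy as np

def oam_spectrum(field_on_ring, l_max):
    """Spiral-spectrum expansion: project a complex field sampled
    uniformly in azimuth on a ring onto exp(i*l*phi) modes and return
    the normalized power in modes l = -l_max .. l_max.
    Coefficient: c_l = (1/2*pi) * integral E(phi) exp(-i*l*phi) dphi."""
    n = len(field_on_ring)
    phi = 2 * np.pi * np.arange(n) / n
    ls = np.arange(-l_max, l_max + 1)
    coeffs = np.array([np.mean(field_on_ring * np.exp(-1j * l * phi))
                       for l in ls])
    power = np.abs(coeffs) ** 2
    return ls, power / power.sum()

# A pure topological-charge-3 vortex puts all power in the l = 3 mode.
n = 128
phi = 2 * np.pi * np.arange(n) / n
ls, p = oam_spectrum(np.exp(1j * 3 * phi), l_max=5)
```

Applying this projection to the scattered field along different observation radii is how the mode purity and OAM redistribution after scattering can be quantified.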
{"title":"Scattering Field Intensity and Orbital Angular Momentum Spectral Distribution of Vortex Electromagnetic Beams Scattered by Electrically Large Targets Comprising Different Materials","authors":"Minghao Sun, Song-hua Liu, Lixin Guo","doi":"10.3390/rs16050754","DOIUrl":"https://doi.org/10.3390/rs16050754","url":null,"abstract":"In this study, we obtained the intensity and orbital angular momentum (OAM) spectral distribution of the scattering fields of vortex electromagnetic beams illuminating electrically large targets composed of different materials. We used the angular spectral decomposition method to decompose a vortex beam into plane waves in the spectral domain at different elevations and azimuths. We combined this method with the physical optics algorithm to calculate the scattering field distribution. The OAM spectra of the scattering field along different observation radii were analyzed using the spiral spectrum expansion method. The numerical results indicate that for beams with different parameters (such as polarization, topological charge, half-cone angle, and frequency) and targets with different characteristics (such as composition), the scattering field intensity distribution and OAM spectral characteristics varied considerably. When the beam parameters change, the results of scattering from different materials show similar changing trends. Compared with beams scattered by uncoated metal and dielectric targets, the scattering field of the coating target can better maintain the shape and OAM mode of beams from the incident field. The scattering characteristics of metal targets were the most sensitive to beam-parameter changes. The relationship between the beam parameters, target parameters, the scattering field intensity, and the OAM spectra of the scattering field was constructed, confirming that the spiral spectrum of the scattering field carries the target information. 
These findings can be used in remote sensing engineering to supplement existing radar imaging, laying the foundation for further identification of beam or target parameters.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"97 ","pages":"754"},"PeriodicalIF":0.0,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140445416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Randa Qashoa, Vithurshan Suthakar, Gabriel Chianelli, Perushan Kunalakantha, Regina S. K. Lee
As the number of resident space objects (RSOs) orbiting Earth increases, the risk of collision increases, and mitigating this risk requires the detection, identification, characterization, and tracking of as many RSOs as possible in view at any given time, an area of research referred to as Space Situational Awareness (SSA). In order to develop algorithms for RSO detection and characterization, starfield images containing RSOs are needed. Such images can be obtained from star trackers, which have traditionally been used for attitude determination. Despite their low resolution, star tracker images have the potential to be useful for SSA. Using star trackers in this dual-purpose manner offers the benefit of leveraging existing star tracker technology already in orbit, eliminating the need for new and costly equipment to be launched into space. In August 2022, we launched a CubeSat-class payload, Resident Space Object Near-space Astrometric Research (RSONAR), on a stratospheric balloon. The primary objective of the payload was to demonstrate a dual-purpose star tracker for imaging and analyzing RSOs from a space-like environment, aiding in the field of SSA. Building on the experience and lessons learned from the 2022 campaign, we developed a next-generation dual-purpose camera in a 4U-inspired CubeSat platform, named RSONAR II. This payload was successfully launched in August 2023. With the RSONAR II payload, we developed a real-time, multi-purpose imaging system with two main cameras of varying cost that can adjust imaging parameters in real-time to evaluate the effectiveness of each configuration for RSO imaging. We also performed onboard RSO detection and attitude determination to verify the performance of our algorithms. Additionally, we implemented a downlink capability to verify payload performance during flight. To add a wider variety of images for testing our algorithms, we altered the resolution of one of the cameras throughout the mission. 
In this paper, we demonstrate a dual-purpose star tracker system for future SSA missions and compare two different sensor options for RSO imaging.
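As a toy stand-in for onboard RSO detection in co-registered starfield frames, frame differencing cancels the static stars and leaves moving objects; the function below is an illustrative sketch, not the payload's actual pipeline.

```python
import numpy as np

def detect_moving_objects(frame_prev, frame_curr, thresh=5.0):
    """Flag pixels whose brightness changed between two co-registered
    starfield frames: static stars cancel in the difference image,
    while a moving RSO leaves a residual. Toy illustration only."""
    diff = np.abs(frame_curr.astype(float) - frame_prev.astype(float))
    return np.argwhere(diff > thresh)  # (row, col) of candidate RSOs

# A star at (2, 2) appears in both frames; an RSO appears only at (5, 5).
prev = np.zeros((10, 10)); prev[2, 2] = 100.0
curr = prev.copy(); curr[5, 5] = 50.0
print(detect_moving_objects(prev, curr))  # [[5 5]]
```

A real pipeline would additionally register the frames against a star catalog and link detections across frames into RSO tracks.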
{"title":"Technology Demonstration of Space Situational Awareness (SSA) Mission on Stratospheric Balloon Platform","authors":"Randa Qashoa, Vithurshan Suthakar, Gabriel Chianelli, Perushan Kunalakantha, Regina S. K. Lee","doi":"10.3390/rs16050749","DOIUrl":"https://doi.org/10.3390/rs16050749","url":null,"abstract":"As the number of resident space objects (RSOs) orbiting Earth increases, the risk of collision increases, and mitigating this risk requires the detection, identification, characterization, and tracking of as many RSOs as possible in view at any given time, an area of research referred to as Space Situational Awareness (SSA). In order to develop algorithms for RSO detection and characterization, starfield images containing RSOs are needed. Such images can be obtained from star trackers, which have traditionally been used for attitude determination. Despite their low resolution, star tracker images have the potential to be useful for SSA. Using star trackers in this dual-purpose manner offers the benefit of leveraging existing star tracker technology already in orbit, eliminating the need for new and costly equipment to be launched into space. In August 2022, we launched a CubeSat-class payload, Resident Space Object Near-space Astrometric Research (RSONAR), on a stratospheric balloon. The primary objective of the payload was to demonstrate a dual-purpose star tracker for imaging and analyzing RSOs from a space-like environment, aiding in the field of SSA. Building on the experience and lessons learned from the 2022 campaign, we developed a next-generation dual-purpose camera in a 4U-inspired CubeSat platform, named RSONAR II. This payload was successfully launched in August 2023. With the RSONAR II payload, we developed a real-time, multi-purpose imaging system with two main cameras of varying cost that can adjust imaging parameters in real-time to evaluate the effectiveness of each configuration for RSO imaging. 
We also performed onboard RSO detection and attitude determination to verify the performance of our algorithms. Additionally, we implemented a downlink capability to verify payload performance during flight. To add a wider variety of images for testing our algorithms, we altered the resolution of one of the cameras throughout the mission. In this paper, we demonstrate a dual-purpose star tracker system for future SSA missions and compare two different sensor options for RSO imaging.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"21 2","pages":"749"},"PeriodicalIF":0.0,"publicationDate":"2024-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140444420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kedong Wang, M. Jia, Xiaohai Zhang, Chuanpeng Zhao, Rong Zhang, Zongming Wang
Mangrove forests play a vital role in maintaining ecological balance in coastal regions. Accurately assessing changes in the ecosystem service value (ESV) of these mangrove forests requires more precise distribution data and an appropriate set of evaluation methods. In this study, we accurately mapped the spatial distribution and patterns of mangrove forests in Guangxi province in 2016 and 2020, using 10 m spatial resolution Sentinel-2 imagery, and conducted a comprehensive evaluation of the ESV provided by mangrove forests. The results showed that (1) from 2016 to 2020, mangrove forests in Guangxi demonstrated a positive development trend and were undergoing a process of recovery. The area of mangrove forests in Guangxi increased from 6245.15 ha in 2016 to 6750.01 ha in 2020, with a net increase of 504.81 ha, which was mainly concentrated in Lianzhou Bay, Tieshan Harbour, and Dandou Bay; (2) the ESV of mangrove forests was USD 363.78 million in 2016 and USD 390.74 million in 2020; (3) the values of fishery, soil conservation, wave absorption, and pollution purification comprise the largest proportions of the ESV of mangrove forests. This study provides valuable insights and information to enhance our understanding of the relationship between the spatial pattern of mangrove forests and their ecosystem service value.
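The headline figures can be sanity-checked with simple arithmetic (the area difference works out to about 504.86 ha against the reported 504.81 ha, presumably rounding in class-level sums). A quick sketch, with values taken directly from the abstract:

```python
# Back-of-envelope check of the reported figures (areas in ha, ESV in USD).
area = {"2016": 6245.15, "2020": 6750.01}
esv  = {"2016": 363.78e6, "2020": 390.74e6}

net_gain_ha = area["2020"] - area["2016"]          # ~504.9 ha net increase
esv_gain    = esv["2020"] - esv["2016"]            # ~26.96 million USD
per_ha      = {y: esv[y] / area[y] for y in area}  # unit value, USD/ha

print(f"net gain: {net_gain_ha:.2f} ha, ESV gain: {esv_gain/1e6:.2f} M USD")
print(f"per-ha ESV 2016: {per_ha['2016']:.0f}, 2020: {per_ha['2020']:.0f}")
```

The per-hectare unit value stays near USD 58,000/ha in both years, so the ESV gain tracks the area recovery almost proportionally.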
{"title":"Evaluating Ecosystem Service Value Changes in Mangrove Forests in Guangxi, China, from 2016 to 2020","authors":"Kedong Wang, M. Jia, Xiaohai Zhang, Chuanpeng Zhao, Rong Zhang, Zongming Wang","doi":"10.3390/rs16030494","DOIUrl":"https://doi.org/10.3390/rs16030494","url":null,"abstract":"Mangrove forests play a vital role in maintaining ecological balance in coastal regions. Accurately assessing changes in the ecosystem service value (ESV) of these mangrove forests requires more precise distribution data and an appropriate set of evaluation methods. In this study, we accurately mapped the spatial distribution and patterns of mangrove forests in Guangxi province in 2016 and 2020, using 10 m spatial resolution Sentinel-2 imagery, and conducted a comprehensive evaluation of the ESV provided by mangrove forests. The results showed that (1) from 2016 to 2020, mangrove forests in Guangxi demonstrated a positive development trend and were undergoing a process of recovery. The area of mangrove forests in Guangxi increased from 6245.15 ha in 2016 to 6750.01 ha in 2020, with a net increase of 504.81 ha, which was mainly concentrated in Lianzhou Bay, Tieshan Harbour, and Dandou Bay; (2) the ESV of mangrove forests was USD 363.78 million in 2016 and USD 390.74 million in 2020; (3) the values of fishery, soil conservation, wave absorption, and pollution purification comprise the largest proportions of the ESV of mangrove forests. This study provides valuable insights and information to enhance our understanding of the relationship between the spatial pattern of mangrove forests and their ecosystem service value.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"45 12","pages":"494"},"PeriodicalIF":0.0,"publicationDate":"2024-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140492263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-01-16, DOI: 10.48550/arXiv.2401.08787
Wenwen Li, Chia-Yu Hsu, Sizhe Wang, Yezhou Yang, Hyunho Lee, Anna K. Liljedahl, C. Witharana, Yili Yang, Brendan M. Rogers, S. Arundel, Matthew B. Jones, Kenton McHenry, Patricia Solis
This paper assesses trending AI foundation models, especially emerging computer vision foundation models, and their performance in natural landscape feature segmentation. While the term foundation model has quickly garnered interest from the geospatial domain, its definition remains vague. Hence, this paper first introduces AI foundation models and their defining characteristics. Built upon the tremendous success achieved by Large Language Models (LLMs) as foundation models for language tasks, this paper discusses the challenges of building foundation models for geospatial artificial intelligence (GeoAI) vision tasks. To evaluate the performance of large AI vision models, especially Meta’s Segment Anything Model (SAM), we implemented different instance segmentation pipelines that minimize the changes to SAM to leverage its power as a foundation model. A series of prompt strategies were developed to test SAM’s performance regarding its theoretical upper bound of predictive accuracy, zero-shot performance, and domain adaptability through fine-tuning. The analysis used two permafrost feature datasets, ice-wedge polygons and retrogressive thaw slumps, because (1) these landform features are more challenging to segment than man-made features due to their complicated formation mechanisms, diverse forms, and vague boundaries; (2) their presence and changes are important indicators of Arctic warming and climate change. The results show that although promising, SAM still has room for improvement to support AI-augmented terrain mapping. The spatial and domain generalizability of this finding is further validated using a more general dataset, EuroCrops, for agricultural field mapping. Finally, we discuss future research directions that strengthen SAM’s applicability in challenging geospatial domains.
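Predictive accuracy in segmentation evaluations of this kind is typically scored with mask intersection-over-union (IoU). A minimal sketch of that metric on hypothetical toy masks (not the paper's datasets):

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-Union between two boolean segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

# Toy example: ground truth is an 8x8 square; prediction is shifted by 2 px,
# mimicking the boundary displacement common on vague landform edges.
gt = np.zeros((16, 16), dtype=bool)
gt[4:12, 4:12] = True
pred = np.zeros_like(gt)
pred[6:14, 6:14] = True
print(round(mask_iou(pred, gt), 3))  # 36/92 ≈ 0.391
```

Even a 2-pixel shift of an otherwise perfect mask drops IoU below 0.4, which illustrates why features with vague boundaries, such as retrogressive thaw slumps, are punishing for zero-shot segmenters.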
{"title":"Segment Anything Model Can Not Segment Anything: Assessing AI Foundation Model's Generalizability in Permafrost Mapping","authors":"Wenwen Li, Chia-Yu Hsu, Sizhe Wang, Yezhou Yang, Hyunho Lee, Anna K. Liljedahl, C. Witharana, Yili Yang, Brendan M. Rogers, S. Arundel, Matthew B. Jones, Kenton McHenry, Patricia Solis","doi":"10.48550/arXiv.2401.08787","DOIUrl":"https://doi.org/10.48550/arXiv.2401.08787","url":null,"abstract":"This paper assesses trending AI foundation models, especially emerging computer vision foundation models, and their performance in natural landscape feature segmentation. While the term foundation model has quickly garnered interest from the geospatial domain, its definition remains vague. Hence, this paper first introduces AI foundation models and their defining characteristics. Built upon the tremendous success achieved by Large Language Models (LLMs) as foundation models for language tasks, this paper discusses the challenges of building foundation models for geospatial artificial intelligence (GeoAI) vision tasks. To evaluate the performance of large AI vision models, especially Meta’s Segment Anything Model (SAM), we implemented different instance segmentation pipelines that minimize the changes to SAM to leverage its power as a foundation model. A series of prompt strategies were developed to test SAM’s performance regarding its theoretical upper bound of predictive accuracy, zero-shot performance, and domain adaptability through fine-tuning. The analysis used two permafrost feature datasets, ice-wedge polygons and retrogressive thaw slumps, because (1) these landform features are more challenging to segment than man-made features due to their complicated formation mechanisms, diverse forms, and vague boundaries; (2) their presence and changes are important indicators of Arctic warming and climate change. The results show that although promising, SAM still has room for improvement to support AI-augmented terrain mapping. The spatial and domain generalizability of this finding is further validated using a more general dataset, EuroCrops, for agricultural field mapping. Finally, we discuss future research directions that strengthen SAM’s applicability in challenging geospatial domains.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"35 5","pages":"797"},"PeriodicalIF":0.0,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140505768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
B. Han, X. Qu, Xiaopeng Yang, Zhengyan Zhang, Wolin Li
Distributed array radar achieves high angular resolution and measurement accuracy, which could provide a solution to suppress digital radio frequency memory (DRFM) repeater jamming. However, owing to the large aperture of a distributed radar, the far-field plane wave assumption is no longer satisfied. Consequently, traditional adaptive beamforming methods cannot work effectively due to mismatched steering vectors. To address this issue, a DRFM repeater jamming suppression method based on joint range-angle sparse recovery and beamforming for distributed array radar is proposed in this paper. First, the steering vectors of the distributed array are reconstructed according to the spherical wave model under near-field conditions. Then, a joint range-angle sparse dictionary is generated using the reconstructed steering vectors, and the range-angle position of jamming is estimated using the weighted L1-norm singular value decomposition (W-L1-SVD) algorithm. Finally, beamforming with joint range-angle nulling is implemented based on the linear constrained minimum variance (LCMV) algorithm for jamming suppression. The performance and effectiveness of the proposed method are validated by simulations and experiments on an actual ground-based distributed array radar system.
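The LCMV step in the final stage has a standard closed-form solution, w = R⁻¹C(CᴴR⁻¹C)⁻¹f. A minimal numpy sketch, using a far-field uniform linear array purely for brevity (the paper's point is that a large distributed aperture requires near-field spherical-wave steering vectors instead), with a unity-gain constraint toward the target and a hard null toward the estimated jammer position:

```python
import numpy as np

def steering(n, theta_deg, d=0.5):
    """Far-field ULA steering vector, element spacing d in wavelengths.
    (Plane-wave simplification; the paper reconstructs near-field
    spherical-wave steering vectors for the distributed aperture.)"""
    k = np.arange(n)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def lcmv_weights(R, C, f):
    """LCMV solution: w = R^-1 C (C^H R^-1 C)^-1 f."""
    RiC = np.linalg.solve(R, C)                      # R^-1 C
    return RiC @ np.linalg.solve(C.conj().T @ RiC, f)

n = 16
a_sig = steering(n, 0.0)    # look direction
a_jam = steering(n, 30.0)   # estimated jammer direction
# Interference-plus-noise covariance: one strong jammer plus unit noise.
R = 1000.0 * np.outer(a_jam, a_jam.conj()) + np.eye(n)
# Constraints: distortionless response on the target, null on the jammer.
C = np.column_stack([a_sig, a_jam])
f = np.array([1.0, 0.0])
w = lcmv_weights(R, C, f)
print(abs(w.conj() @ a_sig))  # ~1.0: distortionless response
print(abs(w.conj() @ a_jam))  # ~0.0: null toward the jammer
```

In the proposed method the constraint matrix C would additionally carry the range dimension, so the null is placed at the W-L1-SVD-estimated range-angle cell rather than at an angle alone.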
{"title":"DRFM Repeater Jamming Suppression Method Based on Joint Range-Angle Sparse Recovery and Beamforming for Distributed Array Radar","authors":"B. Han, X. Qu, Xiaopeng Yang, Zhengyan Zhang, Wolin Li","doi":"10.3390/rs15133449","DOIUrl":"https://doi.org/10.3390/rs15133449","url":null,"abstract":"Distributed array radar achieves high angular resolution and measurement accuracy, which could provide a solution to suppress digital radio frequency memory (DRFM) repeater jamming. However, owing to the large aperture of a distributed radar, the far-field plane wave assumption is no longer satisfied. Consequently, traditional adaptive beamforming methods cannot work effectively due to mismatched steering vectors. To address this issue, a DRFM repeater jamming suppression method based on joint range-angle sparse recovery and beamforming for distributed array radar is proposed in this paper. First, the steering vectors of the distributed array are reconstructed according to the spherical wave model under near-field conditions. Then, a joint range-angle sparse dictionary is generated using the reconstructed steering vectors, and the range-angle position of jamming is estimated using the weighted L1-norm singular value decomposition (W-L1-SVD) algorithm. Finally, beamforming with joint range-angle nulling is implemented based on the linear constrained minimum variance (LCMV) algorithm for jamming suppression. The performance and effectiveness of the proposed method are validated by simulations and experiments on an actual ground-based distributed array radar system.","PeriodicalId":20944,"journal":{"name":"Remote. Sens.","volume":"1 1","pages":"3449"},"PeriodicalIF":0.0,"publicationDate":"2023-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73213585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}