
Latest Publications: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Hyperspectral Image Classification Based on Double-Hop Graph Attention Multiview Fusion Network
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-24 | DOI: 10.1109/JSTARS.2024.3486283
Ying Cui;Li Luo;Lu Wang;Liwei Chen;Shan Gao;Chunhui Zhao;Cheng Tang
Hyperspectral imagery (HSI) is pivotal in ground object classification, owing to its rich spatial and spectral information. Recently, convolutional neural networks and graph neural networks have become hotspots in HSI classification. Although various methods have been developed, detail loss may still occur when extracting complex features within homogeneous regions. To solve this issue, in this article we propose a double-hop graph attention multiview fusion network. This model is adept at pinpointing precise attention features by integrating a double-hop graph with the graph attention network, thereby enhancing the aggregation of multilevel node information and surmounting the limitations of a restricted receptive field. Furthermore, a spectral-coordinate attention module (SCAM) is presented to capture more nuanced spectral and spatial attention features. SCAM harnesses the coordinate attention mechanism for an in-depth pixel-level global spectral–spatial view. Coupled with the multiscale Gabor texture view, we forge a multiview fusion network that meticulously highlights edge details across varying scales and captures beneficial features. Our experimental validation across four renowned benchmark HSI datasets showcases our model's superiority, outstripping comparative methods in classification accuracy with limited labeled samples.
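The double-hop idea can be illustrated in isolation. The sketch below (plain NumPy on a toy graph with random weights — not the authors' implementation) contrasts one-hop and double-hop graph attention: augmenting the adjacency with two-edge reachability lets each node attend beyond its immediate neighbors, enlarging the receptive field.

```python
import numpy as np

# Toy graph: 4 superpixel nodes in a chain 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

A1 = A + np.eye(4)                      # one-hop adjacency with self-loops
A2 = ((A1 @ A1) > 0).astype(float)      # double-hop reachability

def gat_layer(A_hop, X, W, a):
    """One graph-attention head restricted to the given hop-adjacency."""
    H = X @ W                           # linear projection of node features
    n = H.shape[0]
    logits = np.full((n, n), -np.inf)   # -inf masks non-neighbors in the softmax
    for i in range(n):
        for j in range(n):
            if A_hop[i, j] > 0:
                e = a @ np.concatenate([H[i], H[j]])
                logits[i, j] = np.maximum(e, 0.2 * e)   # LeakyReLU
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)           # softmax over neighbors
    return alpha @ H                    # attention-weighted aggregation

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))             # node (superpixel) features
W = rng.normal(size=(3, 2))             # shared projection weights
a = rng.normal(size=(4,))               # attention vector over [h_i || h_j]

out1 = gat_layer(A1, X, W, a)           # one-hop: restricted receptive field
out2 = gat_layer(A2, X, W, a)           # double-hop: node 0 now reaches node 2
```

Note that `A1[0, 2]` is zero but `A2[0, 2]` is one: the double-hop graph is what lets attention aggregate information from second-order neighbors in a single layer.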
Vol. 17, pp. 20080–20097. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10735087
Citations: 0
Cloud Detection and Sea Surface Temperature Retrieval by HY-1C COCTS Observations
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-24 | DOI: 10.1109/JSTARS.2024.3485890
Ninghui Li;Lei Guan;Jonathon S. Wright
Sea surface temperature (SST) is a vital oceanic parameter that significantly influences air–sea heat flux and momentum exchange. SST datasets are crucial for identifying and describing both short-term and long-term climate perturbations in the ocean. This article focuses on cloud detection and SST retrievals in the Western Pacific Ocean, using observations obtained by the Chinese Ocean Color and Temperature Scanner (COCTS) onboard the Haiyang-1C satellite. To distinguish between clear-sky and overcast regions, reflectance after sun glint correction and brightness temperature are used as inputs for an alternating decision tree (ADTree). The accuracy of cloud detection is 93.85% in the daytime and 91.98% at nighttime. Application of the cloud detection algorithm improves the accuracy and data availability (spatiotemporal coverage) of SST retrievals. We implement a nonlinear algorithm to retrieve the SST and validate these retrieved values against buoy measurements of SST. Comparisons are conducted for measurements within ±1 h and 0.01° × 0.01° of the retrieval. During the day, the bias and standard deviation (SD) are −0.01 °C and 0.63 °C, respectively, while at night, they stand at −0.08 °C and 0.71 °C, respectively. Furthermore, the retrieved SSTs are intercompared with SST products derived from the moderate-resolution imaging spectroradiometer (MODIS) onboard Terra. During the day, the bias and SD are 0.03 °C and 0.42 °C, respectively, whereas at night, they are 0.25 °C and 0.76 °C, respectively. This article improves the accuracy and applicability of the SST retrieved from the COCTS thermal infrared channels.
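Nonlinear SST retrieval from split-window thermal channels commonly takes the NLSST regression form; the sketch below uses that generic form with hypothetical coefficients (in practice the coefficients would be fitted to the buoy matchups within ±1 h and 0.01° × 0.01° described above, and the article's exact formulation may differ).

```python
import math

# Hypothetical coefficients for illustration only; real values come from
# regression against collocated buoy SST measurements.
COEFFS = (0.5, 1.0, 0.08, 0.7)

def nlsst(t11, t12, sst_first_guess, sat_zenith_deg, coeffs=COEFFS):
    """Generic NLSST regression from split-window brightness temperatures.

    t11, t12         : ~11-um and ~12-um brightness temperatures (deg C)
    sst_first_guess  : background/climatological SST (deg C)
    sat_zenith_deg   : satellite zenith angle (degrees)
    """
    a0, a1, a2, a3 = coeffs
    dt = t11 - t12                                          # split-window difference
    sec_term = 1.0 / math.cos(math.radians(sat_zenith_deg)) - 1.0
    return a0 + a1 * t11 + a2 * dt * sst_first_guess + a3 * dt * sec_term
```

The split-window difference `t11 - t12` carries the water-vapor correction, which is why the regression is nonlinear in the inputs (it multiplies the difference by the first-guess SST).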
Vol. 17, pp. 19853–19863. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10734229
Citations: 0
Hierarchical Sampling Representation Detector for Ship Detection in SAR Images
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-24 | DOI: 10.1109/JSTARS.2024.3485734
Ming Tong;Shenghua Fan;Jiu Jiang;Chu He
Ship detection is of great significance in synthetic aperture radar (SAR) remote sensing, and many efforts have been made in recent years. However, distinguishing ship targets precisely from the interference of multiplicative non-Gaussian coherent speckle is still a challenging task due to the discreteness, variability, and nonlinearity of ship scattering features. A detection framework based on hierarchical sampling representation is introduced in this article to alleviate this problem. First, ships in SAR images exhibit multiplicative non-Gaussian coherent speckle, which introduces nonlinear characteristics under the imaging mechanism of SAR. Therefore, a statistical feature learning module with a learnable design is proposed to describe the nonlinear representations and expand the feature space. Second, our method designs a convex-hull representation to fit the irregular contours of ships represented by strong scattering points. Third, in order to supervise and optimize the regression of the convex-hull representation, a sparse low-rank reassignment module is employed to evaluate the positive samples with the SAR mechanism and reassign those of high quality, which produces better results. Furthermore, experimental results on three authoritative SAR-oriented datasets for ship detection demonstrate the comprehensive performance of our method.
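A convex hull over strong scattering points can be computed with a standard geometric algorithm; the sketch below uses Andrew's monotone chain (generic computational geometry, not the authors' learned regression) on a toy set of scattering-point coordinates to produce the convex contour that such a representation fits.

```python
def convex_hull(points):
    """Andrew's monotone chain: hull vertices of 2-D points, counterclockwise.

    points: iterable of (x, y) tuples, e.g. strong scattering-point pixels.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half (it repeats the other half's start).
    return lower[:-1] + upper[:-1]
```

Interior and edge-collinear scattering points are discarded, so only the outline of the ship's scattering footprint remains.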
Vol. 17, pp. 19530–19547. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10733998
Citations: 0
Exploiting Discriminating Features for Fine-Grained Ship Detection in Optical Remote Sensing Images
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-24 | DOI: 10.1109/JSTARS.2024.3486210
Ying Liu;Jin Liu;Xingye Li;Lai Wei;Zhongdai Wu;Bing Han;Wenjuan Dai
Fine-grained remote sensing ship detection is crucial in a variety of fields, such as ship safety, marine environmental protection, and maritime traffic management. Despite recent progress, current research suffers from three major challenges: insufficient feature representation, conflicts in shared features, and inappropriate anchor labeling strategies, which significantly impede accurate fine-grained ship detection. To address these issues, we propose FineShipNet as a solution. Specifically, we first propose a novel blend synchronization module, which aims to facilitate the co-utilization of semantic information from top-level and bottom-level features and minimize information redundancy. Subsequently, the blend feature maps are fed into a novel polarized feature focusing module, which decouples the features used in classification and regression to create task-specific discriminating feature maps. Meanwhile, we adopt adaptive harmony anchor labeling and propose a novel metric, the harmony score, to choose high-quality anchors that can effectively capture the discriminating features of the target. Extensive experiments on four fine-grained remote sensing ship datasets (HRSC2016, DOSR, FGSD2021, and ShipRSImageNet) demonstrate that our FineShipNet outperforms current state-of-the-art object detection methods, achieving superior performance with mean average precision scores of 81.3%, 68.5%, 85.7%, and 63.9%, respectively.
Vol. 17, pp. 20098–20115. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10733997
Citations: 0
Arctic Sea Ice Concentration Prediction Using Spatial Attention Deep Learning
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-24 | DOI: 10.1109/JSTARS.2024.3486187
Haoqi Gu;Lianchong Zhang;Mengjiao Qin;Sensen Wu;Zhenhong Du
With the accelerating impact of global warming, change in Arctic sea ice has become a focal point of research. Due to its spatial heterogeneity and the complexity of its evolution, long-term prediction of Arctic sea ice remains a challenge. In this article, a spatial attention U-Net (SAU-Net) method integrated with a gated spatial attention mechanism is proposed. By extracting and enhancing spatial features from historical atmospheric and sea ice concentration (SIC) data, this method improves the accuracy of Arctic sea ice prediction. During the test period (2018–2020), our method skillfully predicts Arctic sea ice up to 12 months ahead, outperforming the naive U-Net, linear trend models, and dynamical models, especially in extreme sea ice scenarios. The importance of different atmospheric factors affecting sea ice prediction is also analyzed for further exploration.
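As a rough illustration of what a gated spatial attention block does (a hypothetical NumPy sketch, not the SAU-Net module itself — the function and weight names are invented): a channel projection produces spatial attention logits, a sigmoid gate modulates them, and the feature map is reweighted position by position.

```python
import numpy as np

def gated_spatial_attention(feat, w_att, w_gate):
    """Reweight a (C, H, W) feature map with a gated spatial attention map.

    w_att, w_gate: (1, C) channel-mixing weights, standing in for the
    1x1-convolution projections a real network would learn.
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, -1)                        # (C, H*W)
    logits = w_att @ flat                             # (1, H*W) attention logits
    att = np.exp(logits - logits.max())
    att = att / att.sum()                             # softmax over positions
    gate = 1.0 / (1.0 + np.exp(-(w_gate @ flat)))     # sigmoid gate in (0, 1)
    weights = (gate * att).reshape(1, H, W)           # gated attention map
    return feat * weights                             # broadcast over channels

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 5, 5))                     # toy SIC/atmospheric features
out = gated_spatial_attention(feat,
                              rng.normal(size=(1, 8)),
                              rng.normal(size=(1, 8)))
```

The gate lets the network suppress spatial positions (e.g. stable open ocean) even when the softmax assigns them some attention mass, which is the usual motivation for gating an attention map.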
Vol. 17, pp. 19565–19574. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10734000
Citations: 0
An Improved UAV RGB Image Processing Method for Quantitative Remote Sensing of Marine Green Macroalgae
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-24 | DOI: 10.1109/JSTARS.2024.3486045
Jinghu Li;Qianguo Xing;Liqiao Tian;Yingzhuo Hou;Xiangyang Zheng;Maham Arif;Lin Li;Shanshan Jiang;Jiannan Cai;Jun Chen;Yingcheng Lu;Dingfeng Yu;Jindong Xu
Red–green–blue (RGB) images (or videos) captured by consumer-level uncrewed aerial vehicle (UAV) cameras are widely used in high-resolution remote observations. However, digital number (DN) values of these RGB images usually have a nonlinear relationship with the incident radiance, which reduces the accuracy of quantitative remote sensing of macroalgae. To solve this problem, we propose an improved processing procedure for UAV RGB images (or videos) based on camera response functions (CRFs). The CRF is utilized to convert the DN values into energy values (E values), which have a linear relationship with the incident radiance. When the DN values were replaced by their corresponding E values to calculate the reflectance of green macroalgae under different illumination intensities, the errors in reflectance were reduced by ∼21%; for the corresponding green macroalgae indices, such as the red–green band virtual baseline floating green algae height (RG-FAH), the E-value-based RG-FAH demonstrates more resistance to the impacts of sun glint; and the E values were further applied to estimate the coverage portion of macroalgae (POM, %) in RGB videos, where the illumination-induced deviations of the POM were effectively reduced by up to 33.06%, showing an advantage in the quantitative estimation of macroalgae biomass. The results of applications to UAV RGB images show that the E values are well suited to estimating POM across diverse green macroalgae species and various algae indices, suggesting the promising potential of the proposed processing procedure with E-based photo and/or video RGB images for monitoring aquatic plants and environments.
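The DN-to-E conversion can be sketched with a simple gamma-type CRF (an assumed response shape for illustration; a real consumer camera's CRF must be calibrated per device, and the article's CRFs may differ). Once E values are linear in radiance, reflectance follows from a ratio against a reference target in the same frame — the `reflectance` helper and its `panel_reflectance` parameter below are hypothetical, not from the article.

```python
import numpy as np

def dn_to_e(dn, gamma=2.2, dn_max=255.0):
    """Invert a gamma-type camera response DN = dn_max * E**(1/gamma).

    Returns E = (DN / dn_max)**gamma, which is linear in incident radiance,
    unlike the raw DN values.
    """
    return (np.asarray(dn, dtype=float) / dn_max) ** gamma

def reflectance(dn_target, dn_panel, panel_reflectance=0.99, gamma=2.2):
    """Illustrative reflectance: ratio of target E to reference-panel E,
    scaled by the panel's known reflectance."""
    return panel_reflectance * dn_to_e(dn_target, gamma) / dn_to_e(dn_panel, gamma)
```

Taking the ratio of E values (rather than DN values) is what removes the nonlinearity: doubling the incident radiance doubles E, but does not double DN.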
Vol. 17, pp. 19864–19883. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10734187
Citations: 0
Attention Guided Semisupervised Generative Transfer Learning for Hyperspectral Image Analysis
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-23 | DOI: 10.1109/JSTARS.2024.3485528
Anan Yaghmour;Saurabh Prasad;Melba M. Crawford
In geospatial image analysis, domain shifts caused by differences between datasets often undermine the performance of deep learning models due to their limited generalization ability. This issue is particularly pronounced in hyperspectral imagery, given the high dimensionality of the per-pixel reflectance vectors and the complexity of the resulting deep learning models. We introduce a semisupervised domain adaptation technique that improves on the adversarial discriminative framework, incorporating a novel multiclass discriminator to address low discriminability and negative transfer issues from which current approaches suffer. Significantly, our method addresses mode collapse by incorporating limited labeled data from the target domain for targeted guidance during adaptation. In addition, we integrate an attention mechanism that focuses on challenging spatial regions for the target mode. We tested our approach on three unique hyperspectral remote sensing datasets to demonstrate its efficacy in diverse conditions (e.g., cloud shadows, atmospheric variability, and terrain). This strategy improves discrimination and reduces negative transfer in domain adaptation for geospatial image analysis.
Vol. 17, pp. 19884–19899. Open access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10731899
Citations: 0
Fast Adaptive Sparse Iterative Reweighted Super-Resolution Method for Forward-Looking Radar Imaging
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-23 | DOI: 10.1109/JSTARS.2024.3485091
Jiawei Luo;Yulin Huang;Ruitao Li;Deqing Mao;Yongchao Zhang;Yin Zhang;Jianyu Yang
Recently, a sparse super-resolution method based on the $L_{1}$ iterative reweighted norm (IRN) has been proposed to improve the azimuth resolution of forward-looking radar. However, this method suffers from poor adaptability and high computational complexity due to its noise-sensitive user parameter and the necessity for high-dimensional matrix inversion. To this end, a fast adaptive $L_{1}$-IRN sparse super-resolution method is derived in this article, allowing for user-parameter-free and efficient sparse imaging of forward-looking radar. First, we establish the super-resolution model of forward-looking radar and analyze the user-parameter selection problem in the conventional $L_{1}$-IRN method. Second, based on Bayesian theory, adaptive iterative weights for different azimuths are derived by transforming the sparse estimation problem into a maximum a posteriori (MAP) estimation problem. Finally, by using QR decomposition and the Sherman–Morrison formula, the dimensionality of the echo and antenna pattern involved in the iteration is reduced to further diminish the computational complexity. Compared to the existing $L_{1}$-IRN method, the proposed method eliminates the need for any user parameters, and the computational complexity is reduced from ${O}({JN}^{3})$ to ${O}({JN}^{2}{a})$. Simulation and measured data demonstrate the superiority of the proposed method.
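Two ingredients of this family of methods can be sketched generically: the Sherman–Morrison rank-1 inversion identity that underlies the complexity reduction, and a textbook $L_{1}$ iteratively reweighted least-squares loop on a toy deconvolution problem (with a fixed regularization weight — not the authors' MAP-derived adaptive weights).

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Sherman-Morrison: (M + u v^T)^{-1} from M^{-1} plus a rank-1 correction,
#     avoiding a full re-inversion after a rank-1 update.
M = np.eye(4) + 0.1 * rng.normal(size=(4, 4))
u = rng.normal(size=(4, 1))
v = rng.normal(size=(4, 1))
Minv = np.linalg.inv(M)
sm = Minv - (Minv @ u @ v.T @ Minv) / (1.0 + float(v.T @ Minv @ u))
direct = np.linalg.inv(M + u @ v.T)      # should match sm

# --- Generic L1 iteratively reweighted least squares (IRN-style) ---
A = rng.normal(size=(30, 20))            # toy antenna-pattern/convolution matrix
x_true = np.zeros(20)
x_true[[3, 12]] = [2.0, -1.5]            # sparse scattering scene
y = A @ x_true + 0.01 * rng.normal(size=30)

x = A.T @ y / np.linalg.norm(A, 2) ** 2  # initial estimate
lam, eps = 0.05, 1e-6
for _ in range(50):
    W = np.diag(1.0 / (np.abs(x) + eps))              # reweighting approximates L1
    x = np.linalg.solve(A.T @ A + lam * W, A.T @ y)   # weighted LS update
```

Each iteration solves a reweighted least-squares problem whose diagonal penalty grows as entries shrink, driving non-scatterer azimuth cells toward zero; the cost of the repeated matrix solves is exactly what rank-1 identities like Sherman–Morrison are used to cheapen.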
Citations: 0
PSFNet: A Feature-Fusion Framework for Persistent Scatterer Selection in Multitemporal InSAR
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-23 | DOI: 10.1109/JSTARS.2024.3485168
Sijia Chen;Changjun Zhao;Mi Jiang;Hanwen Yu
In the field of multitemporal interferometric synthetic aperture radar (MT-InSAR), the selection of persistent scatterers (PSs) is crucial for deriving ground deformation products. To obtain precise ground deformation, pixels with the highest possible signal-to-noise ratio (SNR) should be selected, while pixels with low SNR should be avoided. To this end, we propose a novel framework, referred to as the PS feature-fusion network (PSFNet), for efficient PS selection. Specifically, we propose a data-driven two-branch network consisting of a ResUNet with spatial and channel attention, and a TANet with 3-D convolutional layers and a time-step attention block (T-Attention block), which exploits not only the spatial features of the SAR image but also time-series phase features when selecting PS pixels. In particular, a time-step attention mechanism is proposed to accommodate interferometric pairs with different SNRs, enhancing the feature representation ability of the network. The proposed method was tested on Sentinel-1 images, showing that it selects more PSs, of higher quality, than StaMPS does.
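For context on what PSFNet is competing with: classical PS pre-selection (as in StaMPS-style pipelines) screens pixels by the amplitude dispersion index $D_A = \sigma_A/\mu_A$ over the image stack, keeping pixels with low dispersion as phase-stable candidates. The sketch below shows that conventional criterion only, not PSFNet itself; the threshold and toy stack values are illustrative assumptions.

```python
import numpy as np

def amplitude_dispersion(stack):
    """Per-pixel amplitude dispersion index D_A = sigma_A / mu_A.

    `stack` has shape (n_images, rows, cols), complex or real amplitudes.
    Low D_A (classically below roughly 0.25) marks candidate persistent
    scatterers, since amplitude stability proxies for phase stability.
    """
    amp = np.abs(stack)
    mu = amp.mean(axis=0)
    sigma = amp.std(axis=0)
    return np.where(mu > 0, sigma / mu, np.inf)

# Toy stack: Rayleigh-distributed clutter plus one stable bright scatterer.
rng = np.random.default_rng(1)
stack = rng.rayleigh(1.0, size=(30, 16, 16))               # distributed clutter
stack[:, 8, 8] = 10.0 + 0.2 * rng.standard_normal(30)      # stable PS pixel
d_a = amplitude_dispersion(stack)
candidates = np.argwhere(d_a < 0.25)                       # PS candidate mask
```

Rayleigh clutter has a population dispersion of about 0.52, so ordinary pixels fail the 0.25 test while the stable scatterer passes easily; learned selectors such as PSFNet aim to outperform exactly this kind of hand-set threshold.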
Citations: 0
A Real-Time SAR Ship Detection Method Based on Improved CenterNet for Navigational Intent Prediction
IF 4.7 | CAS Tier 2 (Earth Science) | Q1 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-23 | DOI: 10.1109/JSTARS.2024.3485222
Xiao Tang;Jiufeng Zhang;Yunzhi Xia;Enkun Cui;Weining Zhao;Qiong Chen
By combining massive spatio-temporal sequence data with real-time synthetic aperture radar (SAR) ship-target monitoring, the future trajectories and intents of ships can be predicted effectively. While real-time monitoring validates and adjusts spatio-temporal sequence prediction models, it still faces challenges such as manual anchor-box sizing and slow inference caused by large numbers of model parameters. To address these challenges, a real-time SAR ship detection method based on CenterNet is introduced in this article. The proposed method comprises the following steps. First, to improve the feature extraction capability of the original CenterNet, we introduce a feature pyramid fusion structure and replace upsampling deconvolution with Deformable Convolution Networks (DCNets), which yield richer feature maps. Then, to better identify nearshore and small target ships, the BiFormer attention mechanism and a spatial pyramid pooling module are incorporated to enlarge the receptive field of the network. Finally, to improve accuracy and convergence speed, we optimize the focal loss on the heatmap and use the Smooth L1 loss for the width, height, and center-point offsets, which enhances detection accuracy and generalization. Performance evaluations on two SAR ship datasets, HRSID and SSDD, validate the method's effectiveness, achieving Average Precision (AP) values of 82.87% and 94.25%, improvements of 5.26% and 4.04% over the original model, with a detection speed of 49 FPS on both datasets.
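The two loss terms named in the abstract are both standard. Below is a minimal NumPy sketch of the Smooth L1 (Huber-style) regression loss and a CenterNet-style penalty-reduced focal loss on the keypoint heatmap; the paper's exact modifications are not specified in the abstract, so these are the conventional formulations, with all parameter values (`beta`, `alpha`, `beta_fl`) illustrative.

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss for box width/height and center-offset regression:
    quadratic below `beta`, linear above, so large localization errors
    do not dominate the gradient."""
    diff = np.abs(pred - target)
    return np.where(diff < beta, 0.5 * diff**2 / beta, diff - 0.5 * beta).mean()

def focal_loss(p, gt, alpha=2.0, beta_fl=4.0, eps=1e-6):
    """CenterNet-style penalty-reduced pixel-wise focal loss on the keypoint
    heatmap. `gt` holds Gaussian-splatted ground truth in [0, 1]; pixels with
    gt == 1 are object centers, and near-center negatives are down-weighted
    by (1 - gt)^beta_fl."""
    pos = gt == 1.0
    p = np.clip(p, eps, 1.0 - eps)
    pos_loss = -(((1.0 - p) ** alpha) * np.log(p))[pos].sum()
    neg_loss = -(((1.0 - gt) ** beta_fl) * (p ** alpha) * np.log(1.0 - p))[~pos].sum()
    n_pos = max(int(pos.sum()), 1)
    return (pos_loss + neg_loss) / n_pos
```

With `beta=1.0`, a residual of 0.5 costs 0.125 (quadratic branch) while a residual of 2.0 costs 1.5 (linear branch), which is the robustness property the abstract credits for better convergence.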
Citations: 0