
Latest publications in IEEE Geoscience and Remote Sensing Letters (a publication of the IEEE Geoscience and Remote Sensing Society)

Physics-Aware Neural Framework for Multidepth Soil Carbon Mapping
Bishal Roy;Vasit Sagan;Haireti Alifu;Jocelyn Saxton;Cagri Gul;Nadia Shakoor
Depth-resolved estimation of soil organic carbon (SOC) remains challenging because optical measurements originate at the surface while carbon dynamics vary vertically. We propose a physics-aware uncrewed aerial vehicle (UAV) framework that integrates multispectral imagery (MSI) and hyperspectral imagery (HSI) to estimate SOC concentration (%) across five depths. The experiment was conducted at Plantheaven Farms, Missouri, with ten sorghum genotypes across three replicates. Feature construction combined spectral derivatives from HSI with texture features from MSI, compressed via principal component analysis (PCA). Physics-based regularization was implemented through: 1) a second-difference penalty to enforce vertical smoothness and 2) a profile-integral consistency constraint to preserve whole-profile balance. Four model configurations evaluated on local data showed progressive improvements: MSI-only, MSI + HSI, MSI + HSI with smoothness, and MSI + HSI with full physics constraints. In addition, transfer learning from the open soil spectral library (OSSL) was tested to address data limitations. Model fitting on the available data achieved $R^{2} = 0.72$ at 0–30 cm, with physics-aware constraints notably improving vertical coherence. The physics-aware model reduced variance and improved plausibility. In-sample, transfer learning achieved $R^{2} = 0.60$ at 0–30 cm, with conservative interpretation below 90 cm due to reduced optical sensitivity. Exploratory genotype patterns suggested higher surface SOC percent for PI 656029 and PI 656057, and lower values for PI 276837 and PI 656044.
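As a rough illustration of the two physics-based regularizers described above, the sketch below implements a second-difference (curvature) penalty along the depth axis and a profile-integral consistency term in NumPy. The depth-layer thicknesses, variable names, and the way the whole-profile reference is supplied are assumptions for illustration, not the authors' implementation.

import numpy as np

def physics_penalties(soc_profile, depth_thickness, profile_target):
    """soc_profile: (n_samples, n_depths) predicted SOC (%) per depth layer.
    depth_thickness: (n_depths,) assumed layer thicknesses in cm.
    profile_target: (n_samples,) reference depth-integrated SOC per sample."""
    # Second-difference penalty: discourages abrupt vertical changes between layers.
    second_diff = soc_profile[:, 2:] - 2.0 * soc_profile[:, 1:-1] + soc_profile[:, :-2]
    smoothness = np.mean(second_diff ** 2)
    # Profile-integral consistency: thickness-weighted sum should match the whole-profile value.
    integral = (soc_profile * depth_thickness).sum(axis=1)
    consistency = np.mean((integral - profile_target) ** 2)
    return smoothness, consistency

# Example with five depth layers (0-10, 10-30, 30-60, 60-90, 90-120 cm, assumed).
pred = np.random.rand(4, 5) * 3.0
thick = np.array([10.0, 20.0, 30.0, 30.0, 30.0])
target = (pred * thick).sum(axis=1)   # placeholder whole-profile reference
print(physics_penalties(pred, thick, target))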
{"title":"Physics-Aware Neural Framework for Multidepth Soil Carbon Mapping","authors":"Bishal Roy;Vasit Sagan;Haireti Alifu;Jocelyn Saxton;Cagri Gul;Nadia Shakoor","doi":"10.1109/LGRS.2025.3632815","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632815","url":null,"abstract":"Depth-resolved estimation of soil organic carbon (SOC) remains challenging because optical measurements originate at the surface while carbon dynamics vary vertically. We propose a physics-aware uncrewed aerial vehicle (UAV) framework that integrates multispectral imagery (MSI) and hyperspectral imagery (HSI) to estimate SOC concentration (%) across five depths. The experiment was conducted at Plantheaven Farms, Missouri, with ten sorghum genotypes across three replicates. Feature construction combined spectral derivatives from HSI with texture features from MSI, compressed via principal component analysis (PCA). Physics-based regularization was implemented through: 1) a second-difference penalty to enforce vertical smoothness and 2) a profile-integral consistency constraint to preserve whole-profile balance. Four model configurations evaluated on local data showed progressive improvements: MSI-only, MSI + HSI, MSI + HSI with smoothness, and MSI + HSI with full physics constraints. In addition, transfer learning from the open soil spectral library (OSSL) was tested to address data limitations. Model fitting on the available data achieved <inline-formula> <tex-math>${R} ^{2} = 0.72$ </tex-math></inline-formula> at 0–30 cm, with physics-aware constraints notably improving vertical coherence. The physics-aware model reduced variance and improved plausibility. In-sample, transfer learning achieved <inline-formula> <tex-math>${R} ^{2}=0.60$ </tex-math></inline-formula> at 0–30 cm, with conservative interpretation below 90 cm due to reduced optical sensitivity. Exploratory genotype patterns suggested higher surface SOC percent for PI 656 029 and PI 656 057, and lower values for PI 276 837 and PI 656 044.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Lightweight Attention Mechanism With Feature Differences for Efficient Change Detection in Remote Sensing
Jangsoo Park;EunSeong Lee;Jongseok Lee;Seoung-Jun Oh;Donggyu Sim
This letter presents a low-complexity attention module for fast change detection. The proposed module computes the absolute difference between bitemporal features extracted by a Siamese backbone network and sequentially applies spatial and channel attention to generate key change representations. Spatial attention emphasizes important spatial locations using representative values from channelwise pooling, while channel attention highlights discriminative feature responses using values from spatialwise pooling. By leveraging low-dimensional representative features, the module significantly reduces computational cost. Additionally, its dual-attention structure, driven by feature differences, enhances both spatial localization and semantic relevance of changes. Compared to the change-guided network (CGNet), the proposed method reduces multiply-accumulate operations (MACs) by 53.81% with only a 0.15% drop in $F_{1}$-score, demonstrating high efficiency with minimal performance degradation. These results suggest that the proposed method is suitable for large-scale or real-time remote sensing (RS) applications where computational efficiency is essential.
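A minimal PyTorch sketch of the difference-driven attention idea: take the absolute bitemporal feature difference, weight spatial positions from channel-pooled maps, then weight channels from a spatially pooled vector. The layer sizes, sigmoid gating, and bottleneck ratio are assumptions; the letter's exact module may differ.

import torch
import torch.nn as nn

class DiffAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # 2 -> 1 conv over [mean, max] channel-pooled maps for spatial attention.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        # Small bottleneck MLP over the spatially pooled vector for channel attention.
        self.channel = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels))

    def forward(self, f1, f2):
        d = torch.abs(f1 - f2)                       # bitemporal feature difference
        pooled = torch.cat([d.mean(1, keepdim=True),
                            d.max(1, keepdim=True).values], dim=1)
        d = d * torch.sigmoid(self.spatial(pooled))  # emphasize changed locations
        vec = d.mean(dim=(2, 3))                     # spatialwise pooling
        w = torch.sigmoid(self.channel(vec))[:, :, None, None]
        return d * w                                 # emphasize discriminative channels

x1, x2 = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
print(DiffAttention(64)(x1, x2).shape)   # torch.Size([2, 64, 32, 32])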
{"title":"Lightweight Attention Mechanism With Feature Differences for Efficient Change Detection in Remote Sensing","authors":"Jangsoo Park;EunSeong Lee;Jongseok Lee;Seoung-Jun Oh;Donggyu Sim","doi":"10.1109/LGRS.2025.3633179","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3633179","url":null,"abstract":"This letter presents a low-complexity attention module for fast change detection. The proposed module computes the absolute difference between bitemporal features extracted by a Siamese backbone network and sequentially applies spatial and channel attention to generate key change representations. Spatial attention emphasizes important spatial locations using representative values from channelwise pooling, while channel attention highlights discriminative feature responses using values from spatialwise pooling. By leveraging low-dimensional representative features, the module significantly reduces computational cost. Additionally, its dual-attention structure-driven by feature differences-enhances both spatial localization and semantic relevance of changes. Compared to the change-guided network (CGNet), the proposed method reduces multiply-accumulate operations (MACs) by 53.81% with only a 0.15% drop in <inline-formula> <tex-math>${F}1$ </tex-math></inline-formula>-score, demonstrating high efficiency with minimal performance degradation. These results suggest that the proposed method is suitable for large-scale or real-time remote sensing (RS) applications where computational efficiency is essential.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Lightweight Method of Cloud-Sky Surface Upward Longwave Radiation Real-Time Estimation for FY-4A Geostationary Satellite
Qiang Na;Biao Cao;Wanchun Zhang;Limeng Zheng;Xi Zhang;Ziyi Yang;Qinhuo Liu
Satellite-derived surface upward longwave radiation (SULR) is essential for monitoring the global surface radiation budget, ecological processes, and climate change. However, the widely used SULR products derived from thermal infrared (TIR) remote sensing exhibit spatial discontinuities because TIR signals cannot penetrate cloud cover. Conventional cloud-sky SULR estimation approaches often utilize post-processed reanalysis data as inputs, which cannot meet the real-time requirements of operational systems. This study proposes a lightweight cloud-sky SULR real-time estimation method for the Fengyun-4A (FY-4A) geostationary satellite using a Light Gradient Boosting Machine (LightGBM) model. The daytime cloud-sky SULR is estimated by applying the relationship established between auxiliary variables and clear-sky SULR to cloudy conditions, while the nighttime cloud-sky SULR values are estimated by applying the relationship determined between input variables and a publicly accessible, gap-filled SULR product. The model inputs include: 1) spatial-temporal location record data; 2) multiple surface characteristic parameters generated from previous-year data; and 3) two categories of operational FY-4A radiation products, both of which are available in real time. Validation against six Heihe Watershed Allied Telemetry Experimental Research (HiWATER) sites demonstrates that the reconstructed cloud-sky SULR achieves acceptable root mean square error (RMSE) and mean bias error (MBE) values of 33.4 W/m2 (1.5 W/m2) for daytime and 25.2 W/m2 (4.7 W/m2) for nighttime conditions. Therefore, the proposed lightweight method could improve the spatial coverage of the current FY-4A SULR product and further promote real-time SULR-related applications.
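A minimal sketch of the regression setup, assuming scikit-learn-style LightGBM usage; the synthetic design matrix stands in for the inputs listed above (location records, prior-year surface parameters, FY-4A radiation products) and is not the operational pipeline.

import numpy as np
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Placeholder features: [lat, lon, day-of-year, hour, prior-year surface params,
# FY-4A radiation values]; y is clear-sky (daytime) or gap-filled (nighttime) SULR.
X = np.random.rand(5000, 8)
y = 300.0 + 80.0 * X[:, 0] + 10.0 * np.random.randn(5000)   # synthetic SULR in W/m2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LGBMRegressor(n_estimators=300, learning_rate=0.05, num_leaves=63)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5     # accuracy metrics as in the validation
bias = np.mean(pred - y_te)
print(f"RMSE={rmse:.1f} W/m2, MBE={bias:.1f} W/m2")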
{"title":"A Lightweight Method of Cloud-Sky Surface Upward Longwave Radiation Real-Time Estimation for FY-4A Geostationary Satellite","authors":"Qiang Na;Biao Cao;Wanchun Zhang;Limeng Zheng;Xi Zhang;Ziyi Yang;Qinhuo Liu","doi":"10.1109/LGRS.2025.3632860","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632860","url":null,"abstract":"Satellite-derived surface upward longwave radiation (SULR) is essential for monitoring the global surface radiation budget, ecological processes, and climate change. However, the widely used SULR products derived from thermal infrared (TIR) remote sensing exhibit spatial discontinuities because TIR signals cannot penetrate cloud cover. Conventional cloud-sky SULR estimation approaches often utilize post-processed reanalysis data as inputs, which could not meet the real-time requirement of the operational system. This study proposes a lightweight cloud-sky SULR real-time estimation method for the Fengyun-4A (FY-4A) geostationary satellite using a Light Gradient Boosting Machine (LightGBM) model. The daytime cloud-sky SULR is estimated by applying the established relationship between auxiliary variables and clear-sky SULR to cloudy conditions, while the nighttime cloud-sky SULR values are estimated by applying the determined relationship between input variables and a publicly accessible, gap-filled SULR product. The model inputs include: 1) spatial-temporal location record data; 2) multiple surface characteristic parameters generated from previous-year data; and 3) two categories of operational FY-4A radiation products, with both components being available in real-time. Validation against six Heihe Watershed Allied Telemetry Experimental Research (HiWATER) sites demonstrates that the reconstructed cloud-sky SULR achieves acceptable root mean square error (RMSE) and mean bias error (MBE) values of 33.4 W/m2 (1.5 W/m2) for daytime and 25.2 W/m2 (4.7 W/m2) for nighttime conditions. Therefore, the proposed lightweight method could improve the spatial coverage of the current FY-4A SULR product and further promote real-time SULR-related applications.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
KECS-Net: Knowledge-Embedded CSwin-UNet With Slicing-Aided Hypersegmentation for Infrared Small Target Detection
Lingxiao Li;Linlin Liu;Dan Huang;Sen Wang;Xutao Wang;Yunan He;Zhuqiang Zhong
Infrared small target detection (IRSTD) remains a long-standing challenge in infrared imaging technology. To enhance detection performance while more effectively exploiting target-specific characteristics, a novel U-shaped segmentation network called knowledge-embedded CSwin-UNet (KECS-Net) is proposed in this letter. KECS-Net first incorporates a CSwin transformer module into the encoder of the UNet backbone, enabling the extraction of multiscale features from infrared targets within an expanded receptive field, while achieving higher computational efficiency compared to the original Swin transformer. In addition, a multiscale local contrast enhancement module (MLCEM) is introduced, which utilizes hand-crafted dilated convolution operators to amplify locally salient target responses and suppress background noise, thereby guiding the model to focus on potential target regions. Finally, a slicing-aided hypersegmentation (SAHS) method is designed to resize and rescale the output image, increasing the relative size of small targets and improving segmentation accuracy during inference. Extensive experiments on three benchmark datasets demonstrate that the proposed KECS-Net outperforms state-of-the-art (SOTA) methods in both quantitative metrics and visual quality. Relevant code will be available at https://github.com/Lilingxiao-image/KECS-Net
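A rough sketch of the hand-crafted local-contrast idea behind MLCEM: fixed (non-learned) dilated kernels compare each pixel with its surrounding ring at several dilation rates, so small bright targets stand out against the local background. The kernel shape, dilation set, and max-fusion are illustrative assumptions, not the published module.

import torch
import torch.nn.functional as F

def multiscale_local_contrast(img, dilations=(1, 2, 4)):
    """img: (B, 1, H, W) single-channel infrared image tensor."""
    responses = []
    for d in dilations:
        # Fixed 3x3 kernel: +1 at the center, -1/8 on the surrounding ring.
        k = torch.full((1, 1, 3, 3), -1.0 / 8.0, dtype=img.dtype)
        k[0, 0, 1, 1] = 1.0
        # Dilated convolution widens the ring without adding parameters.
        responses.append(F.conv2d(img, k, padding=d, dilation=d))
    # Keep the strongest contrast response across scales; clip negative background.
    return torch.clamp(torch.stack(responses, dim=0).max(dim=0).values, min=0.0)

x = torch.rand(1, 1, 64, 64)
print(multiscale_local_contrast(x).shape)   # torch.Size([1, 1, 64, 64])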
{"title":"KECS-Net: Knowledge-Embedded CSwin-UNet With Slicing-Aided Hypersegmentation for Infrared Small Target Detection","authors":"Lingxiao Li;Linlin Liu;Dan Huang;Sen Wang;Xutao Wang;Yunan He;Zhuqiang Zhong","doi":"10.1109/LGRS.2025.3632827","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632827","url":null,"abstract":"Infrared small target detection (IRSTD) remains a long challenging problem in infrared imaging technology. To enhance detection performance while more effectively exploiting target-specific characteristics, a novel U-shaped segmentation network called knowledge-embedded CSwin-UNet (KECS-Net) is proposed in this letter. KECS-Net first incorporates a CSwin transformer module into the encoder of the UNet backbone, enabling the extraction of multiscale features from infrared targets within an expanded receptive field, while achieving higher computational efficiency compared to the original Swin transformer. Besides, a multiscale local contrast enhancement module (MLCEM) is introduced, which utilizes hand-crafted dilated convolution operators to amplify locally salient target responses and suppress background noise, thereby guiding the model to focus on potential target regions. Finally, a slicing-aided hypersegmentation (SAHS) method is also designed to resize and rescale the output image, increasing the relative size of small targets and improving segmentation accuracy during inference. Extensive experiments on three benchmark datasets demonstrate that the proposed KECS-Net outperforms the state-of-the-art (SOTA) methods in both quantitative metrics and visual quality. Relevant code will be available at <uri>https://github.com/Lilingxiao-image/KECS-Net</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Assessment of Long-Term Elevation Accuracy Consistency for ICESat-2/ATLAS Using Crossover Observations
Tao Wang;Yong Fang;Shuangcheng Zhang;Bincai Cao;Qi Liu
The Ice, Cloud, and land Elevation Satellite-2 (ICESat-2) has been operating continuously in orbit for nearly seven years. Its accuracy is crucial for ensuring the reliability of scientific applications. However, few external studies have assessed the long-term consistency of ICESat-2 elevation measurements. In this letter, we evaluate the consistency of elevation accuracy through footprint-level crossover observations. This approach first extracts crossovers by averaging elevations within each ~12 m footprint, then analyzes their elevation differences using statistical and time-series approaches, and finally employs airborne LiDAR data for external validation. The results indicate that ICESat-2 elevation data exhibit excellent internal consistency over bare land areas from 2019 to 2024, with more than 40000 footprint-level crossovers, a mean elevation bias of 0.02 m, and a standard deviation of 0.22 m. The long-term drift of the elevation data is approximately 1.1 mm/yr, well within the mission's scientific requirement of 4 mm/yr. Compared with airborne LiDAR, ICESat-2 maintains high external accuracy over long-term observations, with an overall root mean square error (RMSE) less than 0.38 m across 377 beam tracks. Overall, this study provides a new and independent assessment of the consistency of ICESat-2 elevation data to date.
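A simplified NumPy sketch of the crossover statistics: photon elevations are averaged within each footprint, footprints from crossing tracks are paired at crossover locations, and the bias, standard deviation, and RMSE of the differences are reported. The geolocation matching step is assumed to have been done already and is omitted here.

import numpy as np

def footprint_mean(photon_elev, footprint_id):
    """Average photon elevations within each ~12 m footprint."""
    ids = np.unique(footprint_id)
    return ids, np.array([photon_elev[footprint_id == i].mean() for i in ids])

def crossover_stats(elev_track_a, elev_track_b):
    """Both arrays hold footprint-level elevations at matched crossover points."""
    diff = elev_track_a - elev_track_b
    return {"mean_bias_m": diff.mean(),
            "std_m": diff.std(ddof=1),
            "rmse_m": np.sqrt(np.mean(diff ** 2))}

# Synthetic example: 5 footprints of 10 photons each, then 1000 matched crossovers.
elev = np.random.randn(50) * 0.1 + 100.0
ids, means = footprint_mean(elev, np.repeat(np.arange(5), 10))
a = np.random.randn(1000) * 0.15 + 100.0
b = a - 0.02 + np.random.randn(1000) * 0.15     # ~2 cm bias, ~0.2 m spread
print(crossover_stats(a, b))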
{"title":"Assessment of Long-Term Elevation Accuracy Consistency for ICESat-2/ATLAS Using Crossover Observations","authors":"Tao Wang;Yong Fang;Shuangcheng Zhang;Bincai Cao;Qi Liu","doi":"10.1109/LGRS.2025.3632918","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632918","url":null,"abstract":"The ice, cloud, and land elevation Satellite-2 (ICESat-2) has been operating continuously in orbit for nearly seven years. Its accuracy is crucial for ensuring the reliability of scientific applications. However, a few external studies have been conducted to assess the long-term consistency of ICESat-2 elevation measurements. In this letter, we evaluate the consistency of elevation accuracy through footprint-level crossover observations. This approach first extracts crossovers by averaging elevations within each ~12 m footprint, then analyzes their elevation differences using statistical and time-series approaches, and finally employs airborne LiDAR data for external validation. The results indicate that ICESat-2 elevation data exhibit excellent internal consistency over bare land areas from 2019 to 2024, with more than 40000 footprint-level crossovers, a mean elevation bias of 0.02 m, and a standard deviation of 0.22 m. The long-term drift of the elevation data is approximately 1.1 mm/yr, well within the mission’s scientific requirement of 4 mm/yr. Compared with airborne LiDAR, ICESat-2 maintains high external accuracy over long-term observations, with an overall root mean square error (RMSE) less than 0.38 m across 377 beam tracks. Overall, this study provides new and independent assessment of the consistency of ICESat-2 elevation data to date.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
HFSM: A Hierarchical Feature Structure-Driven Method for Multisource Sonar Image Registration of Subsea Pipelines
Jingyao Zhang;Xuerong Cui;Juan Li;Song Dai;Bin Jiang;Lei Li
Subsea pipelines are prone to exposure due to natural factors such as earthquakes and vortices, which necessitates regular condition monitoring. Multibeam echo sounders (MBESs) can provide high-precision seabed topographic information, while side-scan sonar (SSS) excels at capturing high-resolution seabed texture features. The integration of these two data sources can complement each other, thereby improving the detection accuracy of subsea pipelines. To achieve effective fusion, high-precision spatial registration is required. However, existing registration algorithms still face challenges such as uneven feature point distribution, dependence on prior knowledge, and unstable matching. This letter proposes a multisource sonar image registration algorithm for subsea pipelines, named the hierarchical feature structure-driven method (HFSM). First, a grid-based multiscale corner detection (MS-CD) is designed, which effectively enhances the spatial distribution balance of feature points. Next, a multiwindow geometric–texture joint feature descriptor (MW-GTD) is proposed, which combines direction-sensitive curvature and spatial shadow distribution features within different scale windows. Finally, a multilayer coarse-to-fine guided matching (ML-CFGM) strategy is introduced to enhance the matching stability of images in feature-sparse regions and realize multilayer feature matching. The superiority of the proposed method is validated with real-world data, providing technical support for the efficient registration of MBES and SSS images and for subsea pipeline detection.
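The grid-based selection in MS-CD can be pictured with a short sketch: given a corner-strength map, keep a fixed number of strongest responses inside each grid cell rather than selecting globally, which spreads feature points more evenly across the image. The corner response itself and the per-cell quota below are placeholders, not the published detector.

import numpy as np

def gridded_corners(response, grid=(8, 8), per_cell=5):
    """response: (H, W) corner-strength map; returns (row, col) of selected corners."""
    H, W = response.shape
    gh, gw = H // grid[0], W // grid[1]
    picks = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            cell = response[gy * gh:(gy + 1) * gh, gx * gw:(gx + 1) * gw]
            # Indices of the strongest responses inside this cell only.
            flat = np.argsort(cell.ravel())[-per_cell:]
            ys, xs = np.unravel_index(flat, cell.shape)
            picks.extend(zip(ys + gy * gh, xs + gx * gw))
    return np.array(picks)

resp = np.random.rand(256, 256)   # stand-in for a multiscale corner response map
print(gridded_corners(resp).shape)   # (8*8*5, 2): evenly spread feature points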
{"title":"HFSM: A Hierarchical Feature Structure-Driven Method for Multisource Sonar Image Registration of Subsea Pipelines","authors":"Jingyao Zhang;Xuerong Cui;Juan Li;Song Dai;Bin Jiang;Lei Li","doi":"10.1109/LGRS.2025.3632889","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632889","url":null,"abstract":"Subsea pipelines are prone to exposure due to natural factors such as earthquakes and vortices, which necessitates regular condition monitoring. Multibeam echo sounders (MBESs) can provide high-precision seabed topographic information, while side-scan sonar (SSS) excels at capturing high-resolution seabed texture features. The integration of these two data sources can complement each other, thereby improving the detection accuracy of subsea pipelines. To achieve effective fusion, high-precision spatial registration is required. However, existing registration algorithms still face challenges such as uneven feature point distribution, dependence on prior knowledge, and unstable matching. This letter proposes a multisource sonar image registration algorithm for subsea pipelines, named a hierarchical feature structure-driven method for multisource sonar image registration of subsea pipelines (HFSM). First, the method designs a grid-based multiscale corner detection (MS-CD), which effectively enhances the spatial distribution balance of feature points. Next, a multiwindow geometric–texture joint feature descriptor (MW-GTD) is proposed, which combines direction-sensitive curvature and spatial shadow distribution features within different scale windows. Finally, a multilayer coarse-to-fine guided matching (ML-CFGM) strategy is introduced to enhance the matching stability of images in feature-sparse regions and realize multilayer feature matching. The superiority of the proposed method is validated with real-world data, providing technical support for the efficient registration of MBES and SSS images and subsea pipeline detection.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
SAILDet: Wavelet-Preserved Lightweight One-Stage Detector for Tiny Objects in Remote Sensing
Jiaqi Ma;Hui Wang;Tianyou Wang;Haotian Li;Ruixue Xiao
Current convolutional neural network (CNN)-based tiny object detectors in remote sensing commonly face a resolution-transform bottleneck, characterized by irreversible feature information loss during downsampling and reconstruction distortions during upsampling. To address this issue, we propose a lightweight one-stage detector, the small-object-aware intelligent lightweight detector (SAILDet). Its core principle is to preserve information fidelity at the source rather than compensating for its loss in downstream stages. This is achieved through a paired design that employs Haar wavelet downsampling (HWD) to retain high-frequency details at the source and Content-Aware ReAssembly of FEatures (CARAFE) to perform artifact-free, fine-grained upsampling, thereby establishing a high-fidelity feature processing loop. Experiments on the DOTA dataset demonstrate that, compared to the baseline model, SAILDet reduces GFLOPs and parameters by 11.7% and 13.0%, respectively, while improving mAP@50–95 from 0.263 to 0.266 and mAP@50 from 0.411 to 0.422. Consistent gains are also observed on AI-TOD, reinforcing that directly optimizing the resolution-transform operators is more effective than downstream compensation.
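A compact sketch of Haar wavelet downsampling (HWD): each 2x2 block is decomposed into approximation (LL) and detail (LH, HL, HH) components that are stacked along the channel axis, so spatial resolution halves while high-frequency information is kept instead of discarded. This is the generic 2-D Haar transform, not necessarily the exact HWD layer used in SAILDet.

import torch

def haar_downsample(x):
    """x: (B, C, H, W) with even H and W -> (B, 4*C, H/2, W/2)."""
    a = x[:, :, 0::2, 0::2]   # top-left of each 2x2 block
    b = x[:, :, 0::2, 1::2]   # top-right
    c = x[:, :, 1::2, 0::2]   # bottom-left
    d = x[:, :, 1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2.0    # low-frequency approximation
    lh = (a - b + c - d) / 2.0    # horizontal detail
    hl = (a + b - c - d) / 2.0    # vertical detail
    hh = (a - b - c + d) / 2.0    # diagonal detail
    return torch.cat([ll, lh, hl, hh], dim=1)

x = torch.randn(1, 16, 64, 64)
print(haar_downsample(x).shape)   # torch.Size([1, 64, 32, 32])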
{"title":"SAILDet: Wavelet-Preserved Lightweight One-Stage Detector for Tiny Objects in Remote Sensing","authors":"Jiaqi Ma;Hui Wang;Tianyou Wang;Haotian Li;Ruixue Xiao","doi":"10.1109/LGRS.2025.3631843","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3631843","url":null,"abstract":"Current convolutional neural network (CNN)-based tiny object detectors in remote sensing commonly face a resolution transform bottleneck, characterized by irreversible feature information loss during downsampling and reconstruction distortions during upsampling. To address this issue, we propose a lightweight one-stage detector, small-object-aware intelligent lightweight detector (SAILDet). Its core principle is to preserve information fidelity at the source rather than compensating for its loss in downstream stages. This is achieved through a paired design that employs Haar wavelet downsampling (HWD) to retain high-frequency details at the source and Content-Aware ReAssembly of FEatures (CARAFE) to perform artifact-free, fine-grained upsampling, thereby establishing a high-fidelity feature processing loop. Experiments on the DOTA dataset demonstrate that, compared to the baseline model, SAILDet reduces GFLOPs and parameters by 11.7% and 13.0%, respectively, while improving mAP@50–95 from 0.263 to 0.266 and mAP@50 from 0.411 to 0.422. In addition, consistent gains are also observed on AI-TOD, reinforcing that directly optimizing the resolution-transform operators is more effective than downstream compensation.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
SA-RTDETR: A High-Precision Real-Time Detection Transformer Based on Complex Scenarios for SAR Object Detection
Zhaoyu Liu;Wei Chen;Lixia Yang
To address core challenges in synthetic aperture radar (SAR) image target detection, including complex background interference, weak small-target features, and multiscale target coexistence, this study proposes the synthetic aperture-optimized real-time detection transformer (SA-RTDETR) model. The framework incorporates three core modules to enhance detection efficacy. First, the bidirectional receptive field boosting module synergistically integrates local details with global contextual information and substantially improves discriminative feature extraction while preserving spatial resolution. Second, the deformable attention-based intrascale feature interaction module employs adaptive sampling of critical scattering regions to address localization difficulties of small targets in SAR imagery. Third, the attention upsampling module mitigates detail loss and aliasing artifacts inherent in traditional interpolation methods through feature compensation strategies. Experimental results on the SARDet-100K dataset demonstrate that SA-RTDETR achieves 90.1% mAP@50, 56.0% mAP@50-95, and an 84.7% recall rate, representing improvements of 2.7%, 2.6%, and 2.2% over the baseline model, respectively. The end-to-end architecture enables high-precision SAR image analysis and offers considerable potential for military reconnaissance and maritime surveillance applications. The SA-RTDETR model establishes a novel technical paradigm for reliable all-weather remote sensing target detection by harmonizing feature robustness, scale adaptability, and operational efficiency.
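The deformable, adaptive-sampling idea can be pictured with a minimal offset-sampling sketch: a small convolution predicts per-pixel offsets and the feature map is resampled at the offset locations with bilinear interpolation, so attention can shift toward critical scattering regions rather than a fixed grid. The single sampling point per pixel and the offset scaling are simplifying assumptions; this is not the SA-RTDETR module itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

class OffsetSampling(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.offset = nn.Conv2d(channels, 2, kernel_size=3, padding=1)  # (dx, dy) per pixel

    def forward(self, x):
        B, C, H, W = x.shape
        # Base sampling grid in normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
        base = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)
        # Predicted offsets, kept small via tanh, expressed in normalized units.
        off = torch.tanh(self.offset(x)).permute(0, 2, 3, 1) * (2.0 / max(H, W))
        return F.grid_sample(x, base + off, mode="bilinear", align_corners=True)

x = torch.randn(2, 32, 40, 40)
print(OffsetSampling(32)(x).shape)   # torch.Size([2, 32, 40, 40])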
{"title":"SA-RTDETR: A High-Precision Real-Time Detection Transformer Based on Complex Scenarios for SAR Object Detection","authors":"Zhaoyu Liu;Wei Chen;Lixia Yang","doi":"10.1109/LGRS.2025.3632153","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632153","url":null,"abstract":"To address core challenges in synthetic aperture radar (SAR) image target detection, including complex background interference, weak small-target features, and multiscale target coexistence, this study proposes the synthetic aperture-optimized real-time detection transformer (SA-RTDETR) model. The framework incorporates three core modules to enhance detection efficacy. First, the bidirectional receptive field boosting module synergistically integrates local details with global contextual information and substantially improves discriminative feature extraction while preserving spatial resolution. Second, the deformable attention-based intrascale feature interaction module employs adaptive sampling of critical scattering regions to address localization difficulties of small targets in SAR imagery. Third, the attention upsampling module mitigates detail loss and aliasing artifacts inherent in traditional interpolation methods through feature compensation strategies. Experimental results on the SARDet-100K dataset demonstrate that SA-RTDETR achieves 90.1% mAP@50, 56.0% mAP@50-95, and 84.7% recall rate representing improvements of 2.7%, 2.6%, and 2.2% over the baseline model, respectively. The end-to-end architecture enables high-precision SAR image analysis and offers considerable potential for military reconnaissance and maritime surveillance applications. The SA-RTDETR model establishes a novel technical paradigm for reliable all-weather remote sensing target detection by harmonizing feature robustness, scale adaptability, and operational efficiency.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
An End-to-End Sea Clutter Suppression Method Using Wavelet Convolution-Enhanced Attentional Complex-Valued Neural Network
Haoxuan Xu;Meiguo Gao
Marine radar is widely employed in ocean monitoring systems. However, sea clutter significantly impairs radar data interpretability and degrades maritime target detection performance. Effective clutter suppression methods are thus essential to enhance target characteristics and improve detection. In practice, environmental sea clutter often exhibits complex statistical characteristics, causing traditional model-based methods to suffer from performance degradation. To address this challenge, this letter proposes a sea clutter suppression method based on a complex-valued neural network (CVNN). First, the network incorporates a wavelet convolution (WTConv) block to expand the receptive field. Second, complex-valued convolutional blocks integrated with an attention mechanism are designed to enhance latent feature extraction. Finally, the model's performance is rigorously validated using real-measured data. Experimental results demonstrate that the proposed model achieves superior clutter suppression performance.
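A minimal sketch of a complex-valued convolution layer, the basic building block of a CVNN: the real and imaginary parts of the radar echo are convolved with real and imaginary kernels and recombined by (a+bi)(c+di) = (ac-bd) + (ad+bc)i. The layer sizes are placeholders; the wavelet-convolution and attention blocks of the letter are not reproduced here.

import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, real, imag):
        # (a + bi) * (c + di) = (ac - bd) + (ad + bc)i, applied channel-wise.
        out_r = self.conv_r(real) - self.conv_i(imag)
        out_i = self.conv_i(real) + self.conv_r(imag)
        return out_r, out_i

# Synthetic range-Doppler map split into real/imaginary channels.
re, im = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
r, i = ComplexConv2d(1, 8)(re, im)
print(r.shape, i.shape)   # torch.Size([1, 8, 64, 64]) twice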
{"title":"An End-to-End Sea Clutter Suppression Method Using Wavelet Convolution-Enhanced Attentional Complex-Valued Neural Network","authors":"Haoxuan Xu;Meiguo Gao","doi":"10.1109/LGRS.2025.3631806","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3631806","url":null,"abstract":"Marine radar is widely employed in ocean monitoring systems. However, sea clutter significantly impairs radar data interpretability and degrades maritime target detection performance. Effective clutter suppression methods are thus essential to enhance target characteristics for improved detection. However, environmental sea clutter often exhibits complex statistical characteristics, causing traditional model-based methods to suffer from performance degradation. To address this challenge, this letter proposes a sea clutter suppression method based on a complex-valued neural network (CVNN). First, the network incorporates a wavelet convolution (WTConv) block to expand the receptive field. Second, complex-valued convolutional blocks integrated with an attention mechanism are designed to enhance latent feature extraction. Finally, the model’s performance is rigorously validated using real-measured data. Experimental results demonstrate that the proposed model achieves superior clutter suppression performance.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
RSNet-Lite: A Lightweight Perception Subnetwork for Remote Sensing Object Detection
Haotian Li;Jiaqi Ma;Wenna Guo;Xiaoxia Li;Xiaohui Qin;Zhenhua Ma
With the rapid development of applications such as unmanned aerial vehicle (UAV)-based remote sensing, smart cities, and intelligent transportation, small-object detection has become increasingly important in the field of object recognition. However, existing methods often struggle to balance detection accuracy and inference efficiency under large-scale variations, dense small-object distributions, and complex background interference. To address these challenges, this letter proposes a lightweight perception subnetwork, RSNet-Lite. The network integrates a multiscale attention mechanism to enhance small-object perception; dynamic convolution and long-range spatial modeling units to improve feature representation; and lightweight convolution with efficient sampling strategies to significantly reduce computational complexity. As a result, RSNet-Lite achieves real-time inference while maintaining high detection accuracy, striking a balance between speed and performance. Finally, the proposed method is validated on the Aerial Image–Tiny Object Detection (AI-TOD) and Vision Meets Drone (VisDrone) datasets, demonstrating its effectiveness and strong potential for small-object detection tasks.
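The "lightweight convolution" ingredient can be illustrated with a depthwise-separable block, a common way to cut multiply-accumulate cost; whether RSNet-Lite uses exactly this factorization is an assumption, so treat the sketch as a generic illustration of the efficiency argument rather than the paper's layer.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Factorizes a KxK conv into per-channel spatial filtering plus a 1x1 channel mix."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Parameter comparison against a standard 3x3 convolution.
std = nn.Conv2d(64, 128, 3, padding=1)
lite = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(lite))   # ~73.9k vs ~9.0k parameters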
{"title":"RSNet-Lite: A Lightweight Perception Subnetwork for Remote Sensing Object Detection","authors":"Haotian Li;Jiaqi Ma;Wenna Guo;Xiaoxia Li;Xiaohui Qin;Zhenhua Ma","doi":"10.1109/LGRS.2025.3631871","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3631871","url":null,"abstract":"With the rapid development of applications such as unmanned aerial vehicle (UAV)-based remote sensing, smart cities, and intelligent transportation, small-object detection has become increasingly important in the field of object recognition. However, existing methods often struggle to balance detection accuracy and inference efficiency under large-scale variations, dense small-object distributions, and complex background interference. To address these challenges, this letter proposes a lightweight perception subnetwork, RSNet-Lite. The network integrates a multiscale attention mechanism to enhance small-object perception, dynamic convolution, and long-range spatial modeling units to improve feature representation, and lightweight convolution with efficient sampling strategies to significantly reduce computational complexity. As a result, RSNet-Lite achieves real-time inference while maintaining high detection accuracy, striking a balance between speed and performance. Finally, the proposed method is validated on the Aerial Image–Tiny Object Detection (AI-TOD) and Vision Meets Drone (VisDrone) datasets, demonstrating its effectiveness and strong potential for small-object detection tasks.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0