
Latest publications in IEEE Geoscience and Remote Sensing Letters: A Publication of the IEEE Geoscience and Remote Sensing Society

ARTEA: A Multistage Adaptive Preprocessing Algorithm for Subsurface Target Enhancement in Ground Penetrating Radar
Wenqiang Ding;Changying Ma;Xintong Dong;Xuan Li
The heterogeneity of subsurface media induces multipath scattering and dielectric loss in ground penetrating radar (GPR) signal propagation, which results in wavefront distortion and signal attenuation. These effects degrade B-scan profiles by blurring target signatures, hindering automated feature extraction, and reducing the clarity of regions of interest (ROI). To address these issues, we propose the adaptive region target enhancement algorithm (ARTEA), a multistage preprocessing framework. ARTEA integrates dynamic range compression, continuous-scale normalization guided by adaptive sigma maps, and a frequency-domain refinement step. By dynamically adjusting parameters according to local signal characteristics, ARTEA is designed to achieve an effective tradeoff between artifact suppression and target preservation. Experiments on both synthetic and field GPR data demonstrate that ARTEA can enhance target contrast and structural fidelity while suppressing artifacts and preserving essential target features.
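To make the described three-stage pipeline concrete, the sketch below applies log-based dynamic range compression, normalization by a locally adaptive sigma (standard deviation) map, and a frequency-domain band-pass refinement to a B-scan array. The function name, window size, and band limits are illustrative assumptions and do not reproduce the authors' ARTEA implementation.

```python
import numpy as np

def artea_like_preprocess(bscan, sigma_win=15, band=(0.02, 0.35)):
    """Illustrative three-stage enhancement of a GPR B-scan (traces x samples).

    Stages loosely mirror the ARTEA description: dynamic range compression,
    normalization guided by a locally adaptive sigma map, and a frequency-domain
    refinement step. Parameters and names are assumptions, not the authors' code.
    """
    # 1) Dynamic range compression: log-compress the amplitude while keeping sign.
    compressed = np.sign(bscan) * np.log1p(np.abs(bscan))

    # 2) Adaptive sigma map: local standard deviation from a moving window along
    #    the time axis, used to normalize each sample.
    pad = sigma_win // 2
    padded = np.pad(compressed, ((0, 0), (pad, pad)), mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, sigma_win, axis=1)
    sigma_map = windows.std(axis=-1) + 1e-6
    normalized = compressed / sigma_map

    # 3) Frequency-domain refinement: keep a band of normalized frequencies along
    #    the time axis to suppress low-frequency clutter and high-frequency noise.
    spec = np.fft.rfft(normalized, axis=1)
    freqs = np.fft.rfftfreq(normalized.shape[1])
    keep = (freqs >= band[0]) & (freqs <= band[1])
    spec[:, ~keep] = 0.0
    return np.fft.irfft(spec, n=normalized.shape[1], axis=1)

# Example: enhance a synthetic 64-trace, 512-sample B-scan.
enhanced = artea_like_preprocess(np.random.randn(64, 512))
```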
{"title":"ARTEA: A Multistage Adaptive Preprocessing Algorithm for Subsurface Target Enhancement in Ground Penetrating Radar","authors":"Wenqiang Ding;Changying Ma;Xintong Dong;Xuan Li","doi":"10.1109/LGRS.2025.3634350","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3634350","url":null,"abstract":"The heterogeneity of subsurface media induces multipath scattering and dielectric loss in ground penetrating radar (GPR) signal propagation, which results in wavefront distortion and signal attenuation. These effects degrade B-scan profiles by blurring target signatures, hindering automated feature extraction, and reducing the clarity of regions of interest (ROI). To address these issues, we propose the adaptive region target enhancement algorithm (ARTEA), a multistage preprocessing framework. ARTEA integrates dynamic range compression, continuous-scale normalization guided by adaptive sigma maps, and a frequency-domain refinement step. By dynamically adjusting parameters according to local signal characteristics, ARTEA is designed to achieve an effective tradeoff between artifact suppression and target preservation. Experiments on both synthetic and field GPR data demonstrate that ARTEA can enhance target contrast and structural fidelity while suppressing artifacts and preserving essential target features.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830889","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Lightweight Multifeature Hybrid Mamba for Remote Sensing Image Scene Classification
Huihui Dong;Jingcao Li;Zongfang Ma;Zhijie Li;Mengkun Liu;Xiaohui Wei;Licheng Jiao
Remote sensing (RS) image scene classification has wide applications in the RS field. Although existing methods achieve remarkable performance, they remain limited in feature extraction and lightweight design. Current multibranch models, while performing well, have large parameter counts and high computational costs, making them difficult to deploy on resource-constrained edge devices such as uncrewed aerial vehicles (UAVs). On the other hand, lightweight models such as StarNet have fewer parameters but rely on elementwise multiplication to generate features and do not capture explicit long-range spatial features, resulting in insufficient classification accuracy. To address these issues, this letter proposes a lightweight Mamba-based hybrid network, LMHMamba, whose core is an innovative lightweight multifeature hybrid Mamba (LMHM) module. This module combines the strength of StarNet in implicitly generating high-dimensional nonlinear features, introduces a lightweight state-space module to enhance spatial feature learning, and then uses local and global attention modules to emphasize local and global features. This enables effective multidimensional feature fusion while maintaining a low parameter count. We validate LMHMamba on three RS scene classification datasets and compare it with mainstream lightweight models and the latest methods. Experimental results show that LMHMamba reaches an advanced level in both classification accuracy and computational efficiency, significantly outperforming existing lightweight models and providing an efficient solution for edge deployment. Code is available at https://github.com/yizhilanmaodhh/LMHMamba
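The following minimal sketch illustrates two ingredients the abstract mentions: the StarNet-style "star operation" (elementwise product of two linear projections), which implicitly generates high-dimensional nonlinear features, and a toy linear recurrence standing in for the state-space (Mamba-style) scan that adds long-range mixing. The function names, decay factor, and fusion rule are assumptions for illustration, not the LMHM module itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def star_block(x, w1, w2):
    """StarNet-style 'star operation': elementwise product of two linear
    projections, an implicit high-dimensional nonlinear feature mixing."""
    return (x @ w1) * (x @ w2)

def toy_ssm_scan(x, decay=0.9):
    """Minimal linear recurrence over the token axis as a stand-in for a
    state-space scan: h_t = decay * h_{t-1} + x_t. A real selective scan
    uses learned, input-dependent parameters."""
    h = np.zeros_like(x[0])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = decay * h + x[t]
        out[t] = h
    return out

# Tokens from a flattened feature map: 196 tokens, 64 channels.
tokens = rng.standard_normal((196, 64))
w1 = rng.standard_normal((64, 64)) * 0.1
w2 = rng.standard_normal((64, 64)) * 0.1

local = star_block(tokens, w1, w2)   # implicit nonlinear feature mixing
context = toy_ssm_scan(local)        # long-range mixing along the sequence
fused = local + context              # simple residual-style fusion
print(fused.shape)                   # (196, 64)
```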
{"title":"A Lightweight Multifeature Hybrid Mamba for Remote Sensing Image Scene Classification","authors":"Huihui Dong;Jingcao Li;Zongfang Ma;Zhijie Li;Mengkun Liu;Xiaohui Wei;Licheng Jiao","doi":"10.1109/LGRS.2025.3634398","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3634398","url":null,"abstract":"Remote sensing (RS) image scene classification has wide applications in the field of RS. Although the existing methods have achieved remarkable performance, there are still limitations in feature extraction and lightweight design. Current multibranch models, although performing well, have large parameter counts and high computational costs, making them difficult to deploy on resource-constrained edge devices, such as uncrewed aerial vehicles (UAVs). On the other hand, lightweight models like StarNet, having less parameter, but rely on elementwise multiplication to generate features and lack the capture of explicit long-range spatial feature, resulting in insufficient classification accuracy. To address these issues, this letter proposes a lightweight mamba-based hybrid network, namely LMHMamba, whose core is an innovative lightweight multifeature hybrid Mamba (LMHM) module. This module combines the advantage of StarNet in implicitly generating high-dimensional nonlinear features, introduces a lightweight state-space module to enhance spatial feature learning capabilities, and then uses local and global attention modules to emphasize local and global features. This enables effective multidimensional feature fusion while maintaining low parameter. We validate the performance of LMHMamba model on three RS scene classification datasets and compare it with mainstream lightweight models and the latest methods. Experimental results show that LMHMamba achieves advanced levels in both classification accuracy and computational efficiency, significantly outperforming the existing lightweight models, providing an efficient solution for edge deployment. Code is available at <uri>https://github.com/yizhilanmaodhh/LMHMamba</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
WCEDNet: A Weighted Cascaded Encoder–Decoder Network for Hyperspectral Change Detection Based on Spatial–Spectral Difference Features
Bo Zhang;Yaxiong Chen;Ruilin Yao;Shengwu Xiong
The core of hyperspectral change detection lies in accurately capturing spectral feature differences across different temporal phases to determine whether surface objects have changed. Since spectral variations of different ground objects often manifest more prominently in specific wavelength bands, we design a weighted cascaded encoder–decoder network (WCEDNet) based on spatial–spectral difference features for hyperspectral change detection. First, unlike conventional change detection frameworks based on siamese networks, our proposed single-branch approach focuses more intensively on extracting spatial–spectral difference features. Second, the weighted cascaded structure introduced in the encoder stage enables differential attention to different bands, enhancing focus on spectral bands with high responsiveness. Furthermore, we have developed a spatial–spectral cross-attention (SSCA) module to model intrafeature correlations within spatial and spectral domains. Our method was evaluated on three challenging hyperspectral change detection datasets, and experimental results demonstrate its superior performance compared to competitive models. The detailed code has been open-sourced at https://github.com/WUTCM-Lab/WCEDNet
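As a rough illustration of weighting spectral difference features, the sketch below forms the bitemporal difference cube and derives per-band weights from a softmax over each band's mean absolute response, emphasizing highly responsive bands. This is an assumed, simplified stand-in for WCEDNet's learned weighted cascade; the function name and temperature parameter are hypothetical.

```python
import numpy as np

def band_weighted_difference(t1, t2, temperature=1.0):
    """Weight spectral difference features band by band.

    t1, t2: (H, W, B) bitemporal hyperspectral images. Each band's weight is a
    softmax over its mean absolute difference, so bands that respond strongly
    to change are emphasized. Simplified illustration, not the authors' code.
    """
    diff = t1 - t2                                 # spectral difference features
    band_energy = np.abs(diff).mean(axis=(0, 1))   # (B,) per-band response
    logits = band_energy / temperature
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return diff * weights[None, None, :], weights

t1 = np.random.rand(32, 32, 100)
t2 = np.random.rand(32, 32, 100)
weighted_diff, band_weights = band_weighted_difference(t1, t2)
print(weighted_diff.shape, band_weights.argmax())
```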
{"title":"WCEDNet: A Weighted Cascaded Encoder–Decoder Network for Hyperspectral Change Detection Based on Spatial–Spectral Difference Features","authors":"Bo Zhang;Yaxiong Chen;Ruilin Yao;Shengwu Xiong","doi":"10.1109/LGRS.2025.3634345","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3634345","url":null,"abstract":"The core of hyperspectral change detection lies in accurately capturing spectral feature differences across different temporal phases to determine whether surface objects have changed. Since spectral variations of different ground objects often manifest more prominently in specific wavelength bands, we design a weighted cascaded encoder–decoder network (WCEDNet) based on spatial–spectral difference features for hyperspectral change detection. First, unlike conventional change detection frameworks based on siamese networks, our proposed single-branch approach focuses more intensively on extracting spatial–spectral difference features. Second, the weighted cascaded structure introduced in the encoder stage enables differential attention to different bands, enhancing focus on spectral bands with high responsiveness. Furthermore, we have developed a spatial–spectral cross-attention (SSCA) module to model intrafeature correlations within spatial and spectral domains. Our method was evaluated on three challenging hyperspectral change detection datasets, and experimental results demonstrate its superior performance compared to competitive models. The detailed code has been open-sourced at <uri>https://github.com/WUTCM-Lab/WCEDNet</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Multiscale Feature Refinement Detector for Small Objects With Ambiguous Boundaries
Weihua Shen;Yalin Li;Xiaohua Chen;Chunzhi Li
There are multiple challenges in small object detection (SOD), including limited instances, insufficient features, diverse scales, uneven distribution, ambiguous boundaries, and complex backgrounds. These issues often lead to high false detection rates and hinder model generalization and convergence. This study proposes a multiscale object detection algorithm that enhances the detection of subtle features by improving the detection head (DH) throughout the network and incorporating a minimum point distance intersection-over-union loss. The enhanced DH improves target representation, enabling more precise localization and classification of small objects. Meanwhile, the new loss (NL) function stabilizes bounding box regression by adaptively adjusting auxiliary bounding box scales. Evaluations on two benchmark datasets demonstrate that our method achieves a 2.6% increase in mAP50 and a 1.8% improvement in mAP50:95 on the satellite imagery multivehicles dataset (SIMD) and a 1.9% increase in mAP50:95 on the DIOR dataset. Furthermore, the model reduces the number of parameters by 2.5% and the computational cost by 1.4%, demonstrating its potential for real-time detection applications.
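The letter's "minimum point distance intersection-over-union loss" is not spelled out here; the sketch below implements one published MPDIoU-style formulation, which subtracts the image-diagonal-normalized squared distances between corresponding box corners from the IoU. Whether this exact variant matches the paper's loss is an assumption.

```python
import numpy as np

def mpdiou_loss(pred, target, img_w, img_h):
    """Minimum-point-distance IoU loss (one published formulation).

    Boxes are (x1, y1, x2, y2). The loss is 1 - IoU plus the squared distances
    between the two corner pairs, normalized by the image diagonal, so the
    regression signal stays informative even for small or non-overlapping boxes.
    """
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target

    inter_w = max(0.0, min(px2, tx2) - max(px1, tx1))
    inter_h = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = inter_w * inter_h
    union = (px2 - px1) * (py2 - py1) + (tx2 - tx1) * (ty2 - ty1) - inter
    iou = inter / (union + 1e-9)

    diag2 = img_w ** 2 + img_h ** 2
    d1 = ((px1 - tx1) ** 2 + (py1 - ty1) ** 2) / diag2   # top-left corners
    d2 = ((px2 - tx2) ** 2 + (py2 - ty2) ** 2) / diag2   # bottom-right corners
    return 1.0 - (iou - d1 - d2)

print(mpdiou_loss((10, 10, 50, 40), (12, 12, 48, 44), img_w=640, img_h=640))
```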
{"title":"A Multiscale Feature Refinement Detector for Small Objects With Ambiguous Boundaries","authors":"Weihua Shen;Yalin Li;Xiaohua Chen;Chunzhi Li","doi":"10.1109/LGRS.2025.3633285","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3633285","url":null,"abstract":"There are multiple challenges in small object detection (SOD), including limited instances, insufficient features, diverse scales, uneven distribution, ambiguous boundaries, and complex backgrounds. These issues often lead to high false detection rates and hinder model generalization and convergence. This study proposes a multiscale object detection algorithm that enhances the detection of subtle features by improving the change detection to DH throughout and incorporating a minimum point distance intersection-over-union loss. The enhanced DH improves target representation, enabling more precise localization and classification of small objects. Meanwhile, the new loss (NL) function stabilizes bounding box regression by adaptively adjusting auxiliary bounding box scales. Evaluations on two benchmark datasets demonstrate that our method achieves a 2.6% increase in mAP50 and a 1.8% improvement in mAP50:95 on the satellite imagery multivehicles dataset (SIMD) and a 1.9% increase in mAP50:95 on the DIOR dataset. Furthermore, the model reduces the number of parameters by 2.5% and the computational cost by 1.4%, demonstrating its potential for real-time detection applications.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Geospatial Domain Adaptation With Truncated Parameter-Efficient Fine-Tuning
Kwonyoung Kim;Jungin Park;Kwanghoon Sohn
Parameter-efficient fine-tuning (PEFT) adapts large pretrained foundation models to downstream tasks, such as remote sensing scene classification, by learning a small set of additional parameters while keeping the pretrained parameters frozen. While PEFT offers substantial training efficiency over full fine-tuning (FT), it still incurs high inference costs due to reliance on both pretrained and task-specific parameters. To address this limitation, we propose a novel PEFT approach with model truncation, termed truncated parameter-efficient fine-tuning (TruncPEFT), enabling efficiency gains to persist during inference. Observing that predictions from final and intermediate layers often exhibit high agreement, we truncate a set of final layers and replace them with a lightweight attention module. Additionally, we introduce a token dropping strategy to mitigate interclass interference, reducing the model’s sensitivity to visual similarities between different classes in remote sensing data. Extensive experiments on seven remote sensing scene classification datasets demonstrate the effectiveness of the proposed method, significantly improving training, inference, and GPU memory efficiencies while achieving comparable or even better performance than prior PEFT methods and full FT.
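To show the flavor of the two ideas, the sketch below computes how often intermediate-layer predictions already agree with the final layer (a possible signal for choosing a truncation depth) and drops the lowest-scoring tokens. The agreement criterion, scoring by feature norm, and keep ratio are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def layer_agreement(logits_per_layer, final_logits):
    """Fraction of samples where each intermediate layer's argmax prediction
    already matches the final layer's; high agreement motivates truncating the
    remaining layers (illustrative criterion, not the paper's exact rule)."""
    final_pred = final_logits.argmax(axis=-1)
    return [(l.argmax(axis=-1) == final_pred).mean() for l in logits_per_layer]

def drop_tokens(tokens, scores, keep_ratio=0.7):
    """Keep only the highest-scoring tokens (e.g., by feature norm) to reduce
    interclass interference and compute; keep_ratio is an assumed setting."""
    k = max(1, int(tokens.shape[0] * keep_ratio))
    idx = np.argsort(scores)[-k:]
    return tokens[np.sort(idx)]

rng = np.random.default_rng(0)
layers = [rng.standard_normal((128, 45)) for _ in range(4)]   # 4 layers, 45 classes
print(layer_agreement(layers[:-1], layers[-1]))

tokens = rng.standard_normal((197, 768))
kept = drop_tokens(tokens, np.linalg.norm(tokens, axis=1), keep_ratio=0.5)
print(kept.shape)   # (98, 768)
```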
{"title":"Geospatial Domain Adaptation With Truncated Parameter-Efficient Fine-Tuning","authors":"Kwonyoung Kim;Jungin Park;Kwanghoon Sohn","doi":"10.1109/LGRS.2025.3633718","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3633718","url":null,"abstract":"Parameter-efficient fine-tuning (PEFT) adapts large pretrained foundation models to downstream tasks, such as remote sensing scene classification, by learning a small set of additional parameters while keeping the pretrained parameters frozen. While PEFT offers substantial training efficiency over full fine-tuning (FT), it still incurs high inference costs due to reliance on both pretrained and task-specific parameters. To address this limitation, we propose a novel PEFT approach with model truncation, termed truncated parameter-efficient fine-tuning (TruncPEFT), enabling efficiency gains to persist during inference. Observing that predictions from final and intermediate layers often exhibit high agreement, we truncate a set of final layers and replace them with a lightweight attention module. Additionally, we introduce a token dropping strategy to mitigate interclass interference, reducing the model’s sensitivity to visual similarities between different classes in remote sensing data. Extensive experiments on seven remote sensing scene classification datasets demonstrate the effectiveness of the proposed method, significantly improving training, inference, and GPU memory efficiencies while achieving comparable or even better performance than prior PEFT methods and full FT.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Sequential Doppler Offset (SDO) Method for Locating Targets Causing Azimuth Fractional Ambiguity in Spaceborne HRWS-SAR
Yanyan Zhang;Akira Hirose;Ryo Natsuaki
Advanced Land Observing Satellite-4 (ALOS-4) is a spaceborne high-resolution and wide-swath synthetic aperture radar (HRWS-SAR) that uses a variable pulse repetition interval (VPRI) technique to achieve continuous wide-swath imaging. In some ALOS-4 images, azimuth fractional ambiguity caused by the VPRI is observed; it differs from the usual integer ambiguity resulting from interchannel errors in that it occurs at smaller intervals. In this letter, we propose a sequential Doppler offset (SDO) method for locating the original target (OT) that causes azimuth fractional ambiguity. First, the ratio of the interval of integer ambiguity to that of fractional ambiguity is obtained, which is used to generate SAR images with different Doppler center frequencies. Second, the coherence between the sum image of the generated images and the image with a zero Doppler center frequency is calculated. Third, points with coherence greater than a threshold are selected. Finally, the final OT is obtained by detecting the filtered selected points. Experiments are conducted on ALOS-4 L1.2 data, and the results demonstrate that the method locates the OT accurately. In short, the proposed method provides a starting point for fractional ambiguity suppression in HRWS-SAR.
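The third step relies on a windowed coherence estimate; a minimal sketch of that estimate between two single-look complex images is shown below, followed by a simple threshold to keep candidate points. The window size and the 0.9 threshold are illustrative assumptions, not the values used in the letter.

```python
import numpy as np
from scipy.signal import fftconvolve

def local_coherence(img_a, img_b, win=5):
    """Windowed complex coherence |<a b*>| / sqrt(<|a|^2><|b|^2>) between two
    single-look complex images, estimated with a (win x win) boxcar average."""
    kernel = np.ones((win, win)) / (win * win)
    num = fftconvolve(img_a * np.conj(img_b), kernel, mode="same")
    pa = fftconvolve(np.abs(img_a) ** 2, kernel, mode="same")
    pb = fftconvolve(np.abs(img_b) ** 2, kernel, mode="same")
    den = np.sqrt(np.maximum(pa * pb, 0.0)) + 1e-12
    return np.abs(num) / den

rng = np.random.default_rng(1)
shape = (256, 256)
slc_zero = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)   # zero-Doppler image
slc_sum = slc_zero + 0.1 * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

coherence = local_coherence(slc_sum, slc_zero)
candidates = np.argwhere(coherence > 0.9)   # points retained before the final OT detection
print(candidates.shape)
```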
{"title":"A Sequential Doppler Offset (SDO) Method for Locating Targets Causing Azimuth Fractional Ambiguity in Spaceborne HRWS-SAR","authors":"Yanyan Zhang;Akira Hirose;Ryo Natsuaki","doi":"10.1109/LGRS.2025.3633588","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3633588","url":null,"abstract":"Advanced Land Observing Satellite-4 (ALOS-4) is a spaceborne high-resolution and wide-swath synthetic aperture radar (HRWS-SAR) that uses a variable pulse repetition interval (VPRI) technique to achieve continuous wide imaging. In some ALOS-4 images, azimuth fractional ambiguity caused by the VPRI is observed, and it differs from the usual integer ambiguity, resulting from interchannel errors in that it occurs at smaller intervals. In this letter, we propose a sequential Doppler offset (SDO) method for locating the original target (OT) that causes azimuth fractional ambiguity. First, the ratio of the interval of integer ambiguity to that of fractional ambiguity is obtained, which is used to generate SAR images with different Doppler center frequencies. Second, the coherence between the sum image of the generated images and the image with a zero Doppler center frequency is calculated. Third, some points with coherence greater than a threshold are selected based on the coherence. Finally, the final OT is obtained by detecting the filtered selected points. Some experiments are conducted based on ALOS-4 L1.2 data, and the results demonstrate that the method locates the OT accurately. In short, the proposed method provides a starting point for fractional ambiguity suppression in HRWS-SAR.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Physics-Aware Neural Framework for Multidepth Soil Carbon Mapping
Bishal Roy;Vasit Sagan;Haireti Alifu;Jocelyn Saxton;Cagri Gul;Nadia Shakoor
Depth-resolved estimation of soil organic carbon (SOC) remains challenging because optical measurements originate at the surface while carbon dynamics vary vertically. We propose a physics-aware uncrewed aerial vehicle (UAV) framework that integrates multispectral imagery (MSI) and hyperspectral imagery (HSI) to estimate SOC concentration (%) across five depths. The experiment was conducted at Plantheaven Farms, Missouri, with ten sorghum genotypes across three replicates. Feature construction combined spectral derivatives from HSI with texture features from MSI, compressed via principal component analysis (PCA). Physics-based regularization was implemented through: 1) a second-difference penalty to enforce vertical smoothness and 2) a profile-integral consistency constraint to preserve whole-profile balance. Four model configurations evaluated on local data showed progressive improvements: MSI-only, MSI + HSI, MSI + HSI with smoothness, and MSI + HSI with full physics constraints. In addition, transfer learning from the open soil spectral library (OSSL) was tested to address data limitations. Model fitting on the available data achieved $R^{2} = 0.72$ at 0–30 cm, with physics-aware constraints notably improving vertical coherence. The physics-aware model reduced variance and improved plausibility. In-sample, transfer learning achieved $R^{2} = 0.60$ at 0–30 cm, with conservative interpretation below 90 cm due to reduced optical sensitivity. Exploratory genotype patterns suggested higher surface SOC percent for PI 656 029 and PI 656 057, and lower values for PI 276 837 and PI 656 044.
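The two regularizers can be written compactly; the sketch below evaluates a discrete second-difference smoothness penalty along the depth axis and a depth-weighted profile-integral consistency term for one predicted profile. The layer weighting, penalty weights, and example numbers are assumptions, and the terms would be added to the data-fit loss during training.

```python
import numpy as np

def physics_regularizers(pred_profile, depth_weights, target_total):
    """Two physics-based penalties for a multidepth SOC prediction.

    pred_profile: (D,) predicted SOC (%) at D depth intervals for one sample.
    depth_weights: (D,) thickness (or weight) of each interval, used to form a
    depth-integrated total. Names and weighting are illustrative assumptions.
    """
    # 1) Second-difference (vertical smoothness) penalty: discourages jagged
    #    depth profiles by penalizing the discrete second derivative.
    second_diff = pred_profile[2:] - 2 * pred_profile[1:-1] + pred_profile[:-2]
    smoothness = np.sum(second_diff ** 2)

    # 2) Profile-integral consistency: the depth-weighted sum of the profile
    #    should match a reference whole-profile total.
    integral = np.sum(pred_profile * depth_weights)
    consistency = (integral - target_total) ** 2
    return smoothness, consistency

profile = np.array([2.1, 1.8, 1.4, 1.1, 0.9])   # SOC % at 5 depths
weights = np.array([0.1, 0.2, 0.3, 0.2, 0.2])   # relative layer thicknesses
s, c = physics_regularizers(profile, weights, target_total=1.35)
total_penalty = 1.0 * s + 1.0 * c               # added to the data-fit loss
print(s, c, total_penalty)
```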
{"title":"Physics-Aware Neural Framework for Multidepth Soil Carbon Mapping","authors":"Bishal Roy;Vasit Sagan;Haireti Alifu;Jocelyn Saxton;Cagri Gul;Nadia Shakoor","doi":"10.1109/LGRS.2025.3632815","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632815","url":null,"abstract":"Depth-resolved estimation of soil organic carbon (SOC) remains challenging because optical measurements originate at the surface while carbon dynamics vary vertically. We propose a physics-aware uncrewed aerial vehicle (UAV) framework that integrates multispectral imagery (MSI) and hyperspectral imagery (HSI) to estimate SOC concentration (%) across five depths. The experiment was conducted at Plantheaven Farms, Missouri, with ten sorghum genotypes across three replicates. Feature construction combined spectral derivatives from HSI with texture features from MSI, compressed via principal component analysis (PCA). Physics-based regularization was implemented through: 1) a second-difference penalty to enforce vertical smoothness and 2) a profile-integral consistency constraint to preserve whole-profile balance. Four model configurations evaluated on local data showed progressive improvements: MSI-only, MSI + HSI, MSI + HSI with smoothness, and MSI + HSI with full physics constraints. In addition, transfer learning from the open soil spectral library (OSSL) was tested to address data limitations. Model fitting on the available data achieved <inline-formula> <tex-math>${R} ^{2} = 0.72$ </tex-math></inline-formula> at 0–30 cm, with physics-aware constraints notably improving vertical coherence. The physics-aware model reduced variance and improved plausibility. In-sample, transfer learning achieved <inline-formula> <tex-math>${R} ^{2}=0.60$ </tex-math></inline-formula> at 0–30 cm, with conservative interpretation below 90 cm due to reduced optical sensitivity. Exploratory genotype patterns suggested higher surface SOC percent for PI 656 029 and PI 656 057, and lower values for PI 276 837 and PI 656 044.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Lightweight Attention Mechanism With Feature Differences for Efficient Change Detection in Remote Sensing
Jangsoo Park;EunSeong Lee;Jongseok Lee;Seoung-Jun Oh;Donggyu Sim
This letter presents a low-complexity attention module for fast change detection. The proposed module computes the absolute difference between bitemporal features extracted by a Siamese backbone network and sequentially applies spatial and channel attention to generate key change representations. Spatial attention emphasizes important spatial locations using representative values from channelwise pooling, while channel attention highlights discriminative feature responses using values from spatialwise pooling. By leveraging low-dimensional representative features, the module significantly reduces computational cost. Additionally, its dual-attention structure, driven by feature differences, enhances both spatial localization and semantic relevance of changes. Compared to the change-guided network (CGNet), the proposed method reduces multiply-accumulate operations (MACs) by 53.81% with only a 0.15% drop in $F_{1}$-score, demonstrating high efficiency with minimal performance degradation. These results suggest that the proposed method is suitable for large-scale or real-time remote sensing (RS) applications where computational efficiency is essential.
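A minimal sketch of the described data path (absolute difference of bitemporal features, spatial attention from channelwise pooling, then channel attention from spatialwise pooling) is given below. The learned convolution and projection layers of the real module are omitted and replaced by identity mappings, so this only illustrates the flow, not the trained module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def difference_attention(feat_t1, feat_t2):
    """Absolute difference of bitemporal (C, H, W) features, followed by
    spatial attention from channelwise pooling and channel attention from
    spatialwise pooling. Learned layers are replaced by identity mappings."""
    diff = np.abs(feat_t1 - feat_t2)

    # Spatial attention: pool across channels to a (1, H, W) map.
    spatial_repr = 0.5 * (diff.mean(axis=0) + diff.max(axis=0))
    spatial_att = sigmoid(spatial_repr)[None, :, :]
    x = diff * spatial_att

    # Channel attention: pool across space to a (C, 1, 1) vector.
    channel_repr = 0.5 * (x.mean(axis=(1, 2)) + x.max(axis=(1, 2)))
    channel_att = sigmoid(channel_repr)[:, None, None]
    return x * channel_att

f1 = np.random.rand(64, 32, 32)
f2 = np.random.rand(64, 32, 32)
out = difference_attention(f1, f2)
print(out.shape)   # (64, 32, 32)
```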
{"title":"Lightweight Attention Mechanism With Feature Differences for Efficient Change Detection in Remote Sensing","authors":"Jangsoo Park;EunSeong Lee;Jongseok Lee;Seoung-Jun Oh;Donggyu Sim","doi":"10.1109/LGRS.2025.3633179","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3633179","url":null,"abstract":"This letter presents a low-complexity attention module for fast change detection. The proposed module computes the absolute difference between bitemporal features extracted by a Siamese backbone network and sequentially applies spatial and channel attention to generate key change representations. Spatial attention emphasizes important spatial locations using representative values from channelwise pooling, while channel attention highlights discriminative feature responses using values from spatialwise pooling. By leveraging low-dimensional representative features, the module significantly reduces computational cost. Additionally, its dual-attention structure-driven by feature differences-enhances both spatial localization and semantic relevance of changes. Compared to the change-guided network (CGNet), the proposed method reduces multiply-accumulate operations (MACs) by 53.81% with only a 0.15% drop in <inline-formula> <tex-math>${F}1$ </tex-math></inline-formula>-score, demonstrating high efficiency with minimal performance degradation. These results suggest that the proposed method is suitable for large-scale or real-time remote sensing (RS) applications where computational efficiency is essential.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
A Lightweight Method of Cloud-Sky Surface Upward Longwave Radiation Real-Time Estimation for FY-4A Geostationary Satellite
Qiang Na;Biao Cao;Wanchun Zhang;Limeng Zheng;Xi Zhang;Ziyi Yang;Qinhuo Liu
Satellite-derived surface upward longwave radiation (SULR) is essential for monitoring the global surface radiation budget, ecological processes, and climate change. However, the widely used SULR products derived from thermal infrared (TIR) remote sensing exhibit spatial discontinuities because TIR signals cannot penetrate cloud cover. Conventional cloud-sky SULR estimation approaches often rely on post-processed reanalysis data as inputs, which cannot meet the real-time requirements of an operational system. This study proposes a lightweight cloud-sky SULR real-time estimation method for the Fengyun-4A (FY-4A) geostationary satellite using a Light Gradient Boosting Machine (LightGBM) model. The daytime cloud-sky SULR is estimated by applying the relationship established between auxiliary variables and clear-sky SULR to cloudy conditions, while the nighttime cloud-sky SULR values are estimated by applying the relationship determined between input variables and a publicly accessible, gap-filled SULR product. The model inputs include: 1) spatial-temporal location record data; 2) multiple surface characteristic parameters generated from previous-year data; and 3) two categories of operational FY-4A radiation products, both of which are available in real time. Validation against six Heihe Watershed Allied Telemetry Experimental Research (HiWATER) sites demonstrates that the reconstructed cloud-sky SULR achieves acceptable root mean square error (RMSE) and mean bias error (MBE) values of 33.4 W/m² (1.5 W/m²) for daytime and 25.2 W/m² (4.7 W/m²) for nighttime conditions. Therefore, the proposed lightweight method can improve the spatial coverage of the current FY-4A SULR product and further promote real-time SULR-related applications.
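Since the estimator is a LightGBM regressor trained on clear-sky samples and then applied to cloud-covered pixels, a hedged sketch of that daytime strategy using the lightgbm scikit-learn-style API is shown below. The feature layout, sample counts, and hyperparameters are placeholders, not the operational FY-4A configuration.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)

# Placeholder feature matrix: e.g., [latitude, longitude, day-of-year, hour,
# surface characteristic, FY-4A radiation product 1, product 2]. The column
# layout is an assumption for illustration only.
n_clear, n_cloud, n_feat = 5000, 2000, 7
X_clear = rng.random((n_clear, n_feat))
y_clear_sulr = 300.0 + 150.0 * rng.random(n_clear)   # synthetic clear-sky SULR (W/m^2)
X_cloud = rng.random((n_cloud, n_feat))

# Learn the auxiliary-variable -> SULR relationship under clear skies, then
# apply it to cloudy pixels, mirroring the daytime strategy described above.
model = lgb.LGBMRegressor(n_estimators=300, learning_rate=0.05, num_leaves=63)
model.fit(X_clear, y_clear_sulr)
cloud_sky_sulr = model.predict(X_cloud)
print(cloud_sky_sulr[:5])
```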
{"title":"A Lightweight Method of Cloud-Sky Surface Upward Longwave Radiation Real-Time Estimation for FY-4A Geostationary Satellite","authors":"Qiang Na;Biao Cao;Wanchun Zhang;Limeng Zheng;Xi Zhang;Ziyi Yang;Qinhuo Liu","doi":"10.1109/LGRS.2025.3632860","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632860","url":null,"abstract":"Satellite-derived surface upward longwave radiation (SULR) is essential for monitoring the global surface radiation budget, ecological processes, and climate change. However, the widely used SULR products derived from thermal infrared (TIR) remote sensing exhibit spatial discontinuities because TIR signals cannot penetrate cloud cover. Conventional cloud-sky SULR estimation approaches often utilize post-processed reanalysis data as inputs, which could not meet the real-time requirement of the operational system. This study proposes a lightweight cloud-sky SULR real-time estimation method for the Fengyun-4A (FY-4A) geostationary satellite using a Light Gradient Boosting Machine (LightGBM) model. The daytime cloud-sky SULR is estimated by applying the established relationship between auxiliary variables and clear-sky SULR to cloudy conditions, while the nighttime cloud-sky SULR values are estimated by applying the determined relationship between input variables and a publicly accessible, gap-filled SULR product. The model inputs include: 1) spatial-temporal location record data; 2) multiple surface characteristic parameters generated from previous-year data; and 3) two categories of operational FY-4A radiation products, with both components being available in real-time. Validation against six Heihe Watershed Allied Telemetry Experimental Research (HiWATER) sites demonstrates that the reconstructed cloud-sky SULR achieves acceptable root mean square error (RMSE) and mean bias error (MBE) values of 33.4 W/m2 (1.5 W/m2) for daytime and 25.2 W/m2 (4.7 W/m2) for nighttime conditions. Therefore, the proposed lightweight method could improve the spatial coverage of the current FY-4A SULR product and further promote real-time SULR-related applications.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145830838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
KECS-Net: Knowledge-Embedded CSwin-UNet With Slicing-Aided Hypersegmentation for Infrared Small Target Detection
Lingxiao Li;Linlin Liu;Dan Huang;Sen Wang;Xutao Wang;Yunan He;Zhuqiang Zhong
Infrared small target detection (IRSTD) remains a long-standing challenge in infrared imaging technology. To enhance detection performance while more effectively exploiting target-specific characteristics, a novel U-shaped segmentation network called knowledge-embedded CSwin-UNet (KECS-Net) is proposed in this letter. KECS-Net first incorporates a CSwin transformer module into the encoder of the UNet backbone, enabling the extraction of multiscale features from infrared targets within an expanded receptive field while achieving higher computational efficiency than the original Swin transformer. In addition, a multiscale local contrast enhancement module (MLCEM) is introduced, which utilizes hand-crafted dilated convolution operators to amplify locally salient target responses and suppress background noise, thereby guiding the model to focus on potential target regions. Finally, a slicing-aided hypersegmentation (SAHS) method is designed to resize and rescale the output image, increasing the relative size of small targets and improving segmentation accuracy during inference. Extensive experiments on three benchmark datasets demonstrate that the proposed KECS-Net outperforms the state-of-the-art (SOTA) methods in both quantitative metrics and visual quality. Relevant code will be available at https://github.com/Lilingxiao-image/KECS-Net
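The slicing-aided inference step can be sketched as tile-wise segmentation with upscaling: crop overlapping tiles, enlarge them so small targets cover more pixels, segment, then downscale and average the overlapping predictions. The tile size, overlap, scaling factor, and the toy thresholding "model" below are illustrative assumptions, not the KECS-Net inference code.

```python
import numpy as np

def slicing_aided_inference(image, segment_fn, tile=256, overlap=64, scale=2):
    """Crop overlapping tiles from a (H, W) infrared frame, upscale each tile,
    segment it with segment_fn (maps (h, w) -> (h, w)), downscale the result,
    and average overlapping predictions back into a full-size mask."""
    H, W = image.shape
    out = np.zeros((H, W), dtype=np.float64)
    hits = np.zeros((H, W), dtype=np.float64)
    step = tile - overlap
    ys = list(range(0, max(H - tile, 0), step)) + [max(H - tile, 0)]
    xs = list(range(0, max(W - tile, 0), step)) + [max(W - tile, 0)]
    for y in ys:
        for x in xs:
            patch = image[y:y + tile, x:x + tile]
            up = np.kron(patch, np.ones((scale, scale)))   # naive upscaling
            pred = segment_fn(up)[::scale, ::scale]        # back to tile size
            out[y:y + tile, x:x + tile] += pred
            hits[y:y + tile, x:x + tile] += 1.0
    return out / np.maximum(hits, 1.0)

# Toy "model": threshold bright pixels; stands in for the trained network.
toy_segment = lambda patch: (patch > patch.mean() + 2 * patch.std()).astype(float)
mask = slicing_aided_inference(np.random.rand(512, 512), toy_segment)
print(mask.shape)   # (512, 512)
```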
{"title":"KECS-Net: Knowledge-Embedded CSwin-UNet With Slicing-Aided Hypersegmentation for Infrared Small Target Detection","authors":"Lingxiao Li;Linlin Liu;Dan Huang;Sen Wang;Xutao Wang;Yunan He;Zhuqiang Zhong","doi":"10.1109/LGRS.2025.3632827","DOIUrl":"https://doi.org/10.1109/LGRS.2025.3632827","url":null,"abstract":"Infrared small target detection (IRSTD) remains a long challenging problem in infrared imaging technology. To enhance detection performance while more effectively exploiting target-specific characteristics, a novel U-shaped segmentation network called knowledge-embedded CSwin-UNet (KECS-Net) is proposed in this letter. KECS-Net first incorporates a CSwin transformer module into the encoder of the UNet backbone, enabling the extraction of multiscale features from infrared targets within an expanded receptive field, while achieving higher computational efficiency compared to the original Swin transformer. Besides, a multiscale local contrast enhancement module (MLCEM) is introduced, which utilizes hand-crafted dilated convolution operators to amplify locally salient target responses and suppress background noise, thereby guiding the model to focus on potential target regions. Finally, a slicing-aided hypersegmentation (SAHS) method is also designed to resize and rescale the output image, increasing the relative size of small targets and improving segmentation accuracy during inference. Extensive experiments on three benchmark datasets demonstrate that the proposed KECS-Net outperforms the state-of-the-art (SOTA) methods in both quantitative metrics and visual quality. Relevant code will be available at <uri>https://github.com/Lilingxiao-image/KECS-Net</uri>","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"23 ","pages":"1-5"},"PeriodicalIF":4.4,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0