
Latest publications in IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society

TinyRS-R1: Compact Vision Language Model for Remote Sensing
Aybora Köksal;A. Aydın Alatan
Remote sensing (RS) applications often rely on edge hardware that cannot host today's 7B-parameter vision-language models. This letter presents TinyRS, the first 2B-parameter vision language model (VLM) optimized for RS, and TinyRS-R1, its reasoning-augmented variant. Based on Qwen2-VL-2B, TinyRS is trained via a four-stage pipeline: pretraining on million-scale satellite images, instruction tuning, fine-tuning with chain-of-thought (CoT) annotations from a new reasoning dataset, and group relative policy optimization (GRPO)-based alignment. TinyRS-R1 matches or surpasses recent 7B RS models in classification, visual question answering (VQA), grounding, and open-ended QA while requiring one-third the memory and latency. CoT reasoning improves grounding and scene understanding, while TinyRS excels at concise, low-latency VQA. TinyRS-R1 is the first domain-specialized small VLM with GRPO-aligned CoT reasoning for general-purpose RS. The code, models, and caption datasets are available at https://github.com/aybora/TinyRS
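The GRPO alignment stage named in the abstract centers on group-relative advantages: several answers are sampled per prompt, and each answer's reward is normalized against its own group. A minimal sketch of that normalization, assuming scalar rewards per completion (the letter does not publish its reward functions):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Group-relative advantages: normalize each completion's reward by the
    mean/std of its group (one group = several completions for the same
    prompt). rewards shape: (num_groups, group_size)."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

# Example: 2 prompts, 4 sampled answers each, scored by some task reward.
rewards = torch.tensor([[1.0, 0.0, 0.5, 1.0],
                        [0.2, 0.9, 0.1, 0.4]])
print(grpo_advantages(rewards))
```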
Citations: 0
Multiscale Window Attention Channel Enhanced for Remote Sensing Image Super-Resolution
Jingfan Wang;Wen Lu;Zeming Zhang;Zhaoyang Wang;Zhe Li
Transformer-based methods for remote sensing image super-resolution (SR) face challenges in reconstructing high-frequency textures due to the interference from large flat regions, such as farmlands and water bodies. To address these limitations, we propose a channel-enhanced multiscale window attention mechanism, which is designed to minimize the impact of flat regions on high-frequency area reconstruction while effectively utilizing the intrinsic multiscale features of remote sensing images. To better capture the multiscale features of remote sensing images, we introduce a series of depthwise separable convolution kernels of varying sizes during the shallow feature extraction stage. Experimental results demonstrate that the proposed method achieves superior peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) scores across multiple remote sensing benchmark datasets and scaling factors, validating its effectiveness.
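The shallow-stage multiscale extractor described above can be pictured as parallel depthwise separable convolutions with different kernel sizes. A sketch of that idea in PyTorch; the branch count, kernel sizes, and concatenate-then-fuse rule are assumptions, not the letter's exact design:

```python
import torch
import torch.nn as nn

class MultiScaleDWConv(nn.Module):
    """Parallel depthwise-separable convolutions at several kernel sizes,
    fused by a 1x1 convolution."""
    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in kernel_sizes:
            self.branches.append(nn.Sequential(
                # depthwise: one k x k filter per channel
                nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels),
                # pointwise: 1x1 mixing across channels
                nn.Conv2d(channels, channels, 1),
            ))
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 32, 64, 64)
print(MultiScaleDWConv(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```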
Citations: 0
Integrating Global and Local Information for Remote Sensing Image–Text Retrieval
Ziyun Chen;Fan Liu;Zhangqingyun Guan;Qian Zhou;Xiaocong Zhou;Chuanyi Zhang
Pretrained vision–language models (VLMs) have demonstrated promising performance in remote sensing (RS) image–text retrieval tasks. However, the scarcity of high-quality image–text datasets remains a challenge in fine-tuning VLMs for RS: captions in existing datasets tend to be uniform and lack detail. To make full use of the rich detail in RS images, we propose a method to fine-tune VLMs. We first construct a new visual–language dataset that balances both global and local information for RS (GLRS) image–text retrieval. Specifically, a multimodal large language model (MLLM) is used to generate captions for local patches and global captions for the entire image. To effectively use local information, we propose a global and local image captioning method (GLCap). With a large language model (LLM), we further obtain higher quality captions by merging both global and local captions. Finally, we fine-tune the weights of RS-M-contrastive language image pretraining (CLIP) with a progressive global–local fine-tuning strategy on GLRS. Experimental results demonstrate that our method outperforms state-of-the-art (SoTA) approaches on two common RS image–text retrieval downstream tasks. Our code and dataset are available at https://github.com/hhu-czy/GLRS
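At retrieval time, one way to exploit both caption granularities is to mix a global image–text similarity with the best local-patch similarity. A hypothetical scoring sketch; the letter instead fine-tunes RS-M-CLIP, and the mixing weight, max-pooling rule, and embedding shapes here are all assumptions:

```python
import torch

def global_local_score(img_global, img_patches, txt, alpha=0.5):
    """Mix a global cosine similarity with the best-matching local patch
    similarity. img_global: (D,), img_patches: (P, D), txt: (D,),
    all assumed L2-normalized."""
    g = img_global @ txt           # global image-text similarity
    l = (img_patches @ txt).max()  # best local patch similarity
    return alpha * g + (1 - alpha) * l

D, P = 512, 9
norm = lambda t: t / t.norm(dim=-1, keepdim=True)
score = global_local_score(norm(torch.randn(D)),
                           norm(torch.randn(P, D)),
                           norm(torch.randn(D)))
print(float(score))
```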
Citations: 0
Semi-Supervised Triple-GAN With Similarity Constraint for Automatic Underground Object Classification Using Ground Penetrating Radar Data
Li Liu;Yongcheng Zhou;Hang Xu;Jingxia Li;Jianguo Zhang;Lijun Zhou;Bingjie Wang
Automatic underground object classification based on deep learning (DL) has been widely used in the ground penetrating radar (GPR) field. However, its excellent performance depends heavily on sufficient labeled training data. In GPR applications, large amounts of labeled data are difficult to obtain because manual annotation is time-consuming and experience-dependent. To address the issue of limited labeled data, we propose a novel semi-supervised learning (SSL) method for urban-road underground multiclass object classification. It fully utilizes abundant unlabeled data and limited labeled data to enhance classification performance. We applied a variant of the triple-GAN (TGAN) model and modified it by introducing a similarity constraint, which is associated with GPR data geometric features and helps produce high-quality generated images. Experimental results on laboratory and field data show that it achieves higher accuracy than representative baseline methods under limited labeled data.
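The letter ties its similarity constraint to GPR geometric features (e.g., the hyperbolic signatures of buried objects) but does not spell out the formula here, so the sketch below uses a generic cosine feature-matching penalty added to the generator objective purely as an illustration of where such a term sits:

```python
import torch
import torch.nn.functional as F

def similarity_constraint(fake_feats, real_feats):
    """Illustrative constraint: push mean generated-sample features toward
    mean real-sample features (cosine form is an assumption, not the
    letter's GPR-specific definition)."""
    return 1.0 - F.cosine_similarity(fake_feats.mean(0), real_feats.mean(0), dim=0)

# Generator objective sketch: adversarial term plus weighted similarity term.
lam = 0.1
fake_f, real_f = torch.randn(8, 128), torch.randn(8, 128)
adv_loss = torch.tensor(0.7)  # placeholder for the usual GAN generator loss
total = adv_loss + lam * similarity_constraint(fake_f, real_f)
print(float(total))
```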
Citations: 0
Large-Scale Traveling Ionospheric Disturbances Over North America and Europe During the May 2024 Extreme Geomagnetic Storm
Long Tang;Hong Zhang;Yumei Li;Fan Xu;Fang Zou
This study investigates the large-scale traveling ionospheric disturbances (LSTIDs) over North America and Europe associated with the intense geomagnetic storm in May 2024, utilizing total electron content (TEC) data derived from ground-based Global Navigation Satellite System (GNSS) stations. The findings reveal that the observed LSTIDs in both regions exhibited an unusually prolonged duration, lasting for over 10 h from 17:00 UT on May 10 to 03:30 UT on May 11, 2024. This extended duration may be attributed to the continuous triggering of LSTIDs by auroral energy input during the geomagnetic storm. Additionally, significant differences in propagation characteristics, including velocities, azimuths, wavelengths, and traveling distances of LSTIDs, were observed between the two regions. These disparities in LSTID parameters are likely due to variations in the magnitude of energy input in the polar regions and local time differences in North America (14:00 LT) and Europe (19:00 LT), which cause diurnal electron-density contrast to influence LSTID propagation.
Citations: 0
KD-RSCC: A Karras Diffusion Framework for Efficient Remote Sensing Change Captioning
Xiaofei Yu;Jie Ma;Liqiang Qiao
Remote sensing image change captioning (RSICC) is a challenging task that involves describing surface changes between bitemporal or multitemporal satellite images using natural language. This task requires both fine-grained visual understanding and expressive language generation. Transformer-based and long short-term memory (LSTM)-based models have shown promising results in this domain. However, they may encounter difficulties in generating flexible and diverse captions, particularly when training data are limited or imbalanced. While diffusion models provide richer textual outputs, they are often constrained by long inference times. To address these issues, we propose a novel diffusion-based framework, KD-RSCC, for efficient and expressive remote sensing change captioning. This framework utilizes the Karras sampling method to significantly reduce the number of steps required during inference, while preserving the quality and diversity of the generated captions. In addition, we introduce a large language model (LLM)-based evaluation strategy $\text{G-Eval}_{\text{RSCC}}$ to conduct a more comprehensive assessment of the semantic accuracy, fluency, and linguistic diversity of the generated descriptions. Experimental results demonstrate that KD-RSCC achieves an optimal balance between generation quality and inference speed, enhancing the flexibility and readability of its outputs. The code and supplementary materials are available at https://github.com/Fay-Y/KD_RSCC
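The Karras sampling mentioned above owes its step savings to the noise schedule of Karras et al. (2022): a rho-spaced interpolation between a maximum and minimum noise level that places steps where they matter most. A short sketch of that schedule; the constants are the EDM paper's defaults, not values reported in this letter:

```python
import numpy as np

def karras_sigmas(n_steps: int, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    """Karras et al. (2022) noise schedule: interpolate between sigma_max
    and sigma_min in sigma**(1/rho) space, then append the terminal 0."""
    i = np.arange(n_steps)
    sigmas = (sigma_max ** (1 / rho)
              + i / (n_steps - 1) * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))) ** rho
    return np.append(sigmas, 0.0)

print(karras_sigmas(10).round(3))  # densely spaced at low noise, sparse at high
```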
Citations: 0
RoGLSNet: An Efficient Global–Local Scene Awareness Network With Rotary Position Embedding for Remote Image Segmentation
Xiaosheng Yu;Weiqi Bai;Jubo Chen;Jiawei Huang;Zhuoqun Fang;Zhaokui Li
Accurate segmentation of very high-resolution remote sensing images is vital for downstream tasks. Most semantic segmentation methods fail to fully consider the inherent characteristics of the images, such as intricate backgrounds, significant intraclass variance, and spatial interdependence of geographic object distribution. To address these challenges, we propose an efficient global–local scene awareness network with rotary position embedding (RoGLSNet). Specifically, we introduce the dynamic global filter (DGF) module to adaptively select frequency components, thereby mitigating interference from background noise. For high intraclass variance, the class center aware block (CCAB) performs class-level contextual modeling with spatial information integration. Additionally, the rotary position embedding (RoPE) is incorporated into vanilla attention to indirectly model the positional and distance relationships of geographic target objects. Extensive experimental results on two widely used datasets demonstrate that RoGLSNet outperforms the state-of-the-art (SOTA) segmentation methods. The code is available at https://github.com/bai101315/RoGLSNet
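The rotary position embedding (RoPE) used above rotates each even/odd feature pair of a query or key by a position-dependent angle, so that dot-product attention depends on relative offsets. A minimal 1-D sketch; the letter applies RoPE to attention over image tokens, and the dimensions and base below are illustrative:

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate each (even, odd) feature pair of x by a position-dependent
    angle. x: (seq_len, dim) with even dim; 1-D positions assumed."""
    seq_len, dim = x.shape
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)            # (L, 1)
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)    # (D/2,)
    ang = pos * freqs                                                        # (L, D/2)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(16, 64)
print(apply_rope(q).shape)  # torch.Size([16, 64]); apply to keys the same way
```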
Citations: 0
Three-Dimensional Controlled-Source Electromagnetic Modeling Using Octree-Based Spectral Element Method
Jintong Xu;Xiao Xiao;Jingtian Tang
The controlled-source electromagnetic (CSEM) method is an important geophysical tool for sensing and studying subsurface conductivity structures. Advanced forward modeling techniques are crucial for the inversion and imaging of CSEM data. In this letter, we develop an accurate and efficient 3-D forward modeling algorithm for CSEM problems, combining the spectral element method (SEM) with octree meshes. The SEM's high-order basis functions provide accurate CSEM responses, and the octree meshes enable local refinement, allowing models to be discretized with fewer elements than the structured hexahedral meshes used in conventional SEM while retaining the capability to handle complex models. Two synthetic examples verify the accuracy and efficiency of the algorithm, and its utility is further demonstrated on a realistic model with complex geometry.
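Octree refinement splits a hexahedral cell into eight children wherever finer resolution is needed, for instance near sources, receivers, or strong conductivity contrasts. A bare-bones sketch of the data structure; the coupling of such cells to high-order SEM bases is the letter's contribution and is not shown:

```python
from dataclasses import dataclass, field

@dataclass
class OctreeCell:
    """Minimal octree cell: a cube identified by its min corner and edge
    length, optionally split into 8 equal children."""
    origin: tuple   # (x, y, z) of the cell's min corner, in meters
    size: float
    children: list = field(default_factory=list)

    def refine(self):
        h = self.size / 2.0
        x0, y0, z0 = self.origin
        self.children = [OctreeCell((x0 + i * h, y0 + j * h, z0 + k * h), h)
                         for i in (0, 1) for j in (0, 1) for k in (0, 1)]

root = OctreeCell((0.0, 0.0, 0.0), 1000.0)
root.refine()                 # refine near a hypothetical target region
root.children[0].refine()     # one more level in the corner cell
print(len(root.children), root.children[0].children[0].size)  # 8 250.0
```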
Citations: 0
Fluid Mobility Attribute Extraction Based on Optimized Second-Order Synchroextracting Wavelet Transform
Yu Wang;Xiao Pan;Kang Shao;Ning Wang;Yuqiang Zhang;Xinyu Zhang;Chaoyang Lei;Xiaotao Wen
The resolution of time–frequency-based seismic attributes depends mainly on the underlying time–frequency analysis tool. This study proposes an improved second-order synchroextracting wavelet transform (SSEWT) by optimizing the scale parameters and the extraction scheme. Time–frequency computation on synthetic data shows a 5% improvement in efficiency. We then apply the proposed transform to fluid mobility calculation on field data, yielding a 5.6% increase in computational efficiency and an 11.26% improvement in resolution, demonstrating its superior performance. Field data tests show that the proposed transform and the resulting fluid mobility attribute outperform conventional methods. Despite remaining computational challenges, the method offers significant advances in reservoir characterization and fluid detection.
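Synchroextraction keeps a time–frequency coefficient only where the locally estimated instantaneous frequency lands on that coefficient's own frequency bin, which is what sharpens the resulting attribute maps. A toy illustration on an STFT rather than the letter's second-order wavelet transform; the chirp, window parameters, and half-bin tolerance are all assumptions:

```python
import numpy as np
from scipy.signal import stft

# Toy synchroextraction: estimate each bin's instantaneous frequency from
# its phase derivative over time, then zero every off-ridge coefficient.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
sig = np.cos(2 * np.pi * (50 * t + 40 * t ** 2))   # chirp: 50 -> 130 Hz
f, tt, S = stft(sig, fs=fs, nperseg=128, noverlap=120)

phase = np.unwrap(np.angle(S), axis=1)
inst_f = np.gradient(phase, tt, axis=1) / (2 * np.pi)  # Hz, per (bin, frame)
df = f[1] - f[0]
mask = np.abs(inst_f - f[:, None]) < df / 2            # keep on-ridge bins only
S_extracted = np.where(mask, S, 0.0)
print(S_extracted.shape, int(mask.sum()))
```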
Citations: 0
AFIMNet: An Adaptive Feature Interaction Network for Remote Sensing Scene Classification
Xiao Wang;Yisha Sun;Pan He
Convolutional neural network (CNN)-based methods have been widely applied in remote sensing scene classification (RSSC) and have achieved remarkable classification results. However, traditional CNN methods have limitations in extracting global features and capturing image semantics, especially in complex remote sensing (RS) scenes. The Transformer can directly capture global features through its self-attention mechanism, but it is weaker at handling local details. Methods that directly combine CNN and Transformer features suffer from feature imbalance and introduce redundant information. To address these issues, we propose AFIMNet, an adaptive feature interaction network for RSSC. First, we use a dual-branch network structure (based on ResNet34 and Swin-S) to extract local and global features from RS scene images. Second, we design an adaptive feature interaction module (AFIM) that effectively enhances the interaction and correlation between local and global features. Third, we use a spatial-channel fusion module (SCFM) to aggregate the interacted features, further strengthening feature representation. The proposed method is validated on three public RS datasets, and experimental results show that AFIMNet has stronger feature representation ability than current popular RS image classification methods, significantly improving classification accuracy. The source code will be publicly accessible at https://github.com/xavi276310/AFIMNet
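One simple way to realize adaptive interaction between a CNN (local) branch and a Transformer (global) branch is a learned per-channel gate over the two feature maps. The sketch below shows only that weighting idea and is not the letter's AFIM, whose design is more elaborate:

```python
import torch
import torch.nn as nn

class AdaptiveFusionGate(nn.Module):
    """Per-channel gate deciding how much of the local vs. global branch to
    keep at each channel; gate weights are learned from both branches."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),             # squeeze spatial dims
            nn.Conv2d(2 * channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, local_feat, global_feat):
        g = self.gate(torch.cat([local_feat, global_feat], dim=1))  # (B, C, 1, 1)
        return g * local_feat + (1 - g) * global_feat

local_f = torch.randn(2, 64, 32, 32)
global_f = torch.randn(2, 64, 32, 32)
print(AdaptiveFusionGate(64)(local_f, global_f).shape)  # torch.Size([2, 64, 32, 32])
```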
Citations: 0