Tuple Perturbation-Based Contrastive Learning Framework for Multimodal Remote Sensing Image Semantic Segmentation

Yuanxin Ye; Jinkun Dai; Liang Zhou; Keyi Duan; Ran Tao; Wei Li; Danfeng Hong

IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-15
DOI: 10.1109/TGRS.2025.3542868 · Published: 2025-02-20
https://ieeexplore.ieee.org/document/10896945/
Citation count: 0
Abstract
Deep learning models show promising potential for multimodal remote sensing image semantic segmentation (MRSISS). However, the limited availability of labeled samples for training deep networks significantly constrains the performance of these models. To address this, self-supervised learning (SSL) methods have attracted considerable interest in the remote sensing community. Accordingly, this article proposes a novel multimodal contrastive learning framework based on tuple perturbation, comprising a pretraining stage and a fine-tuning stage. First, a tuple perturbation-based multimodal contrastive learning network (TMCNet) is designed to better capture shared and modality-specific feature representations during pretraining; a tuple perturbation module is introduced to strengthen the network's ability to extract multimodal features by generating more challenging negative samples. In the fine-tuning stage, we develop a simple and effective multimodal semantic segmentation network (MSSNet), which reduces noise by exploiting complementary information across modalities to integrate multimodal features more effectively, yielding better semantic segmentation performance. Extensive experiments on two published multimodal image datasets of optical and synthetic aperture radar (SAR) pairs show that the proposed framework achieves superior semantic segmentation performance over current state-of-the-art methods when labeled samples are limited. The source code is available at https://github.com/yeyuanxin110/TMCNet-MSSNet.
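The abstract does not specify the exact form of the tuple perturbation module or the contrastive objective; the following is a minimal, hypothetical sketch of the general idea — a cross-modal InfoNCE-style loss between optical and SAR embeddings, where extra "perturbed" negatives are created by mismatching and noising modality pairs. All function names and the perturbation scheme here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def contrastive_loss_with_perturbed_negatives(z_opt, z_sar, temperature=0.1, rng=None):
    """Cross-modal InfoNCE loss: each optical embedding should match its paired
    SAR embedding. Extra negatives come from perturbed tuples (shuffled,
    noised optical/SAR pairings) -- a stand-in for the paper's tuple
    perturbation module, whose exact design the abstract does not give."""
    rng = np.random.default_rng(0) if rng is None else rng
    z_opt = l2_normalize(z_opt)
    z_sar = l2_normalize(z_sar)
    # Perturbed tuples: mismatched, noised SAR embeddings act as harder negatives.
    perm = rng.permutation(len(z_sar))
    z_neg = l2_normalize(z_sar[perm] + 0.1 * rng.standard_normal(z_sar.shape))
    # Row i: similarity of optical i to every SAR sample and every perturbed negative.
    logits = np.concatenate([z_opt @ z_sar.T, z_opt @ z_neg.T], axis=1) / temperature
    idx = np.arange(len(z_opt))                   # positives sit on the diagonal
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[idx, idx].mean()
```

In this sketch, enlarging the negative pool with perturbed pairings makes the softmax denominator harder to dominate, which is the intuition the abstract gives for why more complex negatives improve multimodal feature extraction.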
About the journal:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.