A Third-Modality Collaborative Learning Approach for Visible-Infrared Vessel Reidentification

IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (IF 4.7, JCR Q1, CAS Region 2 in Earth Science, Engineering, Electrical & Electronic) · Pub Date: 2024-10-18 · DOI: 10.1109/JSTARS.2024.3479423
Qi Zhang;Yiming Yan;Long Gao;Congan Xu;Nan Su;Shou Feng

Abstract

Visible-infrared reidentification (VI-ReID) of vessels is an important task in the application of UAV remote sensing data; it aims to retrieve, from image galleries containing vessels of different modalities, the images that share the identity of a given query vessel. One of its main challenges is the large modality gap between visible (VIS) and infrared (IR) images. Some state-of-the-art methods design complex networks or generative methods to mitigate this gap, but they ignore the highly nonlinear relationship between the two modalities. To address this problem, we propose a nonlinear Third-Modality Generator (TMG) that produces third-modality images, which are learned jointly with the original two modalities. In addition, to make the network focus on salient image regions and capture rich local information, a Multidimensional Attention Guidance (MAG) module is proposed to guide attention along both the channel and spatial dimensions. By integrating TMG, MAG, and three designed losses (Generative Consistency Loss, Cross Modality Loss, and Modality Internal Loss) into an end-to-end learning framework, we obtain the Third-Modality Collaborative Network (TMCN), which has strong discriminative ability and significantly reduces the modality gap between VIS and IR. Furthermore, because vessel data for the VI-ReID task are scarce, we have collected an airborne vessel cross-modality reidentification dataset (AVC-ReID) to promote the practical application of VI-ReID. Extensive experiments on AVC-ReID show that the proposed TMCN outperforms several other state-of-the-art methods.
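The abstract states only that MAG guides attention along both the channel and the spatial dimensions; the module's internal design is not given. As a rough, hypothetical illustration of what "attention guidance in both channel and spatial dimensions" can mean, here is a minimal NumPy sketch using sigmoid gating applied sequentially (the gating functions, pooling choices, and ordering are all assumptions, not the paper's implementation):

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: feature map of shape (C, H, W).
    # Global-average-pool each channel, then squash to (0, 1)
    # to obtain one multiplicative weight per channel.
    weights = _sigmoid(x.mean(axis=(1, 2)))     # (C,)
    return x * weights[:, None, None]

def spatial_attention(x):
    # Average across channels, then squash to (0, 1)
    # to obtain one multiplicative weight per spatial location.
    weights = _sigmoid(x.mean(axis=0))          # (H, W)
    return x * weights[None, :, :]

def multidimensional_attention(x):
    # Channel guidance followed by spatial guidance; a learned module
    # would replace the fixed pooling/sigmoid with trainable layers.
    return spatial_attention(channel_attention(x))
```

Because both gates lie in (0, 1), the output keeps the input's shape while re-weighting (never amplifying) each channel and each location, which is the basic mechanism a channel-and-spatial attention module refines with learned parameters.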
Source journal metrics: CiteScore 9.30 · Self-citation rate 10.90% · Articles per year 563 · Review time 4.7 months
About the journal: The IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing addresses the growing field of applications in Earth observations and remote sensing, and also provides a venue for the rapidly expanding special issues sponsored by the IEEE Geoscience and Remote Sensing Society. The journal draws upon the experience of the highly successful IEEE Transactions on Geoscience and Remote Sensing and provides a complementary medium for the wide range of topics in applied Earth observations. The "Applications" area encompasses the societal benefit areas of the Global Earth Observation System of Systems (GEOSS) program. Through deliberations over two years, ministers from 50 countries agreed to identify nine areas where Earth observation could positively impact the quality of life and health of their respective countries. Some of these are areas not traditionally addressed in the IEEE context, including biodiversity, health, and climate. Yet it is the skill sets of IEEE members, in areas such as observations, communications, computers, signal processing, standards, and ocean engineering, that form the technical underpinnings of GEOSS. Thus, the journal attracts a broad range of interests that serves present members in new ways and expands IEEE visibility into new areas.