T-Net++: Effective Permutation-Equivariance Network for Two-View Correspondence Pruning

Guobao Xiao;Xin Liu;Zhen Zhong;Xiaoqin Zhang;Jiayi Ma;Haibin Ling
{"title":"T-Net++:用于双视图对应性剪枝的有效换向-方差网络","authors":"Guobao Xiao;Xin Liu;Zhen Zhong;Xiaoqin Zhang;Jiayi Ma;Haibin Ling","doi":"10.1109/TPAMI.2024.3444457","DOIUrl":null,"url":null,"abstract":"We propose a conceptually novel, flexible, and effective framework (named T-Net++) for the task of two-view correspondence pruning. T-Net++ comprises two unique structures: the \n<inline-formula><tex-math>$\\hbox{``}-$</tex-math></inline-formula>\n'' structure and the \n<inline-formula><tex-math>$\\hbox{``}|$</tex-math></inline-formula>\n'' structure. The \n<inline-formula><tex-math>$\\hbox{``}-$</tex-math></inline-formula>\n'' structure utilizes an iterative learning strategy to process correspondences, while the \n<inline-formula><tex-math>$\\hbox{``}|$</tex-math></inline-formula>\n'' structure integrates all feature information of the \n<inline-formula><tex-math>$\\hbox{``}-$</tex-math></inline-formula>\n'' structure and produces inlier weights. Moreover, within the \n<inline-formula><tex-math>$\\hbox{``}|$</tex-math></inline-formula>\n'' structure, we design a new Local-Global Attention Fusion module to fully exploit valuable information obtained from concatenating features through channel-wise and spatial-wise relationships. Furthermore, we develop a Channel-Spatial Squeeze-and-Excitation module, a modified network backbone that enhances the representation ability of important channels and correspondences through the squeeze-and-excitation operation. T-Net++ not only preserves the permutation-equivariance manner for correspondence pruning, but also gathers rich contextual information, thereby enhancing the effectiveness of the network. Experimental results demonstrate that T-Net++ outperforms other state-of-the-art correspondence pruning methods on various benchmarks and excels in two extended tasks.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":"46 12","pages":"10629-10644"},"PeriodicalIF":0.0000,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"T-Net++: Effective Permutation-Equivariance Network for Two-View Correspondence Pruning\",\"authors\":\"Guobao Xiao;Xin Liu;Zhen Zhong;Xiaoqin Zhang;Jiayi Ma;Haibin Ling\",\"doi\":\"10.1109/TPAMI.2024.3444457\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We propose a conceptually novel, flexible, and effective framework (named T-Net++) for the task of two-view correspondence pruning. T-Net++ comprises two unique structures: the \\n<inline-formula><tex-math>$\\\\hbox{``}-$</tex-math></inline-formula>\\n'' structure and the \\n<inline-formula><tex-math>$\\\\hbox{``}|$</tex-math></inline-formula>\\n'' structure. The \\n<inline-formula><tex-math>$\\\\hbox{``}-$</tex-math></inline-formula>\\n'' structure utilizes an iterative learning strategy to process correspondences, while the \\n<inline-formula><tex-math>$\\\\hbox{``}|$</tex-math></inline-formula>\\n'' structure integrates all feature information of the \\n<inline-formula><tex-math>$\\\\hbox{``}-$</tex-math></inline-formula>\\n'' structure and produces inlier weights. Moreover, within the \\n<inline-formula><tex-math>$\\\\hbox{``}|$</tex-math></inline-formula>\\n'' structure, we design a new Local-Global Attention Fusion module to fully exploit valuable information obtained from concatenating features through channel-wise and spatial-wise relationships. 
Furthermore, we develop a Channel-Spatial Squeeze-and-Excitation module, a modified network backbone that enhances the representation ability of important channels and correspondences through the squeeze-and-excitation operation. T-Net++ not only preserves the permutation-equivariance manner for correspondence pruning, but also gathers rich contextual information, thereby enhancing the effectiveness of the network. Experimental results demonstrate that T-Net++ outperforms other state-of-the-art correspondence pruning methods on various benchmarks and excels in two extended tasks.\",\"PeriodicalId\":94034,\"journal\":{\"name\":\"IEEE transactions on pattern analysis and machine intelligence\",\"volume\":\"46 12\",\"pages\":\"10629-10644\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on pattern analysis and machine intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10637773/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10637773/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

We propose a conceptually novel, flexible, and effective framework (named T-Net++) for the task of two-view correspondence pruning. T-Net++ comprises two unique structures: the "-" structure and the "|" structure. The "-" structure utilizes an iterative learning strategy to process correspondences, while the "|" structure integrates all feature information of the "-" structure and produces inlier weights. Moreover, within the "|" structure, we design a new Local-Global Attention Fusion module to fully exploit valuable information obtained from concatenating features through channel-wise and spatial-wise relationships. Furthermore, we develop a Channel-Spatial Squeeze-and-Excitation module, a modified network backbone that enhances the representation ability of important channels and correspondences through the squeeze-and-excitation operation. T-Net++ not only preserves permutation equivariance for correspondence pruning, but also gathers rich contextual information, thereby enhancing the effectiveness of the network. Experimental results demonstrate that T-Net++ outperforms other state-of-the-art correspondence pruning methods on various benchmarks and excels in two extended tasks. Our code will be released at https://github.com/guobaoxiao/T-Net.
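The abstract describes two ingredients concretely enough to sketch: per-correspondence processing with shared weights (which is what keeps the network permutation-equivariant) and a squeeze-and-excitation gate applied along both the channel axis and the correspondence (spatial) axis. The PyTorch sketch below is a minimal illustration under those assumptions only; the class names, layer sizes, and reduction ratio are hypothetical and do not reproduce the authors' released T-Net++ code.

```python
# Minimal sketch (not the authors' implementation) of a channel-spatial
# squeeze-and-excitation block and a permutation-equivariant pruning head.
import torch
import torch.nn as nn


class ChannelSpatialSE(nn.Module):
    """Squeeze-and-excitation over channels and over correspondences (spatial axis)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel branch: squeeze over the N correspondences, excite the C channels.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
        # Spatial branch: squeeze over channels, excite each correspondence.
        self.spatial_fc = nn.Sequential(nn.Linear(channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) features for N putative correspondences.
        channel_gate = self.channel_fc(x.mean(dim=1, keepdim=True))  # (B, 1, C)
        spatial_gate = self.spatial_fc(x)                            # (B, N, 1)
        return x * channel_gate * spatial_gate


class CorrespondencePruner(nn.Module):
    """Permutation-equivariant pruning head: shared per-point MLP plus inlier-weight output."""

    def __init__(self, channels: int = 128):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(4, channels), nn.ReLU(inplace=True))
        self.se = ChannelSpatialSE(channels)
        self.weight_head = nn.Linear(channels, 1)

    def forward(self, correspondences: torch.Tensor) -> torch.Tensor:
        # correspondences: (B, N, 4) normalized (x1, y1, x2, y2) putative matches.
        feat = self.embed(correspondences)           # shared weights -> equivariant
        feat = self.se(feat)                         # re-weight channels and matches
        logits = self.weight_head(feat).squeeze(-1)  # (B, N)
        return torch.sigmoid(logits)                 # per-correspondence inlier weight


if __name__ == "__main__":
    pruner = CorrespondencePruner()
    matches = torch.randn(2, 500, 4)   # 2 image pairs, 500 putative matches each
    weights = pruner(matches)          # (2, 500) inlier probabilities
    print(weights.shape)
```

Because every layer acts on each correspondence independently and the only pooling (the channel squeeze) is a permutation-invariant mean, reordering the input matches simply reorders the output weights, which is the equivariance property the abstract refers to.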