MVF-GNN: Multi-View Fusion With GNN for 3D Semantic Segmentation

IEEE Robotics and Automation Letters · Impact Factor 5.3 · JCR Q2 (Robotics) · CAS Region 2 (Computer Science) · Published: 2025-01-27 · DOI: 10.1109/LRA.2025.3534693
Zhenxiang Du;Minglun Ren;Wei Chu;Nengying Chen
IEEE Robotics and Automation Letters, vol. 10, no. 4, pp. 3262-3269. Citations: 0.

Abstract

Due to the high cost of obtaining 3D annotations and the accumulation of many 2D datasets with 2D semantic labels, deploying multi-view 2D images for 3D semantic segmentation has attracted widespread attention. Fusing multi-view information requires establishing local-to-local as well as local-to-global dependencies among multiple views. However, previous methods based on 2D annotation supervision cannot model local-to-local and local-to-global dependencies simultaneously. In this letter, we propose a novel multi-view fusion framework with graph neural networks (MVF-GNN) for multi-view interaction and integration. First, a multi-view graph is constructed from the associated pixels across multiple views. Then, a multi-scale multi-view graph attention network (MSMVGAT) module performs graph reasoning on these graphs at different scales. Finally, an attention multi-view graph aggregation (AMVGA) module learns the importance of different views and integrates multi-view features. Experiments on the ScanNetv2 benchmark dataset show that our method outperforms state-of-the-art 2D/3D semantic segmentation methods based on 2D annotation supervision.
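To make the aggregation step concrete, here is a minimal sketch of attention-weighted fusion of per-view features, in the spirit of the AMVGA module described above. This is an illustration only, not the paper's implementation: the function name, the single scoring vector, and the feature dimensions are all hypothetical stand-ins for the learned attention head in the actual network.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_view_aggregation(view_feats, w_att):
    """Fuse per-view features for one 3D point via attention weights.

    view_feats: (V, D) array, one D-dim feature per view observing the point.
    w_att:      (D,) scoring vector (a stand-in for a learned attention head).
    Returns a single (D,) fused feature.
    """
    scores = view_feats @ w_att   # (V,) unnormalized importance score per view
    alpha = softmax(scores)       # view weights, non-negative and summing to 1
    return alpha @ view_feats     # attention-weighted sum over views

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))   # 4 views observing a point, 8-dim features
fused = attention_view_aggregation(feats, rng.standard_normal(8))
print(fused.shape)  # (8,)
```

In the paper, this kind of weighting would follow the MSMVGAT graph-reasoning stage, so `view_feats` would already carry cross-view context rather than being independent per-view descriptors.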
Source journal: IEEE Robotics and Automation Letters (Computer Science - Computer Science Applications)
CiteScore: 9.60
Self-citation rate: 15.40%
Articles per year: 1428
Journal scope: The scope of this journal is to publish peer-reviewed articles that provide a timely and concise account of innovative research ideas and application results, reporting significant theoretical findings and application case studies in areas of robotics and automation.
Latest articles in this journal:
WearaCob: A Unified Bidirectional Framework for Adaptive Synergy Between Wearable and Collaborative Robotics
NMPC-Augmented Visual Navigation and Safe Learning Control for Large-Scale Mobile Robots
Sandwich Jamming-Based Variable Stiffness Structures With User-Defined Degrees of Freedom for Soft Wearable Devices
Sequential Probabilistic Descriptor via Uncertainty-Aware Multi-Modal Fusion for Safety-Critical Place Recognition
AVO-QP: Task-Adaptive Real-Time Obstacle Avoidance for Redundant Manipulators on Edge Platforms