SAFIT: Segmentation-Aware Scene Flow with Improved Transformer

Yukang Shi, Kaisheng Ma
{"title":"SAFIT: Segmentation-Aware Scene Flow with Improved Transformer","authors":"Yukang Shi, Kaisheng Ma","doi":"10.1109/icra46639.2022.9811747","DOIUrl":null,"url":null,"abstract":"Scene flow prediction is a challenging task that aims at jointly estimating the 3D structure and 3D motion of dynamic scenes. The previous methods concentrate more on point-wise estimation instead of considering the correspondence between objects as well as lacking the sensation of high-level semantic knowledge. In this paper, we propose a concise yet effective method for scene flow prediction. The key idea is to extend the view of all points for computing point cloud features into object-level, thus simultaneously modeling the relationships of the object-level and point-level via an improved transformer. In addition, we introduce a novel unsupervised loss called segmentation-aware loss, which can model semanticaware details to help predict scene flow more accurately and robustly. Since this loss can be trained without any ground truth, it can be used in both supervised training and self-supervised training. Experiments on both supervised training and self-supervised training demonstrate the effectiveness of our method. On supervised training, 3.8%, 22.58%, 10.90% and 21.82 % accuracy boosts than FLOT [23] can be observed on FT3Ds, KITTIs, FT3Do and KITTIo datasets. On self-supervised scheme, 48.23% and 48.96% accuracy boost than PointPWC-Net [40] can be observed on KITTIo and KITTIs datasets.","PeriodicalId":341244,"journal":{"name":"2022 International Conference on Robotics and Automation (ICRA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Robotics and Automation (ICRA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icra46639.2022.9811747","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Scene flow prediction is a challenging task that aims to jointly estimate the 3D structure and 3D motion of dynamic scenes. Previous methods concentrate on point-wise estimation, neglecting correspondences between objects and lacking high-level semantic knowledge. In this paper, we propose a concise yet effective method for scene flow prediction. The key idea is to extend the per-point view used for computing point cloud features to the object level, simultaneously modeling object-level and point-level relationships via an improved transformer. In addition, we introduce a novel unsupervised loss, the segmentation-aware loss, which models semantic-aware details to help predict scene flow more accurately and robustly. Since this loss requires no ground truth, it can be used in both supervised and self-supervised training. Experiments in both settings demonstrate the effectiveness of our method. With supervised training, accuracy gains of 3.8%, 22.58%, 10.90%, and 21.82% over FLOT [23] are observed on the FT3Ds, KITTIs, FT3Do, and KITTIo datasets. With the self-supervised scheme, accuracy gains of 48.23% and 48.96% over PointPWC-Net [40] are observed on the KITTIo and KITTIs datasets.
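
The abstract does not spell out the segmentation-aware loss, but its core idea, an unsupervised objective that needs no ground-truth flow, can be illustrated with a short sketch. The snippet below is a hypothetical minimal example, assuming point clusters are obtained with DBSCAN and that predicted flow is encouraged to be consistent within each cluster; the names `segmentation_aware_loss`, `eps`, and `min_samples` are illustrative and not taken from the paper.

```python
# Hypothetical sketch of an unsupervised, segmentation-aware consistency loss.
# Assumption: points are grouped without labels (here via sklearn's DBSCAN),
# and the loss penalizes deviation of each point's predicted flow from the
# mean flow of its cluster. This illustrates the general idea only; it is not
# SAFIT's exact formulation.
import torch
from sklearn.cluster import DBSCAN


def segmentation_aware_loss(points: torch.Tensor, flow: torch.Tensor,
                            eps: float = 0.5, min_samples: int = 8) -> torch.Tensor:
    """points: (N, 3) source point cloud; flow: (N, 3) predicted scene flow."""
    # Unsupervised grouping of the source cloud (no ground truth needed).
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        points.detach().cpu().numpy())
    labels = torch.from_numpy(labels).to(points.device)

    loss = flow.new_tensor(0.0)
    n_clusters = 0
    for lab in labels.unique():
        if lab.item() == -1:           # DBSCAN marks noise points with -1; skip them
            continue
        mask = labels == lab
        cluster_flow = flow[mask]      # (M, 3) flows of one putative object
        # Encourage points on the same (assumed rigid) object to move coherently.
        diff = cluster_flow - cluster_flow.mean(dim=0, keepdim=True)
        loss = loss + diff.pow(2).sum(dim=1).mean()
        n_clusters += 1
    return loss / max(n_clusters, 1)


# Usage example with random data:
# pts = torch.randn(2048, 3)
# pred_flow = torch.randn(2048, 3, requires_grad=True)
# loss = segmentation_aware_loss(pts, pred_flow)
# loss.backward()
```

Because the clustering step uses only the input geometry, such a loss can be added on top of either a supervised or a self-supervised training objective, which matches how the abstract describes its use.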