Monocular Robot Navigation with Self-Supervised Pretrained Vision Transformers

Miguel A. Saavedra-Ruiz, Sacha Morin, L. Paull
{"title":"Monocular Robot Navigation with Self-Supervised Pretrained Vision Transformers","authors":"Miguel A. Saavedra-Ruiz, Sacha Morin, L. Paull","doi":"10.1109/CRV55824.2022.00033","DOIUrl":null,"url":null,"abstract":"In this work, we consider the problem of learning a perception model for monocular robot navigation using few annotated images. Using a Vision Transformer (ViT) pretrained with a label-free self-supervised method, we successfully train a coarse image segmentation model for the Duckietown environment using 70 training images. Our model performs coarse image segmentation at the $8\\times 8$ patch level, and the inference resolution can be adjusted to balance prediction granularity and real-time perception constraints. We study how best to adapt a ViT to our task and environment, and find that some lightweight architectures can yield good single-image segmentations at a usable frame rate, even on CPU. The resulting perception model is used as the backbone for a simple yet robust visual servoing agent, which we deploy on a differential drive mobile robot to perform two tasks: lane following and obstacle avoidance.","PeriodicalId":131142,"journal":{"name":"2022 19th Conference on Robots and Vision (CRV)","volume":"480 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 19th Conference on Robots and Vision (CRV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CRV55824.2022.00033","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

In this work, we consider the problem of learning a perception model for monocular robot navigation using few annotated images. Using a Vision Transformer (ViT) pretrained with a label-free self-supervised method, we successfully train a coarse image segmentation model for the Duckietown environment using 70 training images. Our model performs coarse image segmentation at the $8\times 8$ patch level, and the inference resolution can be adjusted to balance prediction granularity and real-time perception constraints. We study how best to adapt a ViT to our task and environment, and find that some lightweight architectures can yield good single-image segmentations at a usable frame rate, even on CPU. The resulting perception model is used as the backbone for a simple yet robust visual servoing agent, which we deploy on a differential drive mobile robot to perform two tasks: lane following and obstacle avoidance.
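In code, the approach amounts to a lightweight per-patch classifier on top of frozen self-supervised ViT features. Below is a minimal sketch, assuming a DINO-pretrained ViT-S/8 from torch.hub as the label-free backbone; the linear head, the `NUM_CLASSES` value, and the class names are illustrative assumptions, not the paper's exact architecture. In the paper's setting, such a head would be fit on the ~70 annotated Duckietown images.

```python
# Minimal sketch (assumptions noted above): coarse segmentation at the 8x8 patch
# level using a self-supervised (DINO) ViT backbone and a per-patch linear head.
import torch
import torch.nn as nn

backbone = torch.hub.load('facebookresearch/dino:main', 'dino_vits8')  # label-free pretraining
backbone.eval()

NUM_CLASSES = 3  # hypothetical: e.g. background / lane / obstacle
head = nn.Linear(backbone.embed_dim, NUM_CLASSES)  # would be trained on the ~70 labeled images

@torch.no_grad()
def segment(image: torch.Tensor) -> torch.Tensor:
    """image: (B, 3, H, W) with H, W divisible by 8.
    Returns (B, NUM_CLASSES, H // 8, W // 8) patch-level logits."""
    B, _, H, W = image.shape
    tokens = backbone.get_intermediate_layers(image, n=1)[0]  # (B, 1 + N, D)
    patch_tokens = tokens[:, 1:, :]                           # drop the [CLS] token
    logits = head(patch_tokens)                               # (B, N, NUM_CLASSES)
    return logits.transpose(1, 2).reshape(B, NUM_CLASSES, H // 8, W // 8)

# The backbone interpolates positional embeddings, so the input can be downscaled
# at inference time to trade prediction granularity for speed (e.g. real-time CPU use).
print(segment(torch.randn(1, 3, 240, 320)).shape)  # torch.Size([1, 3, 30, 40])
```

Because each 8×8 patch receives exactly one prediction, lowering the input resolution directly reduces the number of patches and hence latency, which is the granularity/speed trade-off the abstract describes.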
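The abstract also describes a visual servoing agent built on this perception model. The sketch below is a hypothetical controller, not the paper's control law: it steers a differential drive robot toward the horizontal centroid of lane patches and slows down when obstacle patches appear in the lower half of the image. The class indices and gains (`LANE`, `OBSTACLE`, `V_MAX`, `K_P`) are invented for illustration.

```python
# Hypothetical visual servoing rule on top of the patch-level segmentation above.
import torch

LANE, OBSTACLE = 1, 2   # assumed class indices (must match the trained head)
V_MAX, K_P = 0.3, 1.5   # assumed forward speed (m/s) and proportional steering gain

def servo_command(logits: torch.Tensor) -> tuple[float, float]:
    """Map (1, C, h, w) patch logits to a (linear, angular) velocity command."""
    labels = logits.argmax(dim=1)[0]                       # (h, w) per-patch class map
    h, w = labels.shape
    _, xs = torch.nonzero(labels == LANE, as_tuple=True)
    if xs.numel() == 0:
        return 0.0, 0.0                                    # no lane visible: stop
    offset = (xs.float().mean() / (w - 1)) * 2.0 - 1.0     # lane centroid in [-1, 1]
    omega = -K_P * float(offset)                           # steer to re-center the lane
    obstacle_frac = (labels[h // 2:] == OBSTACLE).float().mean()  # near-field obstacles
    v = V_MAX * (1.0 - float(obstacle_frac))               # slow down near obstacles
    return v, omega
```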