ViPNet: An End-to-End 6D Visual Camera Pose Regression Network

Haohao Hu, Aoran Wang, Marc Sons, M. Lauer
2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), September 20, 2020
DOI: 10.1109/ITSC45102.2020.9294630
In this work, we present ViPNet, a visual pose regression network that is robust and real-time capable on mobile platforms such as self-driving vehicles. We train a convolutional neural network to estimate the six-degrees-of-freedom camera pose from a single monocular image in an end-to-end manner. To estimate camera poses with uncertainty, we use a Bayesian variant of ResNet-50 as our base network. Squeeze-and-Excitation blocks (SE blocks) are applied in the residual units to increase the model's sensitivity to informative features. ViPNet is trained with a geometric loss function with trainable weighting parameters, which simplifies the fine-tuning process significantly. We evaluate ViPNet on the Cambridge Landmarks dataset and on our Karl-Wilhelm-Plaza dataset, which was recorded with an experimental vehicle. In our evaluation, ViPNet outperforms other end-to-end monocular camera pose estimation methods. ViPNet requires only 9–15 ms to predict one camera pose, which allows it to run at a very high frequency.
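The "geometric loss function with trainable parameters" mentioned in the abstract is, in PoseNet-style pose regression, commonly realized as a homoscedastic-uncertainty weighting between the position and rotation terms, where two learned log-variance scalars balance the two error terms during training. A minimal NumPy sketch under that assumption follows; the function names, the Euclidean error metric, and the symbols `s_x` and `s_q` are illustrative, not taken from the paper:

```python
import numpy as np

def pose_errors(p, p_hat, q, q_hat):
    """Position error (Euclidean distance) and rotation error between
    quaternions, as typically used in pose regression losses."""
    pos_err = np.linalg.norm(p - p_hat)
    # Normalize the predicted quaternion before comparing, since a
    # network's raw output is not constrained to unit length.
    rot_err = np.linalg.norm(q - q_hat / np.linalg.norm(q_hat))
    return pos_err, rot_err

def geometric_loss(pos_err, rot_err, s_x, s_q):
    """Combine the two errors with trainable log-variance weights
    s_x and s_q. The exp(-s) factors balance position and rotation
    automatically as training proceeds; the additive s terms act as
    a regularizer so the learned scales cannot grow without bound."""
    return pos_err * np.exp(-s_x) + s_x + rot_err * np.exp(-s_q) + s_q
```

Because `s_x` and `s_q` are optimized jointly with the network weights, this formulation removes the need to hand-tune a fixed balancing factor between translation and rotation, which is what makes fine-tuning on a new scene simpler.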