{"title":"Pitch and Roll Camera Orientation from a Single 2D Image Using Convolutional Neural Networks","authors":"Greg Olmschenk, Hao Tang, Zhigang Zhu","doi":"10.1109/CRV.2017.53","DOIUrl":null,"url":null,"abstract":"In this paper, we propose using convolutional neural networks (CNNs) to automatically determine the pitch and roll of a camera using a single, scene agnostic, 2D image. We compared a linear regressor, a two-layer neural network, and two CNNs. We show the CNNs produce high levels of accuracy in estimating the ground truth orientations which can be used in various computer vision tasks where calculating the camera orientation is necessary or useful. By utilizing accelerometer data in an existing image dataset, we were able to provide the large camera orientation ground truth dataset needed to train such a network with approximately correct values. The trained network is then fine-tuned to smaller datasets with exact camera orientation labels. Additionally, the network is fine-tuned to a dataset with different intrinsic camera parameters to demonstrate the transferability of the network.","PeriodicalId":308760,"journal":{"name":"2017 14th Conference on Computer and Robot Vision (CRV)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 14th Conference on Computer and Robot Vision (CRV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CRV.2017.53","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12
Abstract
In this paper, we propose using convolutional neural networks (CNNs) to automatically determine the pitch and roll of a camera from a single, scene-agnostic, 2D image. We compared a linear regressor, a two-layer neural network, and two CNNs. We show that the CNNs achieve high accuracy in estimating the ground-truth orientations, which can be used in various computer vision tasks where knowing the camera orientation is necessary or useful. By utilizing accelerometer data from an existing image dataset, we were able to provide the large, approximately labeled ground-truth camera orientation dataset needed to train such a network. The trained network is then fine-tuned on smaller datasets with exact camera orientation labels. Additionally, the network is fine-tuned on a dataset with different intrinsic camera parameters to demonstrate its transferability.
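To make the regression setup concrete, the sketch below shows, in PyTorch, a small CNN that maps a single RGB image to two outputs (pitch and roll) and is trained with a mean-squared-error loss against approximate labels such as those derived from accelerometer data. This is an illustrative assumption, not the authors' architecture; the layer sizes, input resolution, and names (PitchRollCNN) are made up, and fine-tuning on a smaller, exactly labeled dataset would simply continue training these same weights.

```python
# Hypothetical sketch of pitch/roll regression from a single image.
# Architecture and sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class PitchRollCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling -> (N, 64, 1, 1)
        )
        self.regressor = nn.Linear(64, 2)   # two outputs: pitch and roll

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

model = PitchRollCNN()
images = torch.randn(8, 3, 128, 128)        # dummy batch of input images
angles = torch.randn(8, 2)                  # approximate (pitch, roll) labels,
                                            # e.g. derived from accelerometer data
loss = nn.functional.mse_loss(model(images), angles)
loss.backward()                             # one standard regression training step
```

Under this setup, transferring to a camera with different intrinsic parameters amounts to resuming training of the same model on the new dataset, typically with a lower learning rate.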