{"title":"Expandable Spherical Projection and Feature Fusion Methods for Object Detection from Fisheye Images","authors":"Songeun Kim, Soon-Yong Park","doi":"10.23919/MVA51890.2021.9511379","DOIUrl":null,"url":null,"abstract":"One of the key requirements for enhanced autonomous driving systems is accurate detection of the objects from a wide range of view. Large-angle images from a fisheye lens camera can be an effective solution for automotive applications. However, it comes with the cost of strong radial distortions. In particular, the fisheye camera has a photographic effect of exaggerating the size of objects in central regions of the image, while making objects near the marginal area appear smaller. Therefore, we propose the Expandable Spherical Projection that expands center or margin regions to produce straight edges of de-warped objects with less unwanted background in the bounding boxes. In addition to this, we analyze the influence of multi-scale feature fusion in a real-time object detector, which learns to extract more meaningful information for small objects. We present three different types of concatenated YOLOv3-SPP architectures. Moreover, we demonstrate the effectiveness of our proposed projection and feature-fusion using multiple fisheye lens datasets, which shows up to 4.7% AP improvement compared to fisheye images and baseline model.","PeriodicalId":312481,"journal":{"name":"2021 17th International Conference on Machine Vision and Applications (MVA)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 17th International Conference on Machine Vision and Applications (MVA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.23919/MVA51890.2021.9511379","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
One of the key requirements for enhanced autonomous driving systems is accurate detection of objects over a wide field of view. Large-angle images from a fisheye lens camera can be an effective solution for automotive applications, but they come at the cost of strong radial distortion. In particular, the fisheye camera exaggerates the size of objects in the central region of the image while making objects near the margins appear smaller. We therefore propose the Expandable Spherical Projection, which expands the center or margin regions to produce straight edges on de-warped objects and less unwanted background inside the bounding boxes. In addition, we analyze the influence of multi-scale feature fusion in a real-time object detector, which learns to extract more meaningful information for small objects, and we present three types of concatenated YOLOv3-SPP architectures. We demonstrate the effectiveness of the proposed projection and feature fusion on multiple fisheye lens datasets, showing up to a 4.7% AP improvement over raw fisheye images and the baseline model.
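To make the projection idea concrete, the sketch below is a minimal illustration, not the authors' implementation: assuming an equidistant fisheye model, it de-warps a fisheye image onto a spherical (equirectangular) grid and exposes a single `expand` parameter that stretches either the central or the marginal region of the output, loosely mimicking the expandable behaviour the abstract describes. The function name, the power-law expansion, and all parameters are illustrative assumptions.

```python
import numpy as np
import cv2  # used only for the final remapping/interpolation step


def fisheye_to_spherical(img, fov_deg=180.0, expand=1.0):
    """Warp an equidistant fisheye image onto a spherical (equirectangular) grid.

    expand > 1 stretches the central region of the output (a stand-in for the
    paper's "expandable" behaviour); expand < 1 stretches the margins instead.
    """
    h_out, w_out = img.shape[0], img.shape[1]
    fov = np.deg2rad(fov_deg)

    # Longitude/latitude grid of the output spherical image.
    lon = np.linspace(-fov / 2, fov / 2, w_out)
    lat = np.linspace(-fov / 2, fov / 2, h_out)
    lon, lat = np.meshgrid(lon, lat)

    # Non-uniform expansion (assumed form): a power law that compresses or
    # stretches angles toward the optical axis.
    lon = np.sign(lon) * (np.abs(lon) / (fov / 2)) ** expand * (fov / 2)
    lat = np.sign(lat) * (np.abs(lat) / (fov / 2)) ** expand * (fov / 2)

    # Viewing ray for each spherical pixel.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # Equidistant fisheye model: image radius proportional to the angle
    # between the ray and the optical axis.
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    phi = np.arctan2(y, x)
    r = theta / (fov / 2)  # normalised radius in [0, 1]

    cx, cy = (img.shape[1] - 1) / 2, (img.shape[0] - 1) / 2
    map_x = (cx + r * np.cos(phi) * cx).astype(np.float32)
    map_y = (cy + r * np.sin(phi) * cy).astype(np.float32)

    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)


# Example usage (hypothetical file name): expand the central region by 20%.
# img = cv2.imread("fisheye_frame.png")
# dewarped = fisheye_to_spherical(img, fov_deg=180.0, expand=1.2)
```

The key design point mirrored here is that a single scalar controls where resolution is spent: values above 1 allocate more output pixels to the image center, values below 1 to the margins, which is how a de-warped image can trade off the size exaggeration at the center against the shrinkage near the edges before it is fed to the detector.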