Deep Learning-Based Vehicle Orientation Estimation with Analysis of Training Models on Virtual-Worlds
Jongkuk Park, Y. Yoon, Jahng-Hyeon Park
2019 10th International Conference on Information, Intelligence, Systems and Applications (IISA), July 2019
DOI: 10.1109/IISA.2019.8900756
This paper addresses the issue that the most commonly used ADAS sensors, the monocular camera and radar, do not provide rich information about dynamically changing road scenes. To make the camera more useful across a wide range of ADAS functions, we present an approach that estimates the orientation of surrounding vehicles using a deep neural network. We show that the camera-based method can become more competitive, evaluating it on the KITTI Orientation Estimation Benchmark and verifying it in our own test-driving scenarios. Although its localization performance is not perfect, our model reliably predicts orientation when favorable conditions are given. In addition, we further study training models on synthetic datasets, and we report the weaknesses of this method relative to a LiDAR-based approach under several conditions, such as fully visible, lightly/heavily occluded, and shaded/varied-lighting circumstances.
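The abstract does not detail the network's output representation, but camera-based orientation heads are commonly trained to regress (sin θ, cos θ) rather than the raw angle, which avoids the 2π wrap-around discontinuity. The following sketch is illustrative only and is not the authors' implementation; `decode_orientation` is a hypothetical helper showing how an angle would be recovered from such a head:

```python
import numpy as np

def decode_orientation(sin_cos):
    """Recover a yaw angle (radians) from a network's (sin, cos) output head.

    The raw outputs need not lie on the unit circle, so they are
    normalized before the quadrant-aware arctan2 recovers the angle.
    """
    s, c = sin_cos
    norm = np.hypot(s, c)
    return float(np.arctan2(s / norm, c / norm))

# A head predicting (0, 1) corresponds to a vehicle heading of 0 rad;
# (1, 0) corresponds to +pi/2 rad.
angle = decode_orientation((1.0, 0.0))
```

A classification-plus-regression scheme over discrete angle bins (as in MultiBin-style estimators) is another common choice; either way, the decoded angle can be scored against KITTI's orientation metric.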