DIRD is an illumination robust descriptor
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856421
Henning Lategahn, Johannes Beck, C. Stiller
Many robotics applications nowadays use cameras for various tasks such as place recognition, localization, and mapping. These methods depend heavily on image descriptors. A plethora of descriptors have recently been introduced, but hardly any address the problem of illumination robustness. Herein we introduce an illumination robust image descriptor which we dub DIRD (DIRD is an Illumination Robust Descriptor). First, a set of Haar features is computed and the individual pixel responses are normalized to unit L2 length. Thereafter, the features are pooled over a predefined neighborhood region. The concatenation of several such features forms the basis of the DIRD vector. These features are then quantized to maximize entropy, allowing, among other variants, a binary version of DIRD consisting of only ones and zeros for very fast matching. We evaluate DIRD on three test sets and compare its performance with (extended) USURF, BRIEF, and a baseline gray level descriptor. All proposed DIRD variants substantially outperform these methods, at times more than doubling the performance of USURF and BRIEF.
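As a rough, hedged illustration of the pipeline described above (Haar-like filter responses, per-pixel L2 normalization, pooling over a neighborhood, and binarization for fast Hamming matching), a minimal sketch could look as follows. The filter set, patch size, pooling grid, and binarization rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def haar_responses(patch):
    """A few simple Haar-like filter responses per pixel.
    The filter set here (horizontal/vertical/diagonal differences) is an
    illustrative assumption, not the filter bank used in the paper."""
    p = patch.astype(float)
    gx = np.zeros_like(p)
    gy = np.zeros_like(p)
    gx[:, 1:] = np.diff(p, axis=1)   # horizontal edge response
    gy[1:, :] = np.diff(p, axis=0)   # vertical edge response
    return np.stack([gx, gy, gx + gy, gx - gy], axis=-1)   # H x W x 4

def dird_like_descriptor(patch, grid=4):
    """Sketch of an illumination-robust descriptor: normalize per-pixel
    responses to unit L2 length, then pool (average) over a grid of cells."""
    resp = haar_responses(patch)
    resp /= np.linalg.norm(resp, axis=-1, keepdims=True) + 1e-8   # per-pixel L2 normalization
    h, w, _ = resp.shape
    cells = []
    for i in range(grid):
        for j in range(grid):
            cell = resp[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            cells.append(cell.mean(axis=(0, 1)))                  # pooling over the cell
    vec = np.concatenate(cells)
    return (vec > np.median(vec)).astype(np.uint8)                # crude binarization

# Matching two patches by Hamming distance on the binary descriptors.
d1 = dird_like_descriptor(np.random.randint(0, 255, (32, 32)))
d2 = dird_like_descriptor(np.random.randint(0, 255, (32, 32)))
hamming = int(np.count_nonzero(d1 != d2))
```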
{"title":"DIRD is an illumination robust descriptor","authors":"Henning Lategahn, Johannes Beck, C. Stiller","doi":"10.1109/IVS.2014.6856421","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856421","url":null,"abstract":"Many robotics applications nowadays use cameras for various task such as place recognition, localization, mapping etc. These methods heavily depend on image descriptors. A plethora of descriptors have recently been introduced but hardly any address the problem of illumination robustness. Herein we introduce an illumination robust image descriptor which we dub DIRD (Dird is an Illumination Robust Descriptor). First a set of Haar features are computed and individual pixel responses are normalized to L2 unit length. Thereafter features are pooled over a predefined neighborhood region. The concatenation of several such features form the basis DIRD vector. These features are then quantized to maximize entropy allowing (among others) a binary version of DIRD consisting of only ones and zeros for very fast matching. We evaluate DIRD on three test sets and compare its performance with (extended) USURF, BRIEF and a baseline gray level descriptor. All proposed DIRD variants substantially outperform these methods by times more than doubling the performance of USURF and BRIEF.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126426154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extrinsic calibration of a fisheye multi-camera setup using overlapping fields of view
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856403
Moritz Knorr, José Esparza, W. Niehsen, C. Stiller
It is well known that the robustness of many computer vision algorithms can be improved by employing large field-of-view cameras, such as omnidirectional cameras. To avoid obstructions in the field of view, such cameras need to be mounted in an exposed position. Alternatively, a multi-camera setup can be used. However, this requires the extrinsic calibration to be known. In the present work, we propose a method to calibrate a fisheye multi-camera rig mounted on a mobile platform. The method relies only on feature correspondences from the pairwise overlapping fields of view of adjacent cameras. In contrast to existing approaches, motion estimation or specific motion patterns are not required. To compensate for the large extent of multi-camera setups and the corresponding viewpoint variations, as well as the geometric distortions caused by fisheye lenses, captured images are mapped into virtual camera views such that corresponding image regions coincide. To this end, the scene geometry is approximated by the ground plane in close proximity and by infinitely far away objects elsewhere. As a result, low-complexity feature detectors and matchers can be employed. The approach is evaluated using a setup of four rigidly coupled and synchronized wide-angle fisheye cameras attached to the four sides of a mobile platform. The cameras have pairwise overlapping fields of view and baselines between 2.25 and 3 meters.
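The "virtual camera view" idea can be illustrated by warping the ground-plane part of each image into a common bird's-eye view via a homography, after which corresponding regions from adjacent cameras roughly coincide and simple feature matchers suffice. The sketch below uses OpenCV with placeholder point correspondences and is an assumption-laden illustration, not the authors' pipeline.

```python
import cv2
import numpy as np

# Hypothetical pixel locations of four ground-plane points in one undistorted fisheye
# view, and their metric positions on the ground, mapped into a common bird's-eye
# image at 50 px/m. All numbers are placeholders.
img_pts = np.float32([[300, 400], [500, 410], [520, 600], [280, 610]])
ground_m = np.float32([[2.0, 4.0], [4.0, 4.0], [4.0, 2.0], [2.0, 2.0]])
bev_pts = ground_m * 50.0

H = cv2.getPerspectiveTransform(img_pts, bev_pts)   # ground-plane homography

img = cv2.imread("cam0_undistorted.png")            # hypothetical undistorted image
if img is not None:
    birdseye = cv2.warpPerspective(img, H, (400, 400))
    # In the common bird's-eye frame, adjacent cameras see the ground plane under
    # (approximately) the same viewpoint, so low-complexity detectors/matchers
    # suffice, e.g. ORB keypoints matched by brute-force Hamming distance.
    gray = cv2.cvtColor(birdseye, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(gray, None)
```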
{"title":"Extrinsic calibration of a fisheye multi-camera setup using overlapping fields of view","authors":"Moritz Knorr, José Esparza, W. Niehsen, C. Stiller","doi":"10.1109/IVS.2014.6856403","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856403","url":null,"abstract":"It is well known that the robustness of many computer vision algorithms can be improved by employing large field of view cameras, such as omnidirectional cameras. To avoid obstructions in the field of view, such cameras need to be mounted in an exposed position. Alternatively, a multicamera setup can be used. However, this requires the extrinsic calibration to be known. In the present work, we propose a method to calibrate a fisheye multi-camera rig, mounted on a mobile platform. The method only relies on feature correspondences from pairwise overlapping fields of view of adjacent cameras. In contrast to existing approaches, motion estimation or specific motion patterns are not required. To compensate for the large extent of multi-camera setups and corresponding viewpoint variations, as well as geometrical distortions caused by fisheye lenses, captured images are mapped into virtual camera views such that corresponding image regions coincide. To this end, the scene geometry is approximated by the ground plane in close proximity and by infinitely far away objects elsewhere. As a result, low complexity feature detectors and matchers can be employed. The approach is evaluated using a setup of four rigidly coupled and synchronized wide angle fisheye cameras that were attached to four sides of a mobile platform. The cameras have pairwise overlapping fields of view and baselines between 2.25 and 3 meters.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128772235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The need for GNSS position integrity and authentication in ITS: Conceptual and practical limitations in urban contexts
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856485
D. Margaria, E. Falletti, T. Acarman
This tutorial paper highlights possible issues related to the integrity and authentication of the GNSS position in road applications. The Global Navigation Satellite System (GNSS) community is already aware of the conceptual and practical problems related to the availability of position integrity (i.e., position confidence and protection level) and authentication in urban scenarios. However, these issues seem not to be widely known in the Intelligent Transportation Systems (ITS) domain. These limitations need to be carefully considered and addressed with a view to deploying reliable and robust systems based on positioning information.
{"title":"The need for GNSS position integrity and authentication in ITS: Conceptual and practical limitations in urban contexts","authors":"D. Margaria, E. Falletti, T. Acarman","doi":"10.1109/IVS.2014.6856485","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856485","url":null,"abstract":"This tutorial paper highlights possible issues related to the integrity and authentication of the GNSS position in road applications. In fact, the Global Navigation Satellite System (GNSS) community is already aware of the conceptual and practical problems related to the availability of the position integrity (i.e. position confidence, protection level) and authentication in urban scenarios. However, these issues seem not to be widely known in the Intelligent Transportation Systems (ITS) domain. These limitations need to be carefully considered and addressed in the perspective of deploying reliable and robust systems based on positioning information.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128948090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lanelets: Efficient map representation for autonomous driving
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856487
Philipp Bender, Julius Ziegler, C. Stiller
In this paper we propose a highly detailed map for the field of autonomous driving. We introduce the notion of lanelets to represent the drivable environment under both geometrical and topological aspects. Lanelets are atomic, interconnected, drivable road segments which may carry additional data to describe the static environment. We describe the map specification, an example creation process, as well as the access library libLanelet, which is available for download. Based on the map, we briefly describe our behavioural layer (which we call behaviour generation), which heavily exploits the proposed map structure. Both contributions were used throughout the autonomous journey of the Mercedes Benz S 500 Intelligent Drive along the Bertha Benz Memorial Route in summer 2013.
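A minimal sketch of what an atomic, interconnected lanelet could look like as a data structure, with left and right bounds as polylines plus topological links to successor lanelets, is given below. Field names and methods are illustrative assumptions and do not mirror the libLanelet API.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) in a local metric frame

@dataclass
class Lanelet:
    """An atomic drivable road segment bounded by a left and a right polyline."""
    lanelet_id: int
    left_bound: List[Point]
    right_bound: List[Point]
    successors: List[int] = field(default_factory=list)   # topological links to following lanelets
    attributes: dict = field(default_factory=dict)          # e.g. speed limit, lane markings

    def centerline(self) -> List[Point]:
        """Approximate centerline by averaging paired bound points."""
        return [((lx + rx) / 2.0, (ly + ry) / 2.0)
                for (lx, ly), (rx, ry) in zip(self.left_bound, self.right_bound)]

# A two-lanelet map: a straight segment followed by its successor.
a = Lanelet(1, [(0, 1), (10, 1)], [(0, -1), (10, -1)], successors=[2])
b = Lanelet(2, [(10, 1), (20, 1)], [(10, -1), (20, -1)])
lanelet_map = {l.lanelet_id: l for l in (a, b)}

# Routing on the topology reduces to graph search over successor links.
route = [lanelet_map[1], lanelet_map[lanelet_map[1].successors[0]]]
```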
{"title":"Lanelets: Efficient map representation for autonomous driving","authors":"Philipp Bender, Julius Ziegler, C. Stiller","doi":"10.1109/IVS.2014.6856487","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856487","url":null,"abstract":"In this paper we propose a highly detailed map for the field of autonomous driving. We introduce the notion of lanelets to represent the drivable environment under both geometrical and topological aspects. Lanelets are atomic, interconnected drivable road segments which may carry additional data to describe the static environment. We describe the map specification, an example creation process as well as the access library libLanelet which is available for download. Based on the map, we briefly describe our behavioural layer (which we call behaviour generation) which is heavily exploiting the proposed map structure. Both contributions have been used throughout the autonomous journey of the Mercedes Benz S 500 Intelligent Drive following the Bertha Benz Memorial Route in summer 2013.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129170997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Traffic Knowledge Aided Vehicle Motion Planning Engine Based on Space Exploration Guided Heuristic Search
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856458
Chaoyong Chen, Markus Rickert, A. Knoll
A real-time vehicle motion planning engine is presented in this paper, with a focus on exploiting prior and online traffic knowledge, e.g., a predefined roadmap, prior environment information, and behaviour-based motion primitives, within the space exploration guided heuristic search (SEHS) framework. The SEHS algorithm plans a kinodynamic vehicle motion in two steps: a geometric investigation of the free space, followed by a grid-free heuristic search employing motion primitives. Both procedures are generic and can take advantage of traffic knowledge. In this paper, the space exploration is supported by a roadmap and the heuristic search benefits from the behaviour-based primitives. Based on this idea, a lightweight motion planning engine is built to handle traffic knowledge and planning time in real-time motion planning. The experiments demonstrate that this SEHS motion planning engine is flexible and scalable for practical traffic scenarios and achieves better results than the baseline SEHS motion planner when traffic knowledge is provided.
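The two-step structure (a circle-based exploration of the free space, followed by a grid-free heuristic search over motion primitives) could be outlined roughly as below. The primitive set, cost terms, and goal test are assumptions made purely for illustration.

```python
import heapq
import math

# Step 1 (space exploration): assume a channel of collision-free circles from start
# to goal has already been found; in SEHS-style planners it shapes the heuristic.
channel = [((0.0, 0.0), 3.0), ((5.0, 1.0), 3.0), ((10.0, 2.0), 3.0)]  # ((x, y), radius)
goal = (10.0, 2.0)

# Hypothetical behaviour-based motion primitives: (arc length, curvature).
primitives = [(1.0, 0.0), (1.0, 0.2), (1.0, -0.2)]

def apply_primitive(state, prim):
    """Propagate a simple kinematic state (x, y, heading) along a primitive."""
    x, y, th = state
    s, kappa = prim
    th_new = th + kappa * s
    return (x + s * math.cos(th_new), y + s * math.sin(th_new), th_new)

def heuristic(state):
    """Euclidean distance to goal; the circle channel would normally refine this."""
    return math.hypot(goal[0] - state[0], goal[1] - state[1])

def sehs_like_search(start, max_expansions=5000):
    """Step 2: grid-free heuristic (A*-like) search over motion primitives."""
    open_list = [(heuristic(start), 0.0, start, [start])]
    while open_list and max_expansions > 0:
        max_expansions -= 1
        f, g, state, path = heapq.heappop(open_list)
        if heuristic(state) < 0.5:                 # close enough to the goal
            return path
        for prim in primitives:
            nxt = apply_primitive(state, prim)
            g_new = g + prim[0]                    # cost = travelled arc length
            heapq.heappush(open_list, (g_new + heuristic(nxt), g_new, nxt, path + [nxt]))
    return None

path = sehs_like_search((0.0, 0.0, 0.0))
```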
{"title":"A Traffic Knowledge Aided Vehicle Motion Planning Engine Based on Space Exploration Guided Heuristic Search","authors":"Chaoyong Chen, Markus Rickert, A. Knoll","doi":"10.1109/IVS.2014.6856458","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856458","url":null,"abstract":"A real-time vehicle motion planning engine is presented in this paper, with the focus on exploiting the prior and online traffic knowledge, e.g., predefined roadmap, prior environment information, behaviour-based motion primitives, within the space exploration guided heuristic search (SEHS) framework. The SEHS algorithm plans a kinodynamic vehicle motion in two steps: a geometric investigation of the free space, followed by a grid-free heuristic search employing primitive motions. These two procedures are generic and possible to take advantage of traffic knowledge. In this paper, the space exploration is supported by a roadmap and the heuristic search benefits from the behaviour-based primitives. Based on this idea, a light weighted motion planning engine is built, with the purpose to handle the traffic knowledge and the planning time in real-time motion planning. The experiments demonstrate that this SEHS motion planning engine is flexible and scalable for practical traffic scenarios with better results than the baseline SEHS motion planner regarding the provided traffic knowledge.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132619820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cell phone subscribers mobility prediction using enhanced Markov Chain algorithm
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856442
Amnir Hadachi, Oleg Batrashev, Artjom Lind, Georg Singer, E. Vainikko
This article presents a mobility prediction method for mobile phone users based on an enhanced Markov chain algorithm. Mobile phone data is highly dynamic and sparsely sampled; therefore, predicting a user's future location poses a challenge. Our enhancement can be summarized as association rules embedded into a Markov chain algorithm. The proposed solution is promising for the next generation of mobile networks and can be used to optimize the existing mobile network infrastructure, road traffic, tracking systems, and localization. Validation of our system was carried out using real data collected from the field.
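A bare first-order Markov chain predictor over cell IDs, which the paper enhances with embedded association rules, might look like the following sketch; the data format and the rule hook are assumptions.

```python
from collections import defaultdict

def train_transition_counts(trajectories):
    """Count transitions between consecutive cell IDs in each user trajectory."""
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for prev_cell, next_cell in zip(traj, traj[1:]):
            counts[prev_cell][next_cell] += 1
    return counts

def predict_next(counts, current_cell, rules=None):
    """Predict the most likely next cell; 'rules' is a placeholder hook for the
    paper's embedded association rules (e.g. time-of-day overrides)."""
    if rules:
        for rule in rules:
            hit = rule(current_cell)
            if hit is not None:
                return hit
    successors = counts.get(current_cell)
    if not successors:
        return None
    total = sum(successors.values())
    return max(successors, key=lambda c: successors[c] / total)

# Toy usage on sparsely sampled cell-ID sequences.
trajectories = [["A", "B", "C"], ["A", "B", "D"], ["B", "C"]]
counts = train_transition_counts(trajectories)
print(predict_next(counts, "B"))   # -> "C"
```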
{"title":"Cell phone subscribers mobility prediction using enhanced Markov Chain algorithm","authors":"Amnir Hadachi, Oleg Batrashev, Artjom Lind, Georg Singer, E. Vainikko","doi":"10.1109/IVS.2014.6856442","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856442","url":null,"abstract":"This article presents a mobility prediction method for mobile phone users based on an enhanced Markov Chain algorithm. The mobile phone data has a highly dynamic nature and a sparcely sampled aspect; therefore, the prediction of user's mobility location poses a challenge. Our enhancement approach can be summarized as an embedded association of rules applied to a Markov chain algorithm. The proposed solution is encouraging for the next generation of mobile networks and it can be used to optimize the existing mobile network infrastructure, road traffic, tracking systems and localization. Validation of our system was carried out using real data collected from the field.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122195749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel method for extrinsic parameters estimation between a single-line scan LiDAR and a camera
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856408
Pakapoj Tulsuk, Panu Srestasathiern, M. Ruchanurucks, T. Phatrapornnant, H. Nagahashi
This paper presents a novel method for estimating the extrinsic parameters between a single-line scan LiDAR and a camera. Using a checkerboard, the calibration setup is simple and practical. In particular, the proposed calibration method is based on resolving the geometry of the checkerboard that is visible to both the camera and the LiDAR. The calibration setup geometry is described by planes, lines, and points. Our novelty is a new geometric hypothesis: the orthogonal distances between the LiDAR points and the line formed by the intersection of the checkerboard and the LiDAR scan plane. To evaluate the performance of the proposed method, we compared it with the state-of-the-art method of Zhang and Pless [1]. The experimental results showed that the proposed method yields better results.
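The core geometric constraint, the orthogonal distance between each LiDAR point on the board and the line where the LiDAR scan plane intersects the checkerboard plane, can be expressed as a least-squares cost over the unknown rotation and translation. A hedged sketch with assumed parameterization and data layout:

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector to rotation matrix (standard Rodrigues formula)."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def point_to_line_distance(p, line_point, line_dir):
    """Orthogonal distance from point p to the line through line_point with unit direction line_dir."""
    diff = p - line_point
    return np.linalg.norm(diff - np.dot(diff, line_dir) * line_dir)

def calibration_cost(rvec, t, lidar_points, board_lines):
    """Sum of squared orthogonal distances between LiDAR points (transformed into
    the camera frame) and the checkerboard/scan-plane intersection lines.
    lidar_points: list of (N_i, 3) arrays, one per checkerboard pose.
    board_lines:  list of (point_on_line, unit_direction) pairs in the camera frame."""
    R = rodrigues(rvec)
    t = np.asarray(t, dtype=float)
    cost = 0.0
    for pts, (lp, ld) in zip(lidar_points, board_lines):
        pts_cam = (R @ np.asarray(pts, dtype=float).T).T + t   # LiDAR frame -> camera frame
        for p in pts_cam:
            cost += point_to_line_distance(p, np.asarray(lp, float), np.asarray(ld, float)) ** 2
    return cost

# The six extrinsic parameters (rvec, t) would then be estimated by feeding this
# cost to a nonlinear least-squares solver over all checkerboard poses.
```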
{"title":"A novel method for extrinsic parameters estimation between a single-line scan LiDAR and a camera","authors":"Pakapoj Tulsuk, Panu Srestasathiern, M. Ruchanurucks, T. Phatrapornnant, H. Nagahashi","doi":"10.1109/IVS.2014.6856408","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856408","url":null,"abstract":"This paper presents a novel method for extrinsic parameters estimation of a single line scan LiDAR and a camera. Using a checkerboard, the calibration setup is simple and practical. Particularly, the proposed calibration method is based on resolving geometry of the checkerboard that visible to the camera and the LiDAR. The calibration setup geometry is described by planes, lines and points. Our novelty is a new hypothesis of the geometry which is the orthogonal distances between LiDAR points and the line from the intersection between the checkerboard and LiDAR scan plane. To evaluate the performance of the proposed method, we compared our proposed method with the state of the art method i.e. Zhang and Pless [1]. The experimental results showed that the proposed method yielded better results.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127017378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Results of initial test and evaluation of a Driver-Assistive Truck Platooning prototype
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856585
R. Bishop, D. Bevly, Joshua Switkes, Lisa Park
This paper describes the results to date of a project to prototype, evaluate, and test Driver-Assistive Truck Platooning (DATP), which could yield significant safety and fuel-savings benefits for heavy truck operations. The project is led by Auburn University and funded within the Federal Highway Administration Exploratory Advanced Research program. This paper provides selected results from Phase One, which is currently exploring a range of technical and non-technical issues, including an assessment of real-world business and operational issues within the trucking industry. Specific technical sections address sensing and computing hardware; the driver interface; sensor and actuator software and interfacing; control software; and the operational environment.
{"title":"Results of initial test and evaluation of a Driver-Assistive Truck Platooning prototype","authors":"R. Bishop, D. Bevly, Joshua Switkes, Lisa Park","doi":"10.1109/IVS.2014.6856585","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856585","url":null,"abstract":"This paper describes results to date of a project to prototype, evaluate, and test Driver-Assistive Truck Platooning (DATP), which could have significant positive safety and fuel savings potential for heavy truck operations. The project is led by Auburn University and funded within the Federal Highway Administration Exploratory Advanced Research program. This paper provides selected results from Phase One, which is currently exploring a range of technical and non-technical issues, including assessing real-world business and operational issues within the trucking industry. Specific technical sections address sensing and computing hardware; driver interface; sensor and actuator software and interfacing; control software; and operational environment.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"428 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132385965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate segmentation of moving objects and their shadows via brightness ratios and movement patterns
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856465
Huan Huang, Shiying Li, Kai Tang, Renfa Li
We present a two-stage method to accurately segment single or multiple moving objects and their shadows, especially when the moving objects have chromaticity and intensity similar to their shadows or when they are immersed in the shadows of other moving objects. Our algorithm first detects potential shadows via brightness ratios within each motion region, which has already been separated from the background of the image sequence. Movement patterns are then applied to refine the regions of moving objects and their shadows. We conducted experiments using our own captured image sequences and the public Highway I and II videos to verify our method. The results demonstrate the method's effectiveness quantitatively and qualitatively in comparison with ground truth and several advanced methods.
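A very rough version of the first stage, flagging pixels in a motion region whose brightness ratio to the background suggests shadow, could look like this; the thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def potential_shadow_mask(frame_gray, background_gray, motion_mask,
                          low=0.4, high=0.95):
    """Within the motion region, pixels darker than the background by a bounded
    ratio are flagged as potential shadow; ratios near 1 are treated as background
    noise and very small ratios as object. 'low' and 'high' are placeholder values."""
    ratio = (frame_gray.astype(float) + 1.0) / (background_gray.astype(float) + 1.0)
    return (ratio >= low) & (ratio <= high) & motion_mask.astype(bool)

# Toy usage with synthetic 8-bit images.
bg = np.full((4, 4), 200, dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 120                              # a darker patch, as a shadow would appear
motion = np.zeros((4, 4), dtype=bool)
motion[1:3, 1:3] = True
mask = potential_shadow_mask(frame, bg, motion)
# The paper's second stage then refines object/shadow regions using movement patterns.
```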
{"title":"Accurate segmentation of moving objects and their shadows via brightness ratios and movement patterns","authors":"Huan Huang, Shiying Li, Kai Tang, Renfa Li","doi":"10.1109/IVS.2014.6856465","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856465","url":null,"abstract":"We present a two-stage method to accurately segment single or multiple moving objects and their shadows, especially when the moving objects have similar chromaticity and intensity to their shadows or when they are immersed in the shadows of other moving objects. Our algorithm first detects potential shadows via brightness ratios at each motion region, which is already separated from the background of an image sequence. Movement patterns are then applied to optimize the regions of moving objects and their shadows. We conducted experiments using our captured image sequences and public videos of Highway I and II to verify our method. The results demonstrate the method's efficiency quantitatively and qualitatively in comparison with ground truth and several advanced methods.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133396359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Multiple Attribute-based Decision Making model for autonomous vehicle in urban environment
Pub Date: 2014-06-08, DOI: 10.1109/IVS.2014.6856470
Jiajia Chen, Pan Zhao, Huawei Liang, Tao Mei
In this paper, a maneuver decision-making method for autonomous vehicles in complex urban environments is studied. We decompose the decision-making problem into three steps. The first step selects the logical maneuvers; the second step removes maneuvers that break traffic rules. In the third step, Multiple Attribute Decision Making (MADM) methods, namely the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), are used to select the optimum driving maneuver in the scenario, considering safety and efficiency. AHP is used to obtain the attribute weights, while TOPSIS calculates the ratings and ranks the alternatives. Road tests indicate that the proposed method helps the autonomous vehicle make reasonable decisions in complex environments. In general, the experimental results show that this method is efficient and reliable.
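As an illustration of the AHP plus TOPSIS step, the sketch below ranks candidate maneuvers by their TOPSIS closeness to the ideal solution, given attribute weights such as AHP would supply; the maneuvers, attributes, and numbers are invented for illustration.

```python
import numpy as np

def topsis_rank(decision_matrix, weights, benefit_mask):
    """Rank alternatives (rows) over attributes (columns) by TOPSIS closeness.
    benefit_mask[j] is True if a larger value of attribute j is better."""
    M = np.asarray(decision_matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    benefit_mask = np.asarray(benefit_mask, dtype=bool)
    V = (M / np.linalg.norm(M, axis=0)) * w          # weighted, vector-normalized matrix
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)
    return np.argsort(-closeness), closeness

# Hypothetical maneuvers scored on (safety margin, time efficiency, comfort),
# all treated as benefit attributes.
maneuvers = ["keep_lane", "change_left", "change_right"]
scores = [[0.9, 0.5, 0.8],
          [0.6, 0.8, 0.6],
          [0.7, 0.7, 0.7]]
weights = [0.5, 0.3, 0.2]          # weights as AHP pairwise comparisons would produce
order, closeness = topsis_rank(scores, weights, benefit_mask=[True, True, True])
best = maneuvers[order[0]]
```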
{"title":"A Multiple Attribute-based Decision Making model for autonomous vehicle in urban environment","authors":"Jiajia Chen, Pan Zhao, Huawei Liang, Tao Mei","doi":"10.1109/IVS.2014.6856470","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856470","url":null,"abstract":"In this paper, a maneuver decision making method for autonomous vehicle in complex urban environment is studied. We decompose the decision making problem into three steps. The first step is for selecting the logical maneuvers, in the second step we remove the maneuvers which break the traffic rules. In the third step, Multiple Attribute Decision Making (MADM) methods such as Analytic Hierarchy Process (AHP) and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) are used in the process of selecting the optimum driving maneuver in the scenario considering safety and efficiency. AHP is used for obtaining the weights of attributes, TOPSIS is responsible for calculating the ratings and ranking the alternatives. Road test indicates that the proposed method helps the autonomous vehicle to make reasonable decisions in complex environment. In general, the experiment results show that this method is efficient and reliable.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"332 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133321664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}