Integration of micro-CHP units into BEVs — Influence on the overall efficiency, emissions and the electric driving range
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856590
S. Baltzer, J. Gissing, P. Jeck, Thomas Lichius, L. Eckstein
The increasing electrification of automotive drive trains leads to new challenges in vehicle system design. Since little or no usable waste heat is available at a sufficiently high temperature level, passenger cabin heating directly reduces the electric driving range of battery electric vehicles (BEVs). The scope of this paper is to analyze the integration of micro combined heat and power (CHP) units into BEVs to provide heating energy efficiently. Both the influence on the electric driving range and the overall energy efficiency, in terms of primary energy and CO2 emissions, are investigated and compared to other heating systems for BEVs.
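The range penalty the abstract describes can be illustrated with a back-of-the-envelope energy balance. The figures below (30 kWh pack, 10 kW traction draw, 5 kW cabin heating, 50 km/h) are illustrative assumptions, not values from the paper:

```python
def electric_range_km(battery_kwh, traction_kw, heating_kw, speed_kmh):
    """Range when traction and cabin heating draw from the same battery."""
    hours = battery_kwh / (traction_kw + heating_kw)
    return speed_kmh * hours

# Resistive (PTC) heating: the full 5 kW heat demand drains the battery.
range_ptc = electric_range_km(30.0, 10.0, 5.0, 50.0)
# Micro-CHP heating: heat comes from fuel, so the battery feeds traction only.
range_chp = electric_range_km(30.0, 10.0, 0.0, 50.0)
```

With these toy numbers the heated range rises from 100 km to 150 km, which is the qualitative effect the paper quantifies with a proper vehicle and primary-energy model.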
DriveSafe: An app for alerting inattentive drivers and scoring driving behaviors
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856461
L. Bergasa, Daniel Almeria, J. Almazán, J. J. Torres, R. Arroyo
This paper presents DriveSafe, a new driver-safety app for iPhones that detects inattentive driving behaviors and gives corresponding feedback to drivers, scoring their driving and alerting them when their behavior is unsafe. It uses computer vision and pattern recognition techniques on the iPhone to assess whether the driver is drowsy or distracted, using the rear camera, the microphone, the inertial sensors, and the GPS. We present the general architecture of DriveSafe and evaluate its performance using data from 12 drivers in two studies. The first evaluates the detection of several inattentive driving behaviors, obtaining an overall precision of 82% at 92% recall. The second compares DriveSafe's scores with those of the commercial AXA Drive app, with DriveSafe's operation rated more favorably. DriveSafe is the first smartphone app based on built-in sensors that can detect inattentive behaviors while evaluating driving quality at the same time. It represents a disruptive technology because, on the one hand, it provides ADAS features similar to those found in luxury cars and, on the other, it presents a viable alternative to the “black boxes” installed in vehicles by insurance companies.
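The reported operating point (82% precision at 92% recall) follows from the standard definitions. The detection counts below are hypothetical, chosen only to land near that operating point:

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from true-positive, false-positive and
    false-negative detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts: 92 behaviors caught, 8 missed, 20 false alarms.
p, r = precision_recall(tp=92, fp=20, fn=8)
```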
Layer-based supervised classification of moving objects in outdoor dynamic environment using 3D laser scanner
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856558
A. Azim, O. Aycard
In this paper, we present a layered approach to the classification of moving objects from 3D range data based on a supervised learning technique. Our approach combines model-based classification in 2D with boosting to classify objects into four classes of interest, namely bus, car, bike, and pedestrian. In contrast to most existing work on 3D classification, which involves extensive feature extraction and description, this combination uses simple single-valued features and allows our system to perform efficiently. The proposed method can be used in conjunction with any type of range sensor; however, we have demonstrated its performance using data acquired from a Velodyne HDL-64E laser scanner.
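As a rough sketch of the boosting-over-simple-features idea, here is a minimal AdaBoost over threshold stumps on a single scalar feature. The feature (object height) and the data are invented for illustration; the paper's actual feature set and layer structure are not reproduced:

```python
import math

def stump_predict(x, thr, sign):
    """Decision stump on one scalar feature: +/-1 depending on a threshold."""
    return sign if x >= thr else -sign

def train_adaboost(xs, ys, rounds=5):
    """Minimal AdaBoost: reweight samples, pick the lowest-error stump."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        best = None
        for thr in sorted(set(xs)):
            for sign in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, thr, sign) != y)
                if best is None or err < best[0]:
                    best = (err, thr, sign)
        err, thr, sign = best
        err = min(max(err, 1e-9), 1 - 1e-9)       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)   # stump weight
        model.append((alpha, thr, sign))
        # Up-weight misclassified samples, then renormalize.
        w = [wi * math.exp(-alpha * ys[i] * stump_predict(xs[i], thr, sign))
             for i, wi in enumerate(w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def classify(model, x):
    score = sum(a * stump_predict(x, thr, sign) for a, thr, sign in model)
    return 1 if score >= 0 else -1

# Toy data: object height in metres; -1 = pedestrian-scale, +1 = vehicle-scale.
model = train_adaboost([0.5, 0.6, 0.7, 1.5, 1.6, 1.7], [-1, -1, -1, 1, 1, 1])
```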
Traffic lights detection and state estimation using Hidden Markov Models
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856486
Andrés E. Gómez, Francisco A. R. Alencar, Paulo V. S. Prado, F. Osório, D. Wolf
Detecting traffic lights on the road is important for the safety of vehicle occupants, whether in a conventional vehicle or an autonomous land vehicle. In a conventional vehicle, a system that helps the driver perceive the traffic signals relevant to driving can be critical during a delicate maneuver (e.g., crossing an intersection). Traffic light detection by an autonomous vehicle is a special case of perception, because it directly informs the control actions the vehicle must take. Many authors have used image processing as the basis for traffic light detection. However, image processing is sensitive to scene capture conditions, which degrades detection. For this reason, this paper proposes a method that links image processing with a state estimation routine based on Hidden Markov Models (HMMs). The method determines the current state of a detected traffic light from the states obtained by image processing, aiming for the best possible performance in determining the traffic light state. With the proposed method, we obtained 90.55% accuracy in detecting the traffic light state, versus 78.54% using image processing alone. Traffic light recognition by image processing still depends heavily on the capture conditions of each video frame. In this context, adding a pre-processing stage before image processing could improve this aspect and provide better results in determining the traffic light state.
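The HMM filtering step can be sketched as a standard forward (Bayes) filter over noisy per-frame detector labels. The transition and emission probabilities below are assumed values, not the paper's learned models:

```python
STATES = ("red", "yellow", "green")

# A light rarely changes between consecutive frames, so self-transitions
# dominate; the per-frame detector is right most of the time but noisy.
TRANS = {s: {t: 0.9 if s == t else 0.05 for t in STATES} for s in STATES}
EMIT = {s: {o: 0.8 if s == o else 0.1 for o in STATES} for s in STATES}

def forward_filter(observations):
    """Running posterior over light states given noisy per-frame labels."""
    belief = {s: 1.0 / len(STATES) for s in STATES}
    for obs in observations:
        # Predict: push the belief through the transition model.
        predicted = {s: sum(belief[p] * TRANS[p][s] for p in STATES)
                     for s in STATES}
        # Update: weight by how likely the detector output is in each state.
        posterior = {s: predicted[s] * EMIT[s][obs] for s in STATES}
        z = sum(posterior.values())
        belief = {s: posterior[s] / z for s in STATES}
    return belief
```

A single misdetected frame (e.g. one spurious "green" among "red" frames) is absorbed by the filter instead of flipping the estimated state, which is the effect the paper exploits.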
Evolution of optimal control for energy-efficient transport
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856455
A. Gaier, A. Asteroth
An evolutionary algorithm is presented to solve the optimal control problem of energy-optimal driving. Results show that the algorithm computes strategies equivalent to those of traditional graph search approaches such as dynamic programming or A*. The algorithm proves to be time-efficient while saving multiple orders of magnitude in memory compared to graph search techniques, making it applicable in embedded applications such as eco-driving assistants or intelligent route planning.
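A toy version of the evolutionary approach, assuming an invented per-segment cost (quadratic drag plus a travel-time penalty) rather than the paper's vehicle model, might look like:

```python
import random

def energy(profile):
    """Toy trip cost for a per-segment speed profile (m/s per unit segment):
    drag grows with v^2, and slow driving is penalized via travel time."""
    drag = sum(0.01 * v * v for v in profile)
    time = sum(1.0 / v for v in profile)
    return drag + 5.0 * time

def evolve(segments=10, pop_size=20, generations=200, seed=1):
    """(mu + lambda) evolution: keep the best half, mutate it, repeat."""
    rng = random.Random(seed)
    pop = [[rng.uniform(5, 40) for _ in range(segments)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)
        parents = pop[: pop_size // 2]
        children = [[min(40.0, max(5.0, v + rng.gauss(0, 1))) for v in p]
                    for p in parents]
        pop = parents + children
    return min(pop, key=energy)
```

The memory argument of the paper is visible even in the sketch: the population is a handful of candidate profiles, whereas dynamic programming over the same problem would store a value for every discretized (position, speed) node.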
Cell phone subscribers mobility prediction using enhanced Markov Chain algorithm
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856442
Amnir Hadachi, Oleg Batrashev, Artjom Lind, Georg Singer, E. Vainikko
This article presents a mobility prediction method for mobile phone users based on an enhanced Markov chain algorithm. Mobile phone data is highly dynamic and sparsely sampled; therefore, predicting a user's location poses a challenge. Our enhancement can be summarized as an embedded set of association rules applied to a Markov chain algorithm. The proposed solution is promising for the next generation of mobile networks and can be used to optimize the existing mobile network infrastructure, road traffic, tracking systems, and localization. Validation of our system was carried out using real data collected from the field.
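The core Markov chain step, counting observed cell-to-cell transitions and predicting the most likely next cell, can be sketched as follows (the paper's rule-based enhancement is not reproduced here):

```python
from collections import defaultdict

def build_transitions(history):
    """Count first-order transitions from a sequence of visited cells."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(history, history[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, current):
    """Most frequently observed successor of the current cell, or None."""
    successors = counts.get(current)
    if not successors:
        return None
    return max(successors, key=successors.get)
```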
A multi-modal system for road detection and segmentation
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856466
Xiao Hu, S. R. Florez, A. Gepperth
Reliable road detection is a key issue for modern intelligent vehicles, since it helps identify the drivable area and boosts other perception functions such as object detection. However, real environments present several challenges, such as illumination changes and varying weather conditions. We propose a multi-modal road detection and segmentation method based on monocular images and HD multi-layer LIDAR data (3D point clouds). The algorithm consists of three stages: extraction of ground points from the multi-layer LIDAR, transformation of the color camera information to an illumination-invariant representation, and segmentation of the road area. The first module extracts ground points from the LIDAR data: road boundaries are detected by histogram analysis, a plane is estimated using RANSAC, and ground points are extracted according to their point-to-plane distance. In the second module, an illumination-invariant image representation is computed simultaneously. Ground points are projected onto the image plane and used to compute a road probability map with a Gaussian model. Combining these modalities improves the robustness of the whole system and reduces the overall computation time, since the first two modules can run in parallel. Quantitative experiments carried out on the public KITTI dataset, enhanced with road annotations, confirm the effectiveness of the proposed method.
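The RANSAC plane-estimation stage can be sketched as follows, assuming plain 3D points; the threshold and iteration count are placeholders, not the paper's settings:

```python
import random

def fit_plane(p1, p2, p3):
    """Plane n.x + d = 0 through three points; None if they are collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-12:
        return None
    n = [c / norm for c in n]
    d = -sum(n[i] * p1[i] for i in range(3))
    return n, d

def ransac_plane(points, iters=200, tol=0.05, seed=0):
    """Keep the plane (from random 3-point samples) with the most inliers."""
    rng = random.Random(seed)
    best = (None, [])
    for _ in range(iters):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < tol]
        if len(inliers) > len(best[1]):
            best = (plane, inliers)
    return best
```

The inlier set then plays the role of the ground points that get projected into the image in the second module.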
Results of initial test and evaluation of a Driver-Assistive Truck Platooning prototype
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856585
R. Bishop, D. Bevly, Joshua Switkes, Lisa Park
This paper describes results to date of a project to prototype, evaluate, and test Driver-Assistive Truck Platooning (DATP), which could have significant positive safety and fuel savings potential for heavy truck operations. The project is led by Auburn University and funded within the Federal Highway Administration Exploratory Advanced Research program. This paper provides selected results from Phase One, which is currently exploring a range of technical and non-technical issues, including assessing real-world business and operational issues within the trucking industry. Specific technical sections address sensing and computing hardware; driver interface; sensor and actuator software and interfacing; control software; and operational environment.
Lanelets: Efficient map representation for autonomous driving
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856487
Philipp Bender, Julius Ziegler, C. Stiller
In this paper we propose a highly detailed map for autonomous driving. We introduce the notion of lanelets to represent the drivable environment in both its geometrical and topological aspects. Lanelets are atomic, interconnected drivable road segments which may carry additional data describing the static environment. We describe the map specification, an example creation process, and the access library libLanelet, which is available for download. Building on the map, we briefly describe our behavioural layer (which we call behaviour generation), which heavily exploits the proposed map structure. Both contributions were used throughout the autonomous journey of the Mercedes Benz S 500 Intelligent Drive along the Bertha Benz Memorial Route in summer 2013.
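A minimal rendition of the lanelet idea (atomic road segments with left/right bounds and successor links, routed over by graph search) might look like this; the field names are assumptions for illustration, not the libLanelet API:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Lanelet:
    id: int
    left_bound: list    # polyline of (x, y) vertices (geometry)
    right_bound: list   # polyline of (x, y) vertices (geometry)
    successors: list    # ids of lanelets drivable from the end (topology)

def route(lanelets, start_id, goal_id):
    """Breadth-first search over lanelet connectivity."""
    by_id = {l.id: l for l in lanelets}
    frontier = deque([[start_id]])
    seen = {start_id}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal_id:
            return path
        for nxt in by_id[path[-1]].successors:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None
```

Keeping geometry (the bounds) and topology (the successor graph) in one atomic unit is what lets a behavioural layer query both with a single structure.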
On modeling ego-motion uncertainty for moving object detection from a mobile platform
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856422
Dingfu Zhou, V. Fremont, B. Quost, Bihao Wang
In this paper, we propose an effective approach to moving object detection based on modeling the ego-motion uncertainty and on graph-cut based motion segmentation. First, the relative camera pose is estimated by minimizing the sum of reprojection errors, and its covariance matrix is calculated using a first-order error propagation method. Next, a motion likelihood for each pixel is obtained by propagating the ego-motion uncertainty to the Residual Image Motion Flow (RIMF). Finally, the motion likelihood and the depth gradient are used in a graph-cut based approach as the region and boundary terms, respectively, to obtain the moving object segmentation. Experimental results on real-world data show that our approach can detect dynamic objects that move in the epipolar plane or are partially occluded in complex urban traffic scenes.
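The first-order propagation step rests on the standard linearized rule Sigma_out = J Sigma J^T, where J is the Jacobian of the mapping from pose parameters to the quantity of interest. A generic sketch (the tiny 2x2 example is illustrative, not the paper's pose parametrization):

```python
def mat_mul(A, B):
    """Dense matrix product of lists-of-rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def propagate_covariance(J, Sigma):
    """First-order uncertainty propagation: Sigma_out = J * Sigma * J^T."""
    return mat_mul(mat_mul(J, Sigma), transpose(J))

# A direction amplified by the mapping (the 2 on the diagonal) has its
# variance amplified quadratically (1 -> 4) in the output covariance.
Sigma_out = propagate_covariance([[1.0, 0.0], [0.0, 2.0]],
                                 [[1.0, 0.0], [0.0, 1.0]])
```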