Robust and continuous estimation of driver gaze zone by dynamic analysis of multiple face videos
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856607
Ashish Tawari, M. Trivedi
Analysis of a driver's head behavior is an integral part of a driver monitoring system. The driver's coarse gaze direction, or gaze zone, is an important cue for understanding driver state. Many existing gaze zone estimators are, however, limited to a single camera perspective, which is vulnerable to occlusion of facial features during spatially large head movements away from the frontal pose. Non-frontal glances away from the driving direction, though, are of special interest, since events critical to driver safety occur during those times. In this paper, we present a distributed camera framework for gaze zone estimation that uses head pose dynamics to operate robustly and continuously even during large head movements. For experimental evaluation, we collected a dataset from naturalistic on-road driving on urban streets and freeways. A human expert provided the gaze zone ground truth using all available visual information, including the eyes and the surrounding context. Our emphasis is on understanding the efficacy of head pose dynamics in predicting this eye-gaze-based zone ground truth. We conducted several experiments in designing the dynamic features and compared the performance against a static head-pose-based approach. The analyses show that dynamic information significantly improves the results.
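To make the role of the dynamic features concrete, here is a minimal sketch of how a dynamic head-pose feature vector might be assembled from a short window of (yaw, pitch, roll) estimates; the window length and the particular statistics are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def dynamic_head_pose_features(pose_window):
    """Build a dynamic feature vector from a short window of head pose
    angles (yaw, pitch, roll), shape (T, 3).

    The paper compares static (instantaneous) head pose against dynamic
    information; the concrete feature set below (latest angles plus
    velocity statistics and sweep range) is an assumption for
    illustration only.
    """
    pose_window = np.asarray(pose_window, dtype=float)
    current = pose_window[-1]                 # static part: latest pose
    velocity = np.diff(pose_window, axis=0)   # frame-to-frame dynamics
    return np.concatenate([
        current,
        velocity.mean(axis=0),
        velocity.std(axis=0),
        pose_window.max(axis=0) - pose_window.min(axis=0),  # sweep range
    ])

# Example: 10 synthetic frames of (yaw, pitch, roll) during a glance.
window = np.cumsum(np.random.randn(10, 3) * 2.0, axis=0)
print(dynamic_head_pose_features(window).shape)  # (12,)
```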
{"title":"Robust and continuous estimation of driver gaze zone by dynamic analysis of multiple face videos","authors":"Ashish Tawari, M. Trivedi","doi":"10.1109/IVS.2014.6856607","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856607","url":null,"abstract":"Analysis of driver's head behavior is an integral part of driver monitoring system. Driver's coarse gaze direction or gaze zone is a very important cue in understanding driver-state. Many existing gaze zone estimators are, however, limited to single camera perspectives, which are vulnerable to occlusions of facial features from spatially large head movements away from the frontal pose. Non-frontal glances away from the driving direction, though, are of special interest as interesting events, critical to driver safety, occur during those times. In this paper, we present a distributed camera framework for gaze zone estimation using head pose dynamics to operate robustly and continuously even during large head movements. For experimental evaluations, we collected a dataset from naturalistic on-road driving in urban streets and freeways. A human expert provided the gaze zone ground truth using all vision information including eyes and surround context. Our emphasis is to understand the efficacy of the head pose dynamic information in predicting eye-gaze-based zone ground truth. We conducted several experiments in designing the dynamic features and compared the performance against static head pose based approach. Analyses show that dynamic information significantly improves the results.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128866217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crowdsourced intersection parameters: A generic approach for extraction and confidence estimation
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856591
Christian Ruhhammer, N. Hirsenkorn, F. Klanner, C. Stiller
Digital maps within cars are the basis not only for navigation but also for advanced driver assistance systems. These systems require ever more up-to-date details about the vehicle's environment, which means the maps have to be enriched with further attributes such as detailed representations of intersections. In the future, such environmental details can be extracted from the sensory data of connected cars. We present a generic approach for extracting multiple intersection parameters with the same method by analyzing logged data from a test fleet. Building on this, a method for feature-based estimation of the confidence is introduced. The proposed approaches are applied in a fully automated process to estimate stop line positions and traffic flows at signalized intersections. Altogether, 203,701 traces from the test fleet were used for development and testing. The performance of the method and of the confidence estimation was analyzed against a ground truth of 108 stop line positions derived from satellite images. The results show that the approach is fast and achieves predictions with an absolute accuracy of 3.5 m. Hence, the method can deliver valuable inputs for driver assistance systems.
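As a rough illustration of the crowdsourcing idea, the sketch below estimates a stop line position from the halting points of many traces and attaches a simple confidence heuristic; both the percentile estimator and the confidence formula are assumptions for illustration, not the paper's feature-based estimators.

```python
import numpy as np

def estimate_stop_line(stop_positions_m, q=90):
    """Estimate a stop line position from the stopping events of many
    traces. stop_positions_m holds distances (metres along the approach
    lane) at which individual vehicles halted; a high percentile
    approximates the foremost usual stopping point. The percentile
    choice is an assumption, not the paper's exact estimator."""
    return float(np.percentile(np.asarray(stop_positions_m), q))

def confidence(stop_positions_m):
    """Toy confidence: more samples and lower spread give higher
    confidence. The paper derives confidence from learned features;
    this heuristic is only a placeholder."""
    x = np.asarray(stop_positions_m)
    return len(x) / (len(x) + 1.0) / (1.0 + x.std())

stops = np.random.normal(loc=42.0, scale=2.0, size=200)
print(estimate_stop_line(stops), confidence(stops))
```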
{"title":"Crowdsourced intersection parameters: A generic approach for extraction and confidence estimation","authors":"Christian Ruhhammer, N. Hirsenkorn, F. Klanner, C. Stiller","doi":"10.1109/IVS.2014.6856591","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856591","url":null,"abstract":"Digital maps within cars are not only the basis for navigation but also for advanced driver assistance systems. Therefore more and more up-to-date details about the environment of the vehicle are required which means that they have to be enriched with further attributes such as detailed representations of intersections. In the future we will be able to extract details of the environment out of the sensory data of connected cars. We present a generic approach for extracting multiple intersection parameters with the same method by analyzing logged data from a test fleet. Based on that a method for a feature based estimation of the confidence is introduced. The proposed approaches are applied in a completely automated process to estimate stop line positions and traffic flows at intersections with traffic lights. Altogether 203.701 traces of the test fleet were used for developing and testing. The performance of the method and the confidence estimation were analyzed using a ground truth, consisting of 108 stop line positions, which was derived from satellite images. The results show that the approach is fast and predictions with an absolute accuracy of 3.5m can be achieved. Hence the method is able to deliver valuable inputs for driver assistance systems.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124808420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extrinsic calibration of a fisheye multi-camera setup using overlapping fields of view
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856403
Moritz Knorr, José Esparza, W. Niehsen, C. Stiller
It is well known that the robustness of many computer vision algorithms can be improved by employing large field-of-view cameras, such as omnidirectional cameras. To avoid obstructions in the field of view, such cameras need to be mounted in an exposed position. Alternatively, a multi-camera setup can be used; however, this requires the extrinsic calibration to be known. In the present work, we propose a method to calibrate a fisheye multi-camera rig mounted on a mobile platform. The method relies only on feature correspondences from the pairwise overlapping fields of view of adjacent cameras. In contrast to existing approaches, motion estimation or specific motion patterns are not required. To compensate for the large extent of multi-camera setups and the corresponding viewpoint variations, as well as the geometric distortions caused by fisheye lenses, captured images are mapped into virtual camera views such that corresponding image regions coincide. To this end, the scene geometry is approximated by the ground plane in close proximity and by infinitely far away objects elsewhere. As a result, low-complexity feature detectors and matchers can be employed. The approach is evaluated using a setup of four rigidly coupled and synchronized wide-angle fisheye cameras attached to the four sides of a mobile platform. The cameras have pairwise overlapping fields of view and baselines between 2.25 and 3 meters.
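The warping into virtual views rests on the classic plane-induced homography. The helper below shows that formula for the ground-plane case under stated conventions; it is a sketch of the underlying geometry, not the paper's full fisheye rectification pipeline.

```python
import numpy as np

def ground_plane_homography(K_src, K_dst, R, t, n, d):
    """Homography induced by the plane {X : n.X = d} (expressed in the
    source camera frame) between two pinhole views related by
    X_dst = R @ X_src + t. For points on the plane, n.X/d = 1, so
    X_dst = (R + t n^T / d) X_src, which projects to the homography
    below. Interface and conventions are illustrative assumptions."""
    H = K_dst @ (R + np.outer(t, n) / d) @ np.linalg.inv(K_src)
    return H / H[2, 2]

# Example: two identical virtual cameras 1 m apart, viewing the ground
# plane 1.5 m below (plane normal along the camera y-axis).
K = np.array([[400., 0., 320.], [0., 400., 240.], [0., 0., 1.]])
H = ground_plane_homography(K, K, np.eye(3), np.array([1., 0., 0.]),
                            n=np.array([0., 1., 0.]), d=1.5)
print(H.round(3))
```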
{"title":"Extrinsic calibration of a fisheye multi-camera setup using overlapping fields of view","authors":"Moritz Knorr, José Esparza, W. Niehsen, C. Stiller","doi":"10.1109/IVS.2014.6856403","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856403","url":null,"abstract":"It is well known that the robustness of many computer vision algorithms can be improved by employing large field of view cameras, such as omnidirectional cameras. To avoid obstructions in the field of view, such cameras need to be mounted in an exposed position. Alternatively, a multicamera setup can be used. However, this requires the extrinsic calibration to be known. In the present work, we propose a method to calibrate a fisheye multi-camera rig, mounted on a mobile platform. The method only relies on feature correspondences from pairwise overlapping fields of view of adjacent cameras. In contrast to existing approaches, motion estimation or specific motion patterns are not required. To compensate for the large extent of multi-camera setups and corresponding viewpoint variations, as well as geometrical distortions caused by fisheye lenses, captured images are mapped into virtual camera views such that corresponding image regions coincide. To this end, the scene geometry is approximated by the ground plane in close proximity and by infinitely far away objects elsewhere. As a result, low complexity feature detectors and matchers can be employed. The approach is evaluated using a setup of four rigidly coupled and synchronized wide angle fisheye cameras that were attached to four sides of a mobile platform. The cameras have pairwise overlapping fields of view and baselines between 2.25 and 3 meters.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128772235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel method for extrinsic parameters estimation between a single-line scan LiDAR and a camera
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856408
Pakapoj Tulsuk, Panu Srestasathiern, M. Ruchanurucks, T. Phatrapornnant, H. Nagahashi
This paper presents a novel method for extrinsic parameter estimation between a single-line scan LiDAR and a camera. Using a checkerboard, the calibration setup is simple and practical. In particular, the proposed calibration method is based on resolving the geometry of the checkerboard that is visible to both the camera and the LiDAR. The calibration setup geometry is described by planes, lines, and points. Our novelty is a new geometric constraint: the orthogonal distances between the LiDAR points and the line formed by the intersection of the checkerboard plane and the LiDAR scan plane. To evaluate the performance of the proposed method, we compared it with the state-of-the-art method of Zhang and Pless [1]. The experimental results showed that the proposed method yields better results.
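The core constraint can be written as a point-to-line distance residual. The sketch below assumes the intersection line has already been recovered from the checkerboard pose; the cost function and solver details of the actual method may differ.

```python
import numpy as np

def point_to_line_distance(p, line_point, line_dir):
    """Orthogonal distance from a 3-D point to a line given by a point
    on the line and a direction vector."""
    v = np.asarray(p, dtype=float) - np.asarray(line_point, dtype=float)
    d = np.asarray(line_dir, dtype=float)
    d = d / np.linalg.norm(d)
    return np.linalg.norm(v - (v @ d) * d)

def calibration_residuals(R, t, lidar_points, line_point, line_dir):
    """Residuals for the paper's constraint: LiDAR points, mapped into
    the camera frame by the extrinsics (R, t), should lie on the line
    where the checkerboard plane meets the LiDAR scan plane. Minimizing
    these distances (e.g. with scipy.optimize.least_squares over a
    parameterization of R and t) sketches the idea; the paper's exact
    cost and solver may differ."""
    pts_cam = (R @ np.asarray(lidar_points, dtype=float).T).T + t
    return np.array([point_to_line_distance(p, line_point, line_dir)
                     for p in pts_cam])

pts = np.array([[1.0, 0.1, 2.0], [2.0, -0.1, 2.0]])
print(calibration_residuals(np.eye(3), np.zeros(3), pts,
                            line_point=np.zeros(3),
                            line_dir=np.array([1., 0., 0.])))
```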
{"title":"A novel method for extrinsic parameters estimation between a single-line scan LiDAR and a camera","authors":"Pakapoj Tulsuk, Panu Srestasathiern, M. Ruchanurucks, T. Phatrapornnant, H. Nagahashi","doi":"10.1109/IVS.2014.6856408","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856408","url":null,"abstract":"This paper presents a novel method for extrinsic parameters estimation of a single line scan LiDAR and a camera. Using a checkerboard, the calibration setup is simple and practical. Particularly, the proposed calibration method is based on resolving geometry of the checkerboard that visible to the camera and the LiDAR. The calibration setup geometry is described by planes, lines and points. Our novelty is a new hypothesis of the geometry which is the orthogonal distances between LiDAR points and the line from the intersection between the checkerboard and LiDAR scan plane. To evaluate the performance of the proposed method, we compared our proposed method with the state of the art method i.e. Zhang and Pless [1]. The experimental results showed that the proposed method yielded better results.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127017378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-parametric lane estimation in urban environments
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856551
Johannes Beck, C. Stiller
Lane estimation for the ego vehicle plays a key role in navigating a car through unknown areas. In fact, solving this problem is a prerequisite for any vehicle driving autonomously in previously unmapped areas. Most of the proposed methods for lane detection are tuned for freeways and rural environments. In urban scenarios, however, they are unable to reliably detect the ego lane in many situations. Often, these methods simply fit a parametric model to lane markers. Since a large variety of lane shapes is found in urban environments, such models are too restrictive. Moreover, the complex structure of intersection-like situations further hampers the success of the aforementioned methods. We therefore propose a non-parametric lane model that can handle a wide range of different features such as grass verges, free space, and lane markers. The ego lane estimation is formulated as a shortest path problem: a directed acyclic graph is constructed from the feature pool, rendering the problem efficiently solvable. The proposed approach is easily extendable, as it can jointly cope with pixel-wise low-level features as well as high-level ones. We demonstrate the potential of our method in urban and rural areas and present experimental findings on difficult real-world data sets.
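Once the directed acyclic graph is built, the ego lane follows from a standard single-source shortest path computed in topological order. A minimal sketch, with the feature-derived edge costs (the paper's actual contribution) left to the caller:

```python
import math
from collections import defaultdict

def dag_shortest_path(nodes, edges, source, target):
    """Shortest path in a DAG by relaxation in topological order.
    `nodes` is assumed to be topologically sorted (e.g. lane hypotheses
    ordered by longitudinal position); `edges` holds (u, v, cost)
    triples whose costs would encode agreement with markings, grass
    verge, free space, etc."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
    dist = {n: math.inf for n in nodes}
    prev = {}
    dist[source] = 0.0
    for u in nodes:                      # relax in topological order
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                prev[v] = u
    path, n = [], target
    while n != source:                   # backtrack the optimal lane
        path.append(n)
        n = prev[n]
    path.append(source)
    return list(reversed(path)), dist[target]

# Toy graph: two candidate lane hypotheses between source 0 and target 3.
print(dag_shortest_path([0, 1, 2, 3],
                        [(0, 1, 1.0), (0, 2, 2.0), (1, 3, 2.5), (2, 3, 0.5)],
                        0, 3))  # ([0, 2, 3], 2.5)
```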
{"title":"Non-parametric lane estimation in urban environments","authors":"Johannes Beck, C. Stiller","doi":"10.1109/IVS.2014.6856551","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856551","url":null,"abstract":"Lane estimation of the ego vehicle plays a key role in navigating a car through unknown areas. In fact, solving this problem is a prerequisite for any vehicle driving autonomously in previously unmapped areas. Most of the proposed methods for lane detection are tuned for freeways and rural environments. In urban scenarios, however, they are unable to reliably detect the ego lane in many situations. Often, these methods simply work on the principle of fitting a parametric model to lane markers. Since a large variety of lane shapes are found in urban environments, it is obvious that these models are too restrictive. Moreover, the complex structure of intersection-like situations further hampers the success of the aforementioned methods. Therefore we propose a non-parametric lane model which can handle a wide range of different features such as grass verge, free space, lane markers etc. The ego lane estimation is formulated as a shortest path problem. A directed acyclic graph is constructed from the feature pool rendering it efficiently solvable. The proposed approach is easily extendable as it is able to cope with pixel-wise low level features as well as highlevel ones jointly. We demonstrate the potential of our method in urban and rural areas and present experimental findings on difficult real world data sets.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125880073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DIRD is an illumination robust descriptor
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856421
Henning Lategahn, Johannes Beck, C. Stiller
Many robotics applications nowadays use cameras for various tasks such as place recognition, localization, and mapping. These methods heavily depend on image descriptors. A plethora of descriptors have recently been introduced, but hardly any address the problem of illumination robustness. Herein we introduce an illumination-robust image descriptor which we dub DIRD (Dird is an Illumination Robust Descriptor). First, a set of Haar features is computed and the individual pixel responses are normalized to L2 unit length. Thereafter, the features are pooled over a predefined neighborhood region. The concatenation of several such features forms the basic DIRD vector. These features are then quantized to maximize entropy, allowing, among other variants, a binary version of DIRD consisting of only ones and zeros for very fast matching. We evaluate DIRD on three test sets and compare its performance with (extended) USURF, BRIEF, and a baseline gray-level descriptor. All proposed DIRD variants substantially outperform these methods, at times more than doubling the performance of USURF and BRIEF.
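A toy version of the pipeline illustrates why such a descriptor is illumination robust: gradient-like Haar responses are invariant to affine intensity changes once L2-normalized. The filter bank, pooling regions, and entropy-maximizing quantization of real DIRD are simplified here to two filters and a sign binarization.

```python
import numpy as np

def dird_like_descriptor(patch, cell=4):
    """DIRD-style sketch on a grayscale patch: Haar-like responses,
    L2 normalization, pooling over cells, sign binarization for
    Hamming matching. This is a simplification, not the real DIRD."""
    patch = patch.astype(float)
    gx = patch[:, 1:] - patch[:, :-1]   # horizontal Haar-like response
    gy = patch[1:, :] - patch[:-1, :]   # vertical Haar-like response
    feats = []
    for g in (gx, gy):
        g = g / (np.linalg.norm(g) + 1e-9)          # L2 normalization
        h = (g.shape[0] // cell) * cell
        w = (g.shape[1] // cell) * cell
        pooled = g[:h, :w].reshape(h // cell, cell,
                                   w // cell, cell).sum(axis=(1, 3))
        feats.append(pooled.ravel())
    return (np.concatenate(feats) > 0).astype(np.uint8)  # binary variant

def hamming(a, b):
    return int(np.count_nonzero(a != b))

# An affine illumination change (gain + offset) leaves the binary
# descriptor untouched: Hamming distance 0.
p = np.random.rand(16, 16)
print(hamming(dird_like_descriptor(p), dird_like_descriptor(p * 1.5 + 10)))
```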
{"title":"DIRD is an illumination robust descriptor","authors":"Henning Lategahn, Johannes Beck, C. Stiller","doi":"10.1109/IVS.2014.6856421","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856421","url":null,"abstract":"Many robotics applications nowadays use cameras for various task such as place recognition, localization, mapping etc. These methods heavily depend on image descriptors. A plethora of descriptors have recently been introduced but hardly any address the problem of illumination robustness. Herein we introduce an illumination robust image descriptor which we dub DIRD (Dird is an Illumination Robust Descriptor). First a set of Haar features are computed and individual pixel responses are normalized to L2 unit length. Thereafter features are pooled over a predefined neighborhood region. The concatenation of several such features form the basis DIRD vector. These features are then quantized to maximize entropy allowing (among others) a binary version of DIRD consisting of only ones and zeros for very fast matching. We evaluate DIRD on three test sets and compare its performance with (extended) USURF, BRIEF and a baseline gray level descriptor. All proposed DIRD variants substantially outperform these methods by times more than doubling the performance of USURF and BRIEF.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126426154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HVAC system modeling for range prediction of electric vehicles
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856500
Rhea Valentina, A. Viehl, O. Bringmann, W. Rosenstiel
The HVAC system is considered the largest auxiliary power load in electric vehicles (EVs). This paper therefore presents a detailed model of an EV HVAC system that supports a priori prediction of its energy consumption while taking the EV user's thermal comfort into account. The prediction is integrated into a navigation system, which allows the driver to enter the preferred thermal comfort parameters and advises the driver of the predicted overall energy consumption. Accepting this advice can increase the driver's awareness of the potential energy savings and lead to energy-efficient vehicle operation, extending the overall driving range.
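As a back-of-the-envelope illustration of such a prediction, the sketch below uses a first-order steady-state cabin model; all parameter values are assumptions, and the paper's model is considerably more detailed (transients, solar load, humidity, blower power, etc.).

```python
def hvac_energy_wh(t_ambient_c, t_setpoint_c, trip_minutes,
                   ua_w_per_k=300.0, cop=2.5):
    """Steady-state estimate of HVAC energy for a trip: heat flow
    through the cabin shell UA * |T_ambient - T_setpoint|, served at a
    coefficient of performance. UA and COP values are illustrative."""
    load_w = abs(t_ambient_c - t_setpoint_c) * ua_w_per_k
    electrical_w = load_w / cop
    return electrical_w * trip_minutes / 60.0

def range_penalty_km(energy_wh, consumption_wh_per_km=150.0):
    """Translate the HVAC draw into lost driving range, the quantity a
    navigation system would present to the driver."""
    return energy_wh / consumption_wh_per_km

e = hvac_energy_wh(t_ambient_c=35, t_setpoint_c=22, trip_minutes=30)
print(round(e), "Wh ->", round(range_penalty_km(e), 1), "km of range")
```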
{"title":"HVAC system modeling for range prediction of electric vehicles","authors":"Rhea Valentina, A. Viehl, O. Bringmann, W. Rosenstiel","doi":"10.1109/IVS.2014.6856500","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856500","url":null,"abstract":"The HVAC system is considered as the largest auxiliary power load in electric vehicles (EV). Therefore, this paper presents a detailed modeling of an EV-based HVAC system to support a priori prediction of HVAC system energy consumption under consideration of the EV users thermal comfort. This prediction is integrated into a navigation system to allow the driver entering the preferred parameters of thermal comfort and advising the driver about the predicted overall energy consumption. The advice acceptance might increase the awareness of the driver regarding the potential saved energy and leads to an energy-efficient vehicle operation by extending the overall driving range.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126849399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rational truck driving and its correlated driving features in extra-urban areas
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856440
C. D'Agostino, A. Saidi, Gilles Scouarnec, Liming Chen
Truck drivers typically display different behaviors when facing various driving events, e.g., approaching a roundabout, and these behaviors have a major impact on both fuel consumption and vehicle speed. In a context where fuel is increasingly a major cost center for merchandise transport companies, it is important to recognize different driver behaviors so that they can be simulated as closely to real data as possible during the truck development process. In this paper we introduce, instead of economic driving, the notion of rational driving, which seeks to decrease the average fuel consumption while respecting the transport companies' constraint, i.e., the delivery delay. We also propose an indicator, the rational driving index (RDI), which quantifies how well a driver's behavior conforms to rational driving. We then investigate various driving features that help characterize rational driver behavior, using real driving data collected from 34 different truck drivers on an extra-urban road section that is particularly representative of the travel paths of trucks performing regional merchandise distribution. Given that real driving data collected on an open road can differ in terms of environment, e.g., weather and traffic, we further study, through simulations on a digital representation of a roundabout, the impact on rational driving of two major driving features: the use of coasting and the crossing speed at roundabouts. The experimental results from both real driving data and simulations show high correlations of these two driving features with the RDI and demonstrate that a rational driver tends to decelerate slowly during braking periods (use of coasting) and to maintain a high crossing speed in roundabouts.
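In the spirit of the RDI, a toy index might trade a fuel term against a trip-time term as sketched below; the actual RDI definition, reference values, and weighting are the paper's and are not reproduced here.

```python
def rational_driving_index(fuel_l_per_100km, trip_time_s,
                           ref_fuel=30.0, ref_time=1800.0, alpha=0.5):
    """Toy score rewarding low fuel consumption while penalizing
    delivery delay. ref_fuel/ref_time are hypothetical references for
    the road section; alpha trades fuel savings against time."""
    fuel_gain = max(0.0, 1.0 - fuel_l_per_100km / ref_fuel)
    time_gain = max(0.0, 1.0 - trip_time_s / ref_time)
    return alpha * fuel_gain + (1.0 - alpha) * time_gain

# A driver who coasts into roundabouts and keeps crossing speed up
# saves fuel without losing much time:
print(rational_driving_index(fuel_l_per_100km=24.0, trip_time_s=1650.0))
```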
{"title":"Rational truck driving and its correlated driving features in extra-urban areas","authors":"C. D'Agostino, A. Saidi, Gilles Scouarnec, Liming Chen","doi":"10.1109/IVS.2014.6856440","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856440","url":null,"abstract":"Truck drivers typically display different behaviors when facing various driving events, e.g., approaching a roundabout, and thereby have a major impact both on the fuel consumption and the vehicle speed. Within the context where fuel is increasingly a major cost center for merchandise transport companies, it is important to recognize different driver behaviors in order to be able to simulate them as closely to the real data as possible during the truck development process. In this paper, we introduce, instead of economic driving, the notion of rational driving which seeks to decrease the average fuel consumption while respecting the transport companies' constraint, i.e., the delivery delay. Moreover, we also propose an indicator, namely rational driving index (RDI), which enables to quantify how good a driver behavior is with respect to the rational driving. We then investigate various driving features contributing to characterize a rational driver behavior, using real driving data collected from 34 different truck drivers on an extra-urban road section particularly representative of travel paths of trucks ensuring regional merchandise distribution. Given the fact that real driving data collected on an open road can differ in terms of environment, e.g., weather, traffic, we further study, through simulations on a digital representation of a roundabout, the impact of two major driving features, i.e., the use of coasting and crossing speed at roundabouts, with respect to rational driving. The experimental results from both real driving data and simulations show high correlations of these two driving features with respect to RDI and demonstrate that a good rational driver tends to decelerate slowly during braking periods (use of coasting) and have high crossing speed in roundabouts.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127040793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Traffic Knowledge Aided Vehicle Motion Planning Engine Based on Space Exploration Guided Heuristic Search
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856458
Chaoyong Chen, Markus Rickert, A. Knoll
A real-time vehicle motion planning engine is presented in this paper, with a focus on exploiting prior and online traffic knowledge, e.g., a predefined roadmap, prior environment information, and behaviour-based motion primitives, within the space exploration guided heuristic search (SEHS) framework. The SEHS algorithm plans a kinodynamic vehicle motion in two steps: a geometric investigation of the free space, followed by a grid-free heuristic search employing motion primitives. Both procedures are generic and can take advantage of traffic knowledge. In this paper, the space exploration is supported by a roadmap, and the heuristic search benefits from the behaviour-based primitives. Based on this idea, a lightweight motion planning engine is built to handle traffic knowledge within the time constraints of real-time motion planning. The experiments demonstrate that this SEHS motion planning engine is flexible and scalable for practical traffic scenarios, achieving better results than the baseline SEHS motion planner when the traffic knowledge is provided.
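The second SEHS stage can be pictured as a best-first search over motion primitives guided by an exploration-derived heuristic. The skeleton below leaves the heuristic and the behaviour-based primitives to the caller, so it is a structural sketch only, not the authors' implementation.

```python
import heapq
import itertools

def heuristic_search(start, goal, primitives, apply_primitive, heuristic,
                     max_expansions=100000):
    """Grid-free best-first (A*-style) search over motion primitives.
    In the paper the heuristic comes from the circle path found during
    space exploration and the primitives may be behaviour-based (e.g.
    lane following); here both are supplied by the caller."""
    tie = itertools.count()                   # tie-breaker for the heap
    open_set = [(heuristic(start), 0.0, next(tie), start, [])]
    closed = set()
    while open_set and max_expansions > 0:
        max_expansions -= 1
        _, g, _, state, plan = heapq.heappop(open_set)
        if goal(state):
            return plan
        if state in closed:
            continue
        closed.add(state)
        for prim in primitives:
            result = apply_primitive(state, prim)  # None if infeasible
            if result is None:
                continue
            nxt, cost = result
            heapq.heappush(open_set, (g + cost + heuristic(nxt), g + cost,
                                      next(tie), nxt, plan + [prim]))
    return None

# Toy usage: reach position >= 10 on a line with step primitives.
plan = heuristic_search(
    start=0,
    goal=lambda s: s >= 10,
    primitives=[1, 2, 3],
    apply_primitive=lambda s, p: (s + p, 1.0),
    heuristic=lambda s: max(0, 10 - s) / 3.0,
)
print(plan)  # e.g. [3, 3, 3, 1]
```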
{"title":"A Traffic Knowledge Aided Vehicle Motion Planning Engine Based on Space Exploration Guided Heuristic Search","authors":"Chaoyong Chen, Markus Rickert, A. Knoll","doi":"10.1109/IVS.2014.6856458","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856458","url":null,"abstract":"A real-time vehicle motion planning engine is presented in this paper, with the focus on exploiting the prior and online traffic knowledge, e.g., predefined roadmap, prior environment information, behaviour-based motion primitives, within the space exploration guided heuristic search (SEHS) framework. The SEHS algorithm plans a kinodynamic vehicle motion in two steps: a geometric investigation of the free space, followed by a grid-free heuristic search employing primitive motions. These two procedures are generic and possible to take advantage of traffic knowledge. In this paper, the space exploration is supported by a roadmap and the heuristic search benefits from the behaviour-based primitives. Based on this idea, a light weighted motion planning engine is built, with the purpose to handle the traffic knowledge and the planning time in real-time motion planning. The experiments demonstrate that this SEHS motion planning engine is flexible and scalable for practical traffic scenarios with better results than the baseline SEHS motion planner regarding the provided traffic knowledge.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132619820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision-based pedestrian detection for rear-view cameras
Pub Date: 2014-06-08 | DOI: 10.1109/IVS.2014.6856399
S. Silberstein, Dan Levi, V. Kogan, R. Gazit
We present a new vision-based pedestrian detection system for rear-view cameras that is robust to partial occlusions and non-upright poses. Detection is performed using a single automotive rear-view fisheye-lens camera. The system uses "Accelerated Feature Synthesis", a multiple-part-based detection method with state-of-the-art performance. In addition, we collected and annotated an extensive dataset of videos for this specific application, which includes pedestrians in a wide range of environmental conditions. Using this dataset, we demonstrate the benefits of part-based detection for detecting people in various poses and under occlusion. We also show, using a measure developed specifically for video-based evaluation, the gain in detection accuracy compared with template-based detection.
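Why part-based detection tolerates occlusion can be shown with a toy aggregation rule that lets the strongest part responses carry the detection; the actual Accelerated Feature Synthesis detector is far more involved, and this rule is an assumption for illustration only.

```python
import numpy as np

def part_based_score(part_scores, min_visible=3):
    """Aggregate only the strongest part responses, so a few occluded
    (low-scoring) parts do not veto the detection, unlike a monolithic
    template whose score degrades with any occlusion."""
    s = np.sort(np.asarray(part_scores, dtype=float))[::-1]
    return float(s[:min_visible].mean())

full_view = [0.9, 0.8, 0.85, 0.7, 0.75]
legs_occluded = [0.9, 0.8, 0.85, 0.05, 0.1]   # lower parts hidden
print(part_based_score(full_view), part_based_score(legs_occluded))
```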
{"title":"Vision-based pedestrian detection for rear-view cameras","authors":"S. Silberstein, Dan Levi, V. Kogan, R. Gazit","doi":"10.1109/IVS.2014.6856399","DOIUrl":"https://doi.org/10.1109/IVS.2014.6856399","url":null,"abstract":"We present a new vision-based pedestrian detection system for rear-view cameras which is robust to partial occlusions and non-upright poses. Detection is made using a single automotive rear-view fisheye lens camera. The system uses “Accelerated Feature Synthesis”, a multiple-part based detection method with state-of-the-art performance. In addition, we collected and annotated an extensive dataset of videos for this specific application which includes pedestrians in a wide range of environmental conditions. Using this dataset we demonstrate the benefits of using part-based detection for detecting people in various poses and under occlusions. We also show, using a measure developed specifically for video-based evaluation, the gain in detection accuracy compared with template-based detection.","PeriodicalId":254500,"journal":{"name":"2014 IEEE Intelligent Vehicles Symposium Proceedings","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131331716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}