Study on lane boundary detection in night scene
Xinyu Zhang, Zhong-ke Shi
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164335
Lane boundary detection is a key component of driver assistance systems that aim to keep drivers safe. This paper discusses a method for night-time traffic scenes that combines lane edge characteristics with lane brightness. First, images are preprocessed with a dual-thresholding algorithm in the green channel. Then, edges are detected by a fast method based on a single-direction gradient operator. Finally, noise sources such as vehicle headlights, reflected light, and street lamps are removed with a filter template. Experimental results indicate that the proposed approach is well suited to night conditions.
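The green-channel dual-thresholding step can be sketched as follows; the data layout (rows of RGB tuples) and the threshold values are illustrative assumptions, since the abstract does not specify the paper's actual rule:

```python
def dual_threshold_green(rgb_rows, t_low=180, t_high=255):
    """Keep pixels whose green value lies in [t_low, t_high].

    rgb_rows is a list of rows of (r, g, b) tuples.  The threshold
    values here are illustrative, not the paper's.
    """
    return [[255 if t_low <= g <= t_high else 0 for (_, g, _) in row]
            for row in rgb_rows]
```

In the paper's night-scene setting, the lower threshold would presumably be tuned to separate bright lane markings from the dark road surface.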
Automatic calibration of an in-vehicle gaze tracking system using driver's typical gaze behavior
Kenji Yamashiro, Daisuke Deguchi, Tomokazu Takahashi, I. Ide, H. Murase, K. Higuchi, T. Naito
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164417
Many studies have measured and used a driver's gaze direction to prevent traffic accidents caused by inattentive driving, failure to confirm safe conditions, and other driver errors. A calibration process is needed for a gaze tracking system to measure gaze directions correctly. However, existing calibration methods require the driver to gaze at specified points before driving. In this paper, we propose a method for automatically calibrating an in-vehicle gaze tracking system by analyzing the driver's typical gaze behavior. The proposed method uses the rear-view and side-view mirror positions as reference points. Its effectiveness is demonstrated by experiments measuring gaze directions in actual road environments.
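The idea of using mirror positions as calibration references can be illustrated with a minimal sketch: match measured fixations to the nearest known mirror direction and average the residuals into a constant angular bias. The matching rule, the (yaw, pitch)-in-degrees representation, and the tolerance are all assumptions, not the paper's method:

```python
def estimate_gaze_offset(gaze_fixations, mirror_refs, tol=10.0):
    """Estimate a constant angular bias from fixations near known mirrors.

    gaze_fixations and mirror_refs are (yaw, pitch) pairs in degrees;
    fixations farther than tol degrees from every mirror are ignored.
    """
    dx, dy, n = 0.0, 0.0, 0
    for gx, gy in gaze_fixations:
        mx, my = min(mirror_refs,
                     key=lambda m: (m[0] - gx) ** 2 + (m[1] - gy) ** 2)
        if abs(mx - gx) <= tol and abs(my - gy) <= tol:
            dx += mx - gx
            dy += my - gy
            n += 1
    return (dx / n, dy / n) if n else (0.0, 0.0)
```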
Investigating the relationships between gaze patterns, dynamic vehicle surround analysis, and driver intentions
A. Doshi, M. Trivedi
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164397
Recent advances in driver behavior analysis for active safety have made it possible to reliably predict certain driver intentions. Specifically, researchers have developed advanced driver assistance systems that estimate a driver's intention to change lanes, make an intersection turn, or brake, several seconds before the act itself. One integral feature of these systems is the analysis of the driver's visual search prior to a maneuver, using head pose and eye gaze as proxies for focus of attention. However, it is not clear whether visual distractions during a goal-oriented visual search could change the driver's behavior and thereby degrade the performance of such behavior analysis systems. In this paper we examine whether computer vision can feasibly determine that a driver's visual search was affected by an external stimulus. A holistic ethnographic driving dataset is used to generate a motion-based visual saliency map of the scene, which is correlated with recorded eye gaze data in situations where the driver intends to change lanes. Results demonstrate that this methodology can improve driver attention and behavior estimation, as well as intent prediction.
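One simple way to quantify the gaze/saliency correlation the authors describe is to compare saliency sampled at gaze points against a baseline of randomly sampled points; this ratio statistic is a hypothetical illustration, not the paper's measure:

```python
import statistics

def gaze_saliency_ratio(saliency_at_gaze, saliency_baseline):
    """Ratio of mean saliency at gaze points to a random-sample baseline.

    A ratio well above 1 suggests gaze was drawn toward salient motion
    (a possible external distraction).  Sampling scheme is an assumption.
    """
    g = statistics.mean(saliency_at_gaze)
    b = statistics.mean(saliency_baseline) or 1e-9  # avoid divide-by-zero
    return g / b
```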
Guaranteed state estimation tuning for real time applications
E. Seignez, A. Lambert
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164320
Estimating the configuration of a vehicle is crucial for navigation. The most common approaches are (extended) Kalman filtering and Markov localization, often implemented via particle filtering. Interval analysis enables an alternative approach: bounded-error localization. Unlike classical extended Kalman filtering, it allows global localization, and unlike Markov localization it provides guaranteed results, in the sense that it computes a set containing all configurations consistent with the data and the hypotheses. This paper describes bounded-error localization algorithms, presents a complexity study, and shows how to achieve a real-time implementation.
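The guaranteed-set idea can be illustrated with a coarse grid sketch: keep every candidate configuration whose distance to each beacon is consistent with the measured range within its error bound. A real bounded-error localizer would contract interval boxes with interval analysis (e.g., SIVIA-style set inversion) rather than enumerate a grid; the beacons and bounds below are invented for illustration:

```python
import math

def bounded_error_localization(beacons, ranges, err, grid, step):
    """Return all grid cells consistent with every range measurement.

    A cell (x, y) is kept iff |dist((x, y), beacon_i) - range_i| <= err
    for all i, so the returned set contains the true configuration
    whenever the error bound holds.  Grid enumeration stands in for
    interval-box contraction.
    """
    (xmin, xmax), (ymin, ymax) = grid
    consistent = []
    x = xmin
    while x <= xmax:
        y = ymin
        while y <= ymax:
            if all(abs(math.hypot(x - bx, y - by) - r) <= err
                   for (bx, by), r in zip(beacons, ranges)):
                consistent.append((x, y))
            y += step
        x += step
    return consistent
```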
A system for road sign detection, recognition and tracking based on multi-cues hybrid
Wei Liu, Xue Chen, Bobo Duan, Hui Dong, Pengyu Fu, Huai Yuan, Hong Zhao
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164339
This paper presents a road sign detection, recognition, and tracking system based on a hybrid of multiple cues. In the detection stage, color and gradient cues are used to segment regions of interest, and corner and geometric cues are used to detect the signs. A pseudo RGB-HSI conversion method that requires no nonlinear transformation is presented for color extraction. In the recognition stage, a coarse classification is performed using the correspondence between color and shape, and then support vector machines with a binary tree architecture are built to recognize each category of road sign. Furthermore, we present a finite-state machine that decides whether a road sign has really been recognized by fusing multi-frame recognition results. To reduce recognition errors, a Lucas-Kanade feature tracker is introduced for road sign tracking. Experimental results under different conditions, including sunny, cloudy, and rainy weather, demonstrate that most road signs can be correctly detected and recognized with high accuracy at approximately 15 frames per second on a standard PC.
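The finite-state, multi-frame confirmation idea can be sketched as a simple consecutive-agreement rule; the paper's actual state machine may differ, and `k` is an assumed parameter:

```python
def confirm_over_frames(per_frame_labels, k=3):
    """Confirm a sign once the same label appears in k consecutive frames.

    per_frame_labels holds one recognized label (or None) per frame;
    returns the confirmed label, or None if nothing is confirmed.
    """
    run, prev = 0, None
    for lab in per_frame_labels:
        run = run + 1 if lab == prev else 1
        prev = lab
        if lab is not None and run >= k:
            return lab
    return None
```

Fusing decisions over frames this way suppresses one-off misclassifications, which is why the tracker is needed to associate the same physical sign across frames.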
Active learning based robust monocular vehicle detection for on-road safety systems
Sayanan Sivaraman, M. Trivedi
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164311
This paper presents a framework that uses active learning to train a robust monocular on-road vehicle detector for active safety, based on Adaboost classification and Haar-like rectangular image features. An initial vehicle detector, trained with Adaboost and Haar-like features, was very susceptible to false positives. This detector was run on an independent highway dataset, and its true detections and false positives were stored to obtain a selectively sampled training set for the active learning iteration. Various configurations of the newly trained classifier were tested to explore the trade-off between detection rate and false detection rate. Experimental results show that this method yields a vehicle classifier with a high detection rate and a low false detection rate on real data, a valuable addition to environmental awareness for intelligent active safety systems in vehicles.
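The selective-sampling loop can be sketched with a toy 1-D stand-in for the Adaboost detector: train, run on independent data, label each detection (true detection vs. false positive), and retrain on the union. The threshold "classifier" and all names are illustrative, not the paper's cascade:

```python
def train_threshold(samples):
    """Toy stand-in for Adaboost training: pick the score threshold
    that best separates labeled positives from negatives."""
    best_t, best_acc = 0.0, -1.0
    for s, _ in samples:
        acc = sum((sc >= s) == lab for sc, lab in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = s, acc
    return best_t

def active_learning_round(labeled, unlabeled_scores, oracle):
    """One selective-sampling iteration: detect on independent data,
    label the detections via the oracle (true detection vs. false
    positive), and retrain on the enlarged set."""
    t = train_threshold(labeled)
    harvested = [(s, oracle(s)) for s in unlabeled_scores if s >= t]
    return train_threshold(labeled + harvested)
```

The key property is that the harvested false positives are exactly the hard negatives the initial detector gets wrong, so retraining targets its weaknesses.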
Design of vehicle yaw stability controller based on model predictive control
Hongliang Zhou, Zhiyuan Liu
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164381
The yaw stability of a vehicle during steering maneuvers is critical to its stability and handling performance. In this paper, a yaw stability controller based on model predictive control is designed on the principle of active differential braking. Using a simple 6-DoF linear vehicle model, the proposed controller handles brake torque constraints and the over-actuation problem in vehicle yaw stability control while accounting for nonlinear tire characteristics. Simulations in a professional vehicle dynamics tool show that the controller can compute a reasonable brake torque for the most effective wheel in a moving-horizon manner and keep the vehicle yaw-stable.
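The moving-horizon torque selection can be caricatured with a one-state yaw model: enumerate admissible brake torques, simulate a short horizon, and keep the minimizer of the yaw-rate tracking cost. The single-state dynamics and every parameter below are invented stand-ins for the paper's 6-DoF model and constrained MPC solver:

```python
def mpc_brake_torque(yaw_rate, yaw_ref, horizon=5, dt=0.01,
                     gain=0.003, t_max=1500.0):
    """Pick a corrective brake torque by brute-force horizon search.

    Candidates respect the constraint |T| <= t_max; dynamics are a
    toy damped yaw-rate model, not the paper's vehicle model.
    """
    best_t, best_cost = 0.0, float("inf")
    for torque in [t_max * i / 10.0 for i in range(-10, 11)]:
        r, cost = yaw_rate, 0.0
        for _ in range(horizon):
            r = r + dt * (gain * torque - 0.5 * r)  # illustrative dynamics
            cost += (r - yaw_ref) ** 2  # quadratic tracking cost
        if cost < best_cost:
            best_t, best_cost = torque, cost
    return best_t
```

In a receding-horizon scheme only the first torque of the optimized sequence is applied, and the optimization is re-run at the next sample.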
Visual object categorization with new keypoint-based adaBoost features
Taoufik Bdiri, F. Moutarde, B. Steux
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164310
We present promising results for visual object categorization obtained with adaBoost using new original “keypoint-based features”. These weak classifiers produce a boolean response based on the presence or absence, in the tested image, of a “keypoint” (a kind of SURF interest point) whose descriptor is sufficiently similar (i.e., within a given distance) to a reference descriptor characterizing the feature. A first experiment on a public image dataset of laterally viewed cars yielded 95% recall with 95% precision on the test set. Preliminary tests on a small subset of a pedestrian database also give a promising 97% recall with 92% precision, which shows the generality of our new family of features. Moreover, analysis of the positions of the adaBoost-selected keypoints shows that they correspond to specific parts of the object category (such as the “wheel” or “side skirt” of laterally viewed cars) and thus carry “semantic” meaning. We also made a first test on video, detecting vehicles from adaBoost-selected keypoints filtered in real time from all detected keypoints.
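The boolean weak-classifier response described above reduces to a nearest-descriptor distance test, which can be sketched directly (Euclidean distance is an assumption; the abstract only says "sufficiently similar"):

```python
import math

def keypoint_feature_response(descriptors, ref_descriptor, dist_thresh):
    """Weak-classifier response: True iff some detected keypoint
    descriptor lies within dist_thresh of the reference descriptor."""
    for d in descriptors:
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(d, ref_descriptor)))
        if dist < dist_thresh:
            return True
    return False
```

AdaBoost would then select the (reference descriptor, threshold) pairs whose responses best discriminate the training labels, which is how the selected keypoints end up anchored to semantic parts like wheels.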
Kalman Particle Filter for lane recognition on rural roads
H. Loose, U. Franke, C. Stiller
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164253
Despite the availability of lane departure and lane keeping systems for highway assistance, unmarked and winding rural roads still pose challenges to lane recognition systems. To detect an upcoming curve as early as possible, the viewing range of image-based lane recognition systems has to be extended. In this paper, this is done by evaluating 3D information obtained from stereo vision or imaging radar. Both sensors deliver evidence grids as the basis for road course estimation. Besides the known Kalman filter approaches, particle filters have recently gained interest, since they can exploit road cues that cannot be expressed as the measurements a Kalman filter requires. We propose to combine both principles, and their benefits, in a Kalman Particle Filter. A comparison on real-world data between this recently published filter scheme and the classical approaches demonstrates the advantages of the Kalman Particle Filter.
Scenario-driven search for pedestrians aimed at triggering non-reversible systems
A. Broggi, Pietro Cerri, Luca Gatti, P. Grisleri, H. Jung, Junhee Lee
2009 IEEE Intelligent Vehicles Symposium | Pub Date: 2009-06-03 | DOI: 10.1109/IVS.2009.5164292
This paper presents the results of an innovative approach to pedestrian detection for automotive applications in which a non-reversible system is triggered; the aim is therefore to reach a very low false detection rate, ideally zero, by searching for pedestrians only in specific areas.