
2014 IEEE Intelligent Vehicles Symposium Proceedings: Latest Publications

Robust and continuous estimation of driver gaze zone by dynamic analysis of multiple face videos
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856607
Ashish Tawari, M. Trivedi
Analysis of the driver's head behavior is an integral part of a driver monitoring system. The driver's coarse gaze direction, or gaze zone, is a very important cue for understanding driver state. Many existing gaze zone estimators, however, are limited to single-camera perspectives, which are vulnerable to occlusions of facial features during spatially large head movements away from the frontal pose. Yet non-frontal glances away from the driving direction are of special interest, since events critical to driver safety occur during those times. In this paper, we present a distributed camera framework for gaze zone estimation that uses head pose dynamics to operate robustly and continuously even during large head movements. For experimental evaluation, we collected a dataset from naturalistic on-road driving on urban streets and freeways. A human expert provided the gaze zone ground truth using all available visual information, including the eyes and surrounding context. Our emphasis is on understanding the efficacy of head pose dynamics in predicting eye-gaze-based zone ground truth. We conducted several experiments in designing the dynamic features and compared their performance against a static head-pose-based approach. Analyses show that the dynamic information significantly improves the results.
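The dynamic features described above can be illustrated with a minimal sketch: starting from a per-frame head pose time series (yaw, pitch, roll), finite-difference velocities over a short temporal window are appended to the static pose. The window size, the pose layout, and the helper name `dynamic_features` are illustrative assumptions, not the paper's exact design; a classifier would then consume the resulting feature vectors.

```python
import numpy as np

def dynamic_features(pose, window=5):
    """Augment a per-frame head pose series (N x 3: yaw, pitch, roll in
    degrees) with finite-difference velocities over a temporal window."""
    pose = np.asarray(pose, dtype=float)
    vel = np.zeros_like(pose)
    vel[window:] = (pose[window:] - pose[:-window]) / window  # deg/frame
    return np.hstack([pose, vel])  # N x 6: static pose + dynamics

# Toy example: a head turning steadily towards the left mirror
# (yaw sweeping from 0 to 30 degrees over 20 frames).
pose = np.column_stack([np.linspace(0.0, 30.0, 20), np.zeros(20), np.zeros(20)])
feats = dynamic_features(pose)
```

The positive yaw velocity in the final columns is what distinguishes an ongoing glance from a static off-center pose.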
Citations: 86
Crowdsourced intersection parameters: A generic approach for extraction and confidence estimation
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856591
Christian Ruhhammer, N. Hirsenkorn, F. Klanner, C. Stiller
Digital maps within cars are not only the basis for navigation but also for advanced driver assistance systems. These maps therefore require increasingly up-to-date details about the vehicle's environment, which means they have to be enriched with further attributes such as detailed representations of intersections. In the future, we will be able to extract such environmental details from the sensory data of connected cars. We present a generic approach for extracting multiple intersection parameters with the same method by analyzing logged data from a test fleet. Based on this, a method for feature-based confidence estimation is introduced. The proposed approaches are applied in a completely automated process to estimate stop line positions and traffic flows at intersections with traffic lights. Altogether, 203,701 traces from the test fleet were used for development and testing. The performance of the method and the confidence estimation were analyzed using a ground truth of 108 stop line positions derived from satellite images. The results show that the approach is fast and that predictions with an absolute accuracy of 3.5 m can be achieved. Hence, the method is able to deliver valuable input for driver assistance systems.
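As a rough illustration of aggregating crowdsourced traces, the sketch below pools per-trace stop positions into a stop-line estimate and a heuristic confidence score. The median/MAD aggregation and the confidence formula are hypothetical stand-ins, not the paper's feature-based estimator.

```python
import numpy as np

def estimate_stop_line(stop_positions, min_traces=10):
    """Fuse per-trace stop positions (metres along the approach lane) into
    a stop-line estimate plus a heuristic confidence score in (0, 1]."""
    s = np.asarray(stop_positions, dtype=float)
    estimate = float(np.median(s))                   # robust to outlier stops
    spread = float(np.median(np.abs(s - estimate)))  # MAD as dispersion
    # Confidence grows with the trace count, shrinks with the dispersion.
    confidence = min(1.0, len(s) / min_traces) / (1.0 + spread)
    return estimate, confidence

# 200 simulated traces stopping around the 50 m mark with 1 m noise.
rng = np.random.default_rng(0)
est, conf = estimate_stop_line(50.0 + rng.normal(0.0, 1.0, 200))
```

A real pipeline would additionally filter traces by approach lane and traffic-light state before fusing.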
Citations: 13
Extrinsic calibration of a fisheye multi-camera setup using overlapping fields of view
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856403
Moritz Knorr, José Esparza, W. Niehsen, C. Stiller
It is well known that the robustness of many computer vision algorithms can be improved by employing large-field-of-view cameras, such as omnidirectional cameras. To avoid obstructions in the field of view, such cameras need to be mounted in an exposed position. Alternatively, a multi-camera setup can be used; however, this requires the extrinsic calibration to be known. In the present work, we propose a method to calibrate a fisheye multi-camera rig mounted on a mobile platform. The method relies only on feature correspondences from the pairwise overlapping fields of view of adjacent cameras. In contrast to existing approaches, motion estimation or specific motion patterns are not required. To compensate for the large extent of multi-camera setups and the corresponding viewpoint variations, as well as the geometric distortions caused by fisheye lenses, captured images are mapped into virtual camera views such that corresponding image regions coincide. To this end, the scene geometry is approximated by the ground plane in close proximity and by infinitely far away objects elsewhere. As a result, low-complexity feature detectors and matchers can be employed. The approach is evaluated using a setup of four rigidly coupled and synchronized wide-angle fisheye cameras attached to the four sides of a mobile platform. The cameras have pairwise overlapping fields of view and baselines between 2.25 and 3 meters.
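Once correspondences from an overlapping field of view are projected onto the common ground plane, the relative pose between two cameras reduces, in a simplified planar view, to a 2-D rigid alignment. The sketch below solves that alignment with the standard SVD-based (Kabsch) least-squares solution; it is a toy stand-in for the paper's calibration pipeline, and the function name `rigid_align_2d` is an assumption.

```python
import numpy as np

def rigid_align_2d(p, q):
    """Least-squares rigid transform (R, t) with q ~ R @ p + t for matched
    2-D ground-plane points (SVD-based Kabsch solution)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mp, mq = p.mean(axis=0), q.mean(axis=0)
    H = (p - mp).T @ (q - mq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mq - R @ mp
    return R, t

# Synthetic check: rotate matched ground points by 30 degrees, shift them
# by a 2.5 m baseline, then recover that motion from the correspondences.
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
p = np.random.default_rng(1).uniform(-5.0, 5.0, (40, 2))
q = p @ R_true.T + np.array([2.5, 0.0])
R, t = rigid_align_2d(p, q)
```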
Citations: 6
A novel method for extrinsic parameters estimation between a single-line scan LiDAR and a camera
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856408
Pakapoj Tulsuk, Panu Srestasathiern, M. Ruchanurucks, T. Phatrapornnant, H. Nagahashi
This paper presents a novel method for estimating the extrinsic parameters between a single-line scan LiDAR and a camera. Using a checkerboard, the calibration setup is simple and practical. In particular, the proposed calibration method is based on resolving the geometry of the checkerboard visible to both the camera and the LiDAR. The calibration setup geometry is described by planes, lines, and points. Our novelty is a new geometric constraint: the orthogonal distances between the LiDAR points and the line formed by the intersection of the checkerboard plane and the LiDAR scan plane. To evaluate the performance of the proposed method, we compared it with the state-of-the-art method of Zhang and Pless [1]. The experimental results showed that the proposed method yields better results.
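The orthogonal-distance constraint can be written down directly: for a candidate extrinsic transform, measure the distances of the LiDAR points to the checkerboard/scan-plane intersection line, and minimise the summed squared distances over the extrinsic parameters. A minimal sketch of that cost, with illustrative function names:

```python
import numpy as np

def point_line_distances(points, line_point, line_dir):
    """Orthogonal distances of 3-D points to the line through `line_point`
    with direction `line_dir` (the checkerboard/scan-plane intersection)."""
    d = np.asarray(line_dir, float)
    d = d / np.linalg.norm(d)
    v = np.asarray(points, float) - np.asarray(line_point, float)
    # Remove the component along the line; the residual norm is the distance.
    return np.linalg.norm(v - np.outer(v @ d, d), axis=1)

def calibration_cost(points, line_point, line_dir):
    """Sum of squared orthogonal distances: the quantity a calibration
    routine would minimise over the candidate extrinsic parameters."""
    return float(np.sum(point_line_distances(points, line_point, line_dir) ** 2))

# Points lying exactly on the line should produce zero cost.
pts_on_line = np.outer(np.linspace(0.0, 1.0, 5), [1.0, 2.0, 0.0])
```

An optimiser (e.g. nonlinear least squares over rotation and translation) would drive this cost to a minimum across all checkerboard poses.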
Citations: 7
Non-parametric lane estimation in urban environments
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856551
Johannes Beck, C. Stiller
Lane estimation for the ego vehicle plays a key role in navigating a car through unknown areas. In fact, solving this problem is a prerequisite for any vehicle driving autonomously in previously unmapped areas. Most of the proposed methods for lane detection are tuned for freeways and rural environments. In urban scenarios, however, they are unable to reliably detect the ego lane in many situations. Often, these methods simply work on the principle of fitting a parametric model to lane markers. Since a large variety of lane shapes is found in urban environments, such models are clearly too restrictive. Moreover, the complex structure of intersection-like situations further hampers the success of the aforementioned methods. We therefore propose a non-parametric lane model that can handle a wide range of different features, such as grass verges, free space, and lane markers. The ego lane estimation is formulated as a shortest path problem: a directed acyclic graph is constructed from the feature pool, rendering the problem efficiently solvable. The proposed approach is easily extendable, as it is able to cope jointly with pixel-wise low-level features as well as high-level ones. We demonstrate the potential of our method in urban and rural areas and present experimental findings on difficult real-world data sets.
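The shortest-path formulation over a DAG of lane-feature hypotheses can be sketched with a standard topological-order relaxation (nodes assumed pre-sorted); the graph below is a toy example, not the paper's feature pool.

```python
import math

def dag_shortest_path(nodes, edges, source, target):
    """Shortest path in a directed acyclic graph by relaxing edges in
    topological order (`nodes` is assumed to be topologically sorted)."""
    dist = {n: math.inf for n in nodes}
    pred = {n: None for n in nodes}
    dist[source] = 0.0
    for u in nodes:
        for v, w in edges.get(u, []):
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
    path, n = [], target
    while n is not None:
        path.append(n)
        n = pred[n]
    return path[::-1], dist[target]

# Toy feature graph: two candidate lane hypotheses between start and goal,
# edge weights standing in for feature-based costs.
edges = {"s": [("a", 1.0), ("b", 4.0)], "a": [("t", 2.0)], "b": [("t", 0.5)]}
path, cost = dag_shortest_path(["s", "a", "b", "t"], edges, "s", "t")
```

Because the graph is acyclic, a single pass in topological order suffices, which is what makes the formulation efficiently solvable.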
Citations: 15
DIRD is an illumination robust descriptor
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856421
Henning Lategahn, Johannes Beck, C. Stiller
Many robotics applications nowadays use cameras for various tasks such as place recognition, localization, and mapping. These methods depend heavily on image descriptors. A plethora of descriptors has recently been introduced, but hardly any address the problem of illumination robustness. Herein we introduce an illumination-robust image descriptor which we dub DIRD (Dird is an Illumination Robust Descriptor). First, a set of Haar features is computed and the individual pixel responses are normalized to L2 unit length. Thereafter, the features are pooled over a predefined neighborhood region. The concatenation of several such features forms the basis of the DIRD vector. These features are then quantized to maximize entropy, allowing (among others) a binary version of DIRD consisting of only ones and zeros for very fast matching. We evaluate DIRD on three test sets and compare its performance with (extended) USURF, BRIEF, and a baseline gray-level descriptor. All proposed DIRD variants substantially outperform these methods, at times more than doubling the performance of USURF and BRIEF.
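A simplified DIRD-style pipeline, assuming a single grayscale patch: Haar-like box differences, per-pixel L2 normalisation, cell pooling, and median binarisation. The exact filters, cell sizes, and the median threshold are illustrative choices, not the paper's; note that box differences cancel additive illumination offsets and the L2 normalisation cancels multiplicative ones.

```python
import numpy as np

def dird_sketch(patch, cell=4):
    """Simplified DIRD-style descriptor: Haar-like box differences,
    per-pixel L2 normalisation, cell pooling, and binarisation so the
    result can be matched with fast Hamming distance."""
    patch = np.asarray(patch, float)
    # Two Haar-like responses (horizontal/vertical box differences) on a
    # common interior region so both have identical shape.
    dx = patch[cell:-cell, 2 * cell:] - patch[cell:-cell, :-2 * cell]
    dy = patch[2 * cell:, cell:-cell] - patch[:-2 * cell, cell:-cell]
    resp = np.stack([dx, dy], axis=-1)
    # Normalise each pixel's response vector to unit L2 length.
    resp /= np.maximum(np.linalg.norm(resp, axis=-1, keepdims=True), 1e-9)
    # Average-pool over non-overlapping cells and flatten.
    h = resp.shape[0] // cell * cell
    w = resp.shape[1] // cell * cell
    pooled = resp[:h, :w].reshape(h // cell, cell, w // cell, cell, 2).mean(axis=(1, 3))
    vec = pooled.ravel()
    # Binarise at the median, giving a compact binary descriptor.
    return (vec > np.median(vec)).astype(np.uint8)

rng = np.random.default_rng(0)
patch = rng.uniform(0.0, 255.0, (32, 32))
desc = dird_sketch(patch)
```

Because offsets cancel in the differences and gains cancel in the normalisation, the descriptor is unchanged under an affine illumination change of the patch.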
Citations: 14
HVAC system modeling for range prediction of electric vehicles
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856500
Rhea Valentina, A. Viehl, O. Bringmann, W. Rosenstiel
The HVAC system is considered the largest auxiliary power load in electric vehicles (EVs). This paper therefore presents a detailed model of an EV HVAC system to support a priori prediction of its energy consumption while taking the EV user's thermal comfort into account. The prediction is integrated into a navigation system, allowing the driver to enter preferred thermal-comfort parameters and advising the driver of the predicted overall energy consumption. Accepting this advice may increase the driver's awareness of the potential energy savings and lead to energy-efficient vehicle operation by extending the overall driving range.
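A first-order sketch of such a priori HVAC energy prediction: steady-state heat flow through the cabin shell, proportional to the cabin/ambient temperature difference, divided by the system COP and integrated over the trip time. The UA and COP defaults are illustrative assumptions, not values from the paper.

```python
def hvac_energy_wh(t_ambient_c, t_cabin_c, trip_minutes,
                   ua_w_per_k=350.0, cop=2.5):
    """First-order steady-state HVAC energy estimate in watt-hours: heat
    flow through the cabin shell (UA times the temperature difference)
    divided by the system COP, integrated over the trip duration.
    The UA and COP defaults are illustrative, not measured values."""
    thermal_load_w = ua_w_per_k * abs(t_cabin_c - t_ambient_c)
    electrical_w = thermal_load_w / cop
    return electrical_w * trip_minutes / 60.0

# A 30-minute summer trip: 35 C ambient, 22 C cabin setpoint.
energy = hvac_energy_wh(35.0, 22.0, 30.0)
```

A detailed model as in the paper would additionally account for solar load, ventilation, thermal mass, and transient cool-down, all of which this sketch omits.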
Citations: 24
Rational truck driving and its correlated driving features in extra-urban areas
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856440
C. D'Agostino, A. Saidi, Gilles Scouarnec, Liming Chen
Truck drivers typically display different behaviors when facing various driving events, e.g., approaching a roundabout, and thereby have a major impact on both fuel consumption and vehicle speed. In a context where fuel is increasingly a major cost center for merchandise transport companies, it is important to recognize different driver behaviors in order to simulate them as closely as possible to real data during the truck development process. In this paper, we introduce, instead of economic driving, the notion of rational driving, which seeks to decrease average fuel consumption while respecting the transport companies' constraint, i.e., the delivery delay. Moreover, we propose an indicator, the rational driving index (RDI), which quantifies how well a driver's behavior conforms to rational driving. We then investigate various driving features that help characterize rational driver behavior, using real driving data collected from 34 different truck drivers on an extra-urban road section that is particularly representative of the travel paths of trucks handling regional merchandise distribution. Given that real driving data collected on an open road can differ in terms of environment, e.g., weather and traffic, we further study, through simulations on a digital representation of a roundabout, the impact on rational driving of two major driving features: the use of coasting and the crossing speed at roundabouts. The experimental results from both real driving data and simulations show high correlations of these two driving features with the RDI and demonstrate that a good rational driver tends to decelerate slowly during braking periods (using coasting) and to maintain a high crossing speed in roundabouts.
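For illustration only, an RDI-like score could combine a fuel term with a delay penalty as below; the functional form and all reference values are hypothetical, since the abstract does not publish the paper's actual formula.

```python
def rational_driving_index(fuel_l_per_100km, trip_minutes,
                           ref_fuel=32.0, scheduled_minutes=60.0,
                           max_delay_minutes=10.0):
    """Hypothetical rational-driving score in [0, 1]: rewards fuel use at
    or below a reference consumption and penalises arriving later than the
    allowed delivery delay. All reference values are made up."""
    fuel_score = min(1.0, ref_fuel / fuel_l_per_100km)
    delay = max(0.0, trip_minutes - scheduled_minutes)
    delay_penalty = min(1.0, delay / max_delay_minutes)
    return fuel_score * (1.0 - delay_penalty)

# Frugal but slightly late driver: 28 l/100km, 2 minutes over schedule.
rdi = rational_driving_index(28.0, 62.0)
```

The multiplicative form encodes the paper's framing: saving fuel only counts as rational if the delivery constraint is still respected.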
Citations: 3
A Traffic Knowledge Aided Vehicle Motion Planning Engine Based on Space Exploration Guided Heuristic Search
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856458
Chaoyong Chen, Markus Rickert, A. Knoll
A real-time vehicle motion planning engine is presented in this paper, with a focus on exploiting prior and online traffic knowledge, e.g., a predefined roadmap, prior environment information, and behaviour-based motion primitives, within the space exploration guided heuristic search (SEHS) framework. The SEHS algorithm plans a kinodynamic vehicle motion in two steps: a geometric investigation of the free space, followed by a grid-free heuristic search employing primitive motions. Both procedures are generic and can take advantage of traffic knowledge. In this paper, the space exploration is supported by a roadmap, and the heuristic search benefits from the behaviour-based primitives. Based on this idea, a lightweight motion planning engine is built to handle traffic knowledge and planning time in real-time motion planning. The experiments demonstrate that this SEHS motion planning engine is flexible and scalable in practical traffic scenarios, achieving better results than the baseline SEHS motion planner when traffic knowledge is provided.
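The second SEHS step, a grid-free heuristic search over motion primitives, can be caricatured with a primitive-based A* search in which plain Euclidean distance to the goal stands in for the circle-based space-exploration heuristic; the primitive set and state discretisation are illustrative assumptions.

```python
import heapq
import math

# Motion primitives: (arc length, heading change). Purely illustrative.
PRIMITIVES = [(1.0, 0.0), (1.0, math.pi / 8), (1.0, -math.pi / 8)]

def primitive_search(start, goal, goal_tol=1.0, max_expansions=20000):
    """A*-style search over (x, y, heading) states using motion primitives.
    Euclidean distance to the goal replaces the space-exploration
    heuristic of the paper in this sketch."""
    def h(s):
        return math.hypot(goal[0] - s[0], goal[1] - s[1])

    def key(s):  # discretise states so the closed set guarantees termination
        return (round(s[0], 1), round(s[1], 1), round(s[2], 2))

    open_list = [(h(start), 0.0, start, [start])]
    closed = set()
    while open_list and len(closed) < max_expansions:
        _, g, s, path = heapq.heappop(open_list)
        if h(s) < goal_tol:
            return path
        if key(s) in closed:
            continue
        closed.add(key(s))
        for length, dtheta in PRIMITIVES:
            th = s[2] + dtheta
            nxt = (s[0] + length * math.cos(th), s[1] + length * math.sin(th), th)
            heapq.heappush(open_list, (g + length + h(nxt), g + length, nxt, path + [nxt]))
    return None

# Drive from the origin (heading along +x) to a goal 8 m ahead.
path = primitive_search((0.0, 0.0, 0.0), (8.0, 0.0))
```

In the full SEHS framework, the heuristic derived from the free-space circle path keeps the search focused, which is what makes the planner real-time capable.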
Citations: 8
Vision-based pedestrian detection for rear-view cameras
Pub Date : 2014-06-08 DOI: 10.1109/IVS.2014.6856399
S. Silberstein, Dan Levi, V. Kogan, R. Gazit
We present a new vision-based pedestrian detection system for rear-view cameras which is robust to partial occlusions and non-upright poses. Detection is performed with a single automotive rear-view fisheye-lens camera. The system uses "Accelerated Feature Synthesis", a multiple-part-based detection method with state-of-the-art performance. In addition, we collected and annotated an extensive video dataset for this specific application, which includes pedestrians in a wide range of environmental conditions. Using this dataset, we demonstrate the benefits of part-based detection for detecting people in various poses and under occlusions. We also show, using a measure developed specifically for video-based evaluation, the gain in detection accuracy compared with template-based detection.
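The occlusion-robustness argument in the abstract — a part-based detector can still fire when only some parts are visible — can be made concrete with a toy scoring rule. This is an illustrative sketch only, not the paper's "Accelerated Feature Synthesis"; the function name and fusion rule are our own.

```python
def fuse_part_scores(full_score, part_scores, top_k=2):
    """Illustrative occlusion-robust scoring: rate a detection window by
    the better of (a) the full-body detector response and (b) the mean of
    its top-k part responses, so a pedestrian whose legs are occluded can
    still be detected through strong head/torso parts."""
    best_parts = sorted(part_scores, reverse=True)[:top_k]
    part_vote = (sum(best_parts) / len(best_parts)
                 if best_parts else float("-inf"))
    return max(full_score, part_vote)
```

For a half-occluded pedestrian the full-body score may be low (say 0.2) while head and torso parts score 0.9 and 0.8; the fused score is then driven by the visible parts rather than the suppressed holistic template.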
Citations: 31