
2023 IEEE Intelligent Vehicles Symposium (IV): Latest Publications

Deer in the headlights: FIR-based Future Trajectory Prediction in Nighttime Autonomous Driving
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186756
Alireza Rahimpour, Navid Fallahinia, D. Upadhyay, Justin Miller
The performance of the current collision avoidance systems in Autonomous Vehicles (AV) and Advanced Driver Assistance Systems (ADAS) can be drastically affected by low light and adverse weather conditions. Collisions with large animals such as deer in low light cause significant cost and damage every year. In this paper, we propose the first AI-based method for predicting the future trajectory of large animals and mitigating the risk of collision with them in low light. To minimize false collision warnings, our multi-step framework first detects the large animal, predicts a preliminary risk level for it, and discards low-risk animals. In the next stage, a multi-stream CONV-LSTM-based encoder-decoder framework is designed to predict the future trajectory of the potentially high-risk animals. The proposed model uses camera motion prediction as well as the local and global context of the scene to generate accurate predictions. Furthermore, this paper introduces a new dataset of FIR videos for large animal detection and risk estimation in real nighttime driving scenarios. Our experiments show promising results of the proposed framework in adverse conditions. Our code is available online.
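The abstract describes a multi-stream CONV-LSTM-based encoder-decoder but does not spell out the architecture; the authors' released code is the reference. As a rough, hedged illustration of the building block involved, the sketch below implements a single ConvLSTM cell in PyTorch and runs it over a short clip. The channel counts, kernel size, and frame resolution are assumptions for illustration, not values from the paper.

```python
# Minimal ConvLSTM cell sketch (PyTorch). Hypothetical illustration only;
# not the authors' released implementation. Layer sizes are assumptions.
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # One convolution produces the input, forget, output, and cell gates.
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=padding)
        self.hidden_ch = hidden_ch

    def forward(self, x, state):
        h, c = state                      # hidden and cell state, (B, C, H, W)
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)     # update cell state
        h = o * torch.tanh(c)             # update hidden state
        return h, c

# Example: encode a short clip (batch=1, 8 frames, 1 channel, 64x64 pixels).
cell = ConvLSTMCell(in_ch=1, hidden_ch=16)
h = torch.zeros(1, 16, 64, 64)
c = torch.zeros(1, 16, 64, 64)
clip = torch.randn(1, 8, 1, 64, 64)
for t in range(clip.shape[1]):
    h, c = cell(clip[:, t], (h, c))      # h now summarizes the observed frames
```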
Citations: 0
D3VIL-SLAM: 3D Visual Inertial LiDAR SLAM for Outdoor Environments
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186534
Matteo Frosi, Matteo Matteucci
Autonomous driving and 3D mapping are a few applications associated with real-time six-degrees-of-freedom pose estimation of ground vehicles, especially in outdoor (e.g., urban) environments. During the past decades, many systems have been proposed, with the majority working on data coming from only one sensor, while also struggling to keep accuracy and performance balanced. In this paper, we present D3VIL-SLAM, which extends an existing LiDAR-based SLAM system, ART-SLAM, to include inertial and visual information. The front-end comprises three branches that perform short-term data association, i.e., tracking, by exploiting laser, visual, and inertial data, respectively. All motion estimates and loop constraints derived from both LiDAR scans and images are used to build a robust g2o pose graph, which is later optimized to best satisfy all motion constraints. We compare the accuracy of our system with state-of-the-art SLAM methods, showing that D3VIL-SLAM is more accurate and produces highly detailed 3D maps while retaining real-time performance. Lastly, we perform a brief ablation study with different limitations (e.g., only images are allowed). All experimental campaigns are done by evaluating the estimated trajectory displacement using the KITTI dataset.
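The D3VIL-SLAM back-end builds a g2o pose graph from LiDAR, visual, and inertial constraints and then optimizes it. To make the idea of pose-graph optimization concrete without relying on g2o, the toy sketch below optimizes a four-pose 2D graph with a loop closure using SciPy least squares; the graph, conventions, and noise level are invented for illustration and are not the paper's setup.

```python
# Toy 2D pose-graph optimization (SciPy), illustrating what a pose-graph
# back-end does. Graph and noise values are made up for this example.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

# Relative-pose constraints: (i, j, dx, dy, dtheta) expressed in frame i.
edges = [
    (0, 1, 1.0, 0.0, np.pi / 2),
    (1, 2, 1.0, 0.0, np.pi / 2),
    (2, 3, 1.0, 0.0, np.pi / 2),
    (3, 0, 1.0, 0.0, np.pi / 2),          # loop closure back to the start
]

def residuals(flat):
    poses = flat.reshape(-1, 3)            # each row: x, y, theta
    res = list(poses[0])                   # anchor the first pose at the origin
    for i, j, dx, dy, dth in edges:
        xi, yi, thi = poses[i]
        xj, yj, thj = poses[j]
        c, s = np.cos(thi), np.sin(thi)
        res.append( c * (xj - xi) + s * (yj - yi) - dx)   # local x error
        res.append(-s * (xj - xi) + c * (yj - yi) - dy)   # local y error
        res.append(wrap(thj - thi - dth))                 # heading error
    return np.array(res)

# Dead-reckoned initial guess for the four poses, perturbed to simulate drift.
x0 = np.array([0, 0, 0,
               1, 0, np.pi / 2,
               1, 1, np.pi,
               0, 1, 3 * np.pi / 2], dtype=float)
x0 += np.random.randn(12) * 0.05
sol = least_squares(residuals, x0)
print(sol.x.reshape(-1, 3))                # optimized poses
```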
Citations: 0
Resilient Navigation Based on Multimodal Measurements and Degradation Identification for High-Speed Autonomous Race Cars
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186537
Daegyu Lee, Hyunwoo Nam, Chanhoe Ryu, Sun-Young Nah, D. Shim
This paper presents a localization system robust against unreliable measurements and a resilient navigation system that recovers from localization failures for the Indy Autonomous Challenge (IAC). The IAC is a competition with full-scale autonomous race cars that drive at speeds up to 300 kph. Owing to high speed and heavy vibration in the car, a GPS/INS system is prone to degradation, causing critical localization errors that can lead to catastrophic accidents. To address this issue, we propose a robust localization system that probabilistically evaluates the credibility of multi-modal measurements. At the correction step of the Kalman filter, a degradation identification method with a novel hyper-parameter derived from Bayesian decision theory is introduced to choose the most credible measurement values in real-time. Since the racing condition is so harsh that even our robust localization method can fail for a short period of time, we present a resilient navigation system that enables the race car to continue to follow the race track in the event of a localization failure. Our system uses direct perception information in planning and execution until localization recovery is complete. The proposed localization system is first validated in a simulation with real measurement data contaminated by large artificial noise. Experimental validation during an actual race is also presented. The last part of our paper shows results from real-world tests in which our system recovers from failures and prevents accidents in real-time, which proves the resilience of the proposed navigation system.
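The paper's degradation identification applies a Bayesian hyper-parameter at the Kalman correction step to pick the most credible measurement; that test is not reproduced here. The snippet below is only a generic stand-in: a standard Kalman position update that keeps the measurement with the smallest normalized innovation, one common way to gate unreliable sensors. The state model, noise values, and sensor names are assumptions.

```python
# Generic sketch: Kalman correction that keeps only the most credible of
# several position measurements (normalized-innovation gating). This is an
# illustration, not the paper's Bayesian degradation-identification rule.
import numpy as np

# 1D constant-velocity state: [position, velocity]
x = np.array([0.0, 80.0])                 # state estimate
P = np.diag([1.0, 1.0])                   # state covariance
H = np.array([[1.0, 0.0]])                # we observe position only

# Candidate measurements from different sources: name -> (value z, variance R)
measurements = {"gnss": (0.9, 4.0), "lidar_map_match": (0.2, 0.5)}

best_name, best_score = None, np.inf
for name, (z, R) in measurements.items():
    S = H @ P @ H.T + R                   # innovation covariance (1x1 here)
    nu = z - (H @ x)[0]                   # innovation
    score = nu**2 / S[0, 0]               # normalized innovation squared
    if score < best_score:
        best_name, best_score = name, score

z, R = measurements[best_name]            # correct with the most credible source
S = H @ P @ H.T + R
K = P @ H.T / S[0, 0]                     # Kalman gain (2x1)
x = x + (K * (z - (H @ x)[0])).ravel()
P = (np.eye(2) - K @ H) @ P
print(best_name, x)
```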
Citations: 0
Unified Pedestrian Path Prediction Framework: A Comparison Study
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186739
Jarl L. A. Lemmens, Ariyan Bighashdel, P. Jancura, Gijs Dubbelman
Pedestrian path prediction is an emerging and crucial task in numerous applications, such as autonomous vehicles. Due to the complexity of the task, various formulations have been proposed throughout the literature. However, the interconnection between these formulations has not been examined, which makes a fair comparison challenging. This work proposes a unified pedestrian path prediction framework based on the Markov decision process (MDP). We demonstrate that, by carefully designing the components of the MDP, various standard formulations can be perceived as specific combinations of settings in our framework. Additionally, the unified framework allows us to discover new combinations of settings that integrate the benefits of current formulations, improving prediction performance. We conduct a comparison study and evaluate several formulations in well-controlled experiments. Furthermore, we carefully assess the influence of various settings, such as policy stochasticity and sequential decision-making, on prediction performance. The goal of this work is not to propose a new state-of-the-art method but to study various formulations of the pedestrian path prediction task under a unifying framework and uncover new directions that can eventually advance the current state-of-the-art.
Citations: 0
GAN-based EEG Forecasting for Attaining Driving Operations
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186750
Kenichi Takasaki, Yuka Sasaki, Shoichiro Watanabe, Yasutaka Nishimura, Mari Abe
In the domain of connected vehicles and advanced driver assistance systems, electroencephalogram (EEG) data is measured in vehicles and used for applications in driver safety. These analysis modules are designed to detect abnormal driver states such as drowsiness, fatigue, and dangerous driving by using EEG data in real-time on edge devices, since these conditions reflect a driver's current cognitive state. However, there are few approaches that forecast EEG data to prevent dangerous driving in advance using recent deep learning techniques. In this paper, we propose a novel generative adversarial network (W-GAN) which forecasts EEGs as multivariate, multi-step time series data. It consists of dilated causal convolutional layers to maintain EEG characteristics. We also propose a new performance measure reflecting the reproducibility of frequency components, which confirms the feasibility of the forecasted EEG data. We conducted an experiment to evaluate the proposed model using EEG analysis research data. In the experiment, our model outperformed several deep learning models in reproducibility of both EEG waveforms and frequency components.
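The W-GAN generator is said to consist of dilated causal convolutional layers; the sketch below shows only the mechanics of a causal, dilated Conv1d stack over a multivariate signal in PyTorch. The number of layers, channel counts, and EEG channel count are assumptions, and no adversarial training is shown.

```python
# Sketch of a dilated causal Conv1d stack for multivariate time series
# (PyTorch). Layer sizes are assumptions; this is not the paper's W-GAN.
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation           # left padding only
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                                 # x: (B, C, T)
        x = nn.functional.pad(x, (self.pad, 0))           # no look-ahead
        return self.conv(x)

class CausalStack(nn.Module):
    def __init__(self, channels=16, n_eeg_ch=8, layers=4):
        super().__init__()
        dilations = [2 ** i for i in range(layers)]       # 1, 2, 4, 8
        blocks, in_ch = [], n_eeg_ch
        for d in dilations:
            blocks += [CausalConv1d(in_ch, channels, dilation=d), nn.ReLU()]
            in_ch = channels
        blocks.append(nn.Conv1d(channels, n_eeg_ch, 1))   # back to EEG channels
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return self.net(x)

model = CausalStack()
past = torch.randn(1, 8, 256)             # 8 EEG channels, 256 past samples
out = model(past)                         # each step depends only on past inputs
```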
Citations: 0
Filling Action Selection Reinforcement Learning Algorithm for Safer Autonomous Driving in Multi-Traffic Scenes
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186804
Fan Yang, Xueyuan Li, Qi Liu, Chaoyang Liu, Zirui Li, Yong Liu
Learning-based algorithms are gradually emerging in the field of autonomous driving due to their powerful data processing capabilities. Researchers in intelligent vehicle planning and decision-making are increasingly using reinforcement learning algorithms to solve related problems. The safety of reinforcement learning algorithms is therefore a significant and widely studied concern. The main source of safety problems in existing reinforcement learning algorithms is a residual bias in the safety judgment of the current environment, which cannot be removed simply by modifying the network or the training method. In this paper, an action judgment network is designed as a criterion for selecting the optimal action, helping the algorithm judge environmental safety more thoroughly. First, the action judgment network takes the state space and an action as input and outputs the safety state of the vehicle after that action. Second, this work builds the required database to train the action judgment network through deep learning, achieving a top accuracy of 98%. Finally, the proposed algorithm is tested in three scenarios: single-lane, intersection, and roundabout. The algorithm evaluates actions in the order given by the reinforcement learning Q-value table until the optimal safe action is selected. The results show that the newly proposed algorithm greatly improves safety without affecting vehicle speed.
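The action judgment network takes the state and a candidate action as input and outputs the post-action safety state, and actions are then examined in Q-value order until a safe one is found. The sketch below mirrors that selection loop with a small MLP; the state dimension, layer widths, and safety threshold are assumptions rather than values from the paper.

```python
# Sketch: a small "action judgment" MLP that scores (state, action) safety,
# plus a loop that walks actions in descending Q-value order until one is
# judged safe. Sizes and the threshold are assumptions.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 12, 5

judge = nn.Sequential(
    nn.Linear(STATE_DIM + N_ACTIONS, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),           # probability the action is safe
)

def select_action(state, q_values, threshold=0.5):
    """Pick the highest-Q action that the judgment network deems safe."""
    order = torch.argsort(q_values, descending=True)
    for a in order.tolist():
        one_hot = torch.zeros(N_ACTIONS)
        one_hot[a] = 1.0
        safety = judge(torch.cat([state, one_hot]))
        if safety.item() >= threshold:
            return a
    return order[0].item()                    # fall back to the best Q action

state = torch.randn(STATE_DIM)
q_values = torch.randn(N_ACTIONS)
print(select_action(state, q_values))
```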
Citations: 0
FPGA-based Acceleration of Lidar Point Cloud Processing and Detection on the Edge
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186612
Cecilia Latotzke, Amarin Kloeker, Simon Schoening, Fabian Kemper, Mazen Slimi, L. Eckstein, T. Gemmeke
Edge nodes such as Intelligent Transportation System Stations are becoming increasingly relevant in the context of automated driving as they provide connected vehicles with additional information to support their automated driving functions. However, the power budget for these edge nodes is limited, and data has to be processed in real-time to be of use to automated driving functions. In this work, we present a system for processing raw lidar data in real-time on an FPGA, resulting in a significant reduction in power consumption compared to conventional hardware. Our approach leads to a 42.4% reduction in power consumption while maintaining the quality of the results. Processing two 128-layer surround-view lidar point clouds takes 522 ms per frame, with an average power consumption of 39.3 W for the CPU and 34.5 W for the FPGA. Our optimizations surpass the state-of-the-art by up to 193 times.
Citations: 0
UCLF: An Uncertainty-Aware Cooperative Lane-Changing Framework for Connected Autonomous Vehicles in Mixed Traffic
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186758
Yijun Mao, Yan Ding, Chongshan Jiao, Pengju Ren
Human-driven vehicles (HDVs) will still exist for a long time as we move towards the era of connected autonomous vehicles (CAVs). Ensuring system safety and improving convoy efficiency in mixed traffic environments is challenging due to the stochastic behaviors and uncertain intentions of HDVs. To address these issues, this paper develops an uncertainty-aware cooperative lane-changing framework, termed UCLF, for CAVs based on a partially observable Markov decision process (POMDP). We extend the POMDP to multi-agent cooperative lane-changing by prioritizing CAVs according to lane-changing urgency and planning for the CAVs sequentially. Two novel cooperation mechanisms, namely cooperative implicit branching and cooperative explicit pruning, are proposed to promote efficiency and ensure safety. Numerical experiments show smooth and efficient lane-changing maneuvers under intention uncertainty. Compared to the baseline, UCLF achieves up to a 28.7% decrease in total travel time on average. We also validate UCLF in a real multi-AGV (Automated Guided Vehicle) system to demonstrate the usability and reliability of our study.
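UCLF plans under uncertain HDV intentions with a POMDP, whose core ingredient is a Bayesian belief update over hidden states. The toy snippet below updates a belief over a single HDV's intention (yield vs. not yield) from observed behavior; the transition and observation probabilities are invented for illustration, and the action conditioning of a full POMDP is omitted.

```python
# Toy POMDP-style belief update over a human driver's hidden intention
# ("yield" vs. "not_yield"). All probabilities are illustrative assumptions.
import numpy as np

states = ["yield", "not_yield"]
T = np.array([[0.9, 0.1],                 # intention persistence
              [0.1, 0.9]])                # T[s, s'] = P(s' | s)
# Observation model: P(observed behavior | intention)
O = {"decelerating":   np.array([0.8, 0.2]),
     "constant_speed": np.array([0.3, 0.7])}

def belief_update(b, obs):
    predicted = b @ T                     # predict step: sum_s b(s) T(s, s')
    updated = O[obs] * predicted          # correct step with the observation
    return updated / updated.sum()        # normalize

b = np.array([0.5, 0.5])                  # start with no information
for obs in ["constant_speed", "decelerating", "decelerating"]:
    b = belief_update(b, obs)
    print(dict(zip(states, b.round(3))))
```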
Citations: 0
Automated Sensor Performance Evaluation of Robot-Guided Vehicles for High Dynamic Tests
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186631
David Hermann, Granit Tejeci, C. M. Martinez, Gereon Hinz, Alois Knoll
As the demand for automated vehicle testing on proving grounds grows, the need for comprehensive and reliable environment monitoring systems becomes increasingly important. In highly dynamic driving test scenarios, long-range perception is essential for detecting dangers and hazards, ensuring the safety of both the test vehicle and other people on the track. However, determining an appropriate sensor setup can be challenging due to the complexity of sensor perception limitations. Perception limitations depend on the sensor characteristics and the environment. In this work, we propose a new approach to automatically evaluate sensor performance for high dynamic driving to improve the safety and efficiency of automated testing on proving grounds. Our approach involves estimating the detection range of common sensor technologies and analyzing the performance of sensor systems under various environmental conditions. By evaluating sensor performance in advance and comparing different sensor setups on tracks with a high-speed profile, we are able to identify critical track sections with higher collision risks and safeguard tests accordingly. This study emphasizes the importance of advanced environmental monitoring and sensor analysis in ensuring the safety and efficiency of automated vehicle testing.
Citations: 0
HPCR-VI: Heterogeneous point cloud registration for vehicle-infrastructure collaboration
Pub Date : 2023-06-04 DOI: 10.1109/IV55152.2023.10186606
Yuting Zhao, Xinyu Zhang, Shiyan Zhang, Shaoting Qiu, Haojie Yin, Xu Zhang
The perceptual information acquired by a single vehicle-side LiDAR in autonomous driving is limited, and this phenomenon is more prominent at intersections where vehicles are turning. Existing solutions improve vehicle perception by designing complex systems to match homogeneous point clouds acquired by the same type of sensors. In this study, we propose a heterogeneous point cloud registration for vehicle-infrastructure collaboration (HPCR-VI) that supplements the missing sensory information of the vehicle-side mechanical LiDAR with the point cloud information acquired by the infrastructure-side solid-state LiDAR. The HPCR-VI framework proposed in this paper breaks the limitation of homogeneous point cloud registration and can quickly obtain alignment results from two frames of heterogeneous point clouds, whose densities and viewing angles differ greatly, solving the heterogeneous point cloud registration problem where traditional point cloud alignment methods fail. Our proposed method is tested on the DAIR-V2X dataset, and the success rate of alignment is 40-50 points higher than that of the baseline method.
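HPCR-VI's contribution is aligning heterogeneous point clouds whose densities and viewpoints differ greatly; that pipeline is not reproduced here. The sketch below shows only the standard SVD (Kabsch) solution for recovering a rigid transform from already-corresponded points, the basic primitive underlying registration methods; correspondence search and the cross-LiDAR handling the paper addresses are omitted.

```python
# Standard SVD (Kabsch) rigid alignment for corresponded 3D points. This
# illustrates the basic registration primitive only; it is not the HPCR-VI
# framework, which must also find correspondences across LiDARs with very
# different densities and viewpoints.
import numpy as np

def rigid_transform(src, dst):
    """Find R, t minimizing ||R @ src_i + t - dst_i||^2 (src, dst: Nx3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: rotate/translate a random cloud and recover the transform.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true, atol=1e-6), t.round(3))
```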
Citations: 0