
2022 IEEE Intelligent Vehicles Symposium (IV): Latest Publications

MAConAuto: Framework for Mobile-Assisted Human-in-the-Loop Automotive System
Pub Date : 2022-06-05 DOI: 10.1109/iv51971.2022.9827415
Salma Elmalaki
Automotive systems are becoming more and more sensor-equipped. Collision avoidance, lane departure warning, and self-parking are examples of applications made possible by the adoption of more sensors in the automotive industry. Moreover, the driver is now equipped with sensory systems like wearables and mobile phones. This rich sensory environment and the real-time streaming of contextual data from the vehicle make the human factor integral to the loop of computation. By integrating the human's behavior and reaction into advanced driver-assistance systems (ADAS), the vehicle becomes a more context-aware entity. Hence, we propose MAConAuto, a framework that helps design human-in-the-loop automotive systems by providing a common platform to engage the rich sensory systems in wearables and mobile devices for context-aware applications. By personalizing context adaptation in automotive applications, MAConAuto learns the behavior and reactions of the human to adapt to personalized preferences, with interventions continuously tuned using Reinforcement Learning. Our general framework satisfies three main design properties: adaptability, generalizability, and conflict resolution. We show how MAConAuto can be used as a framework to design two human-centric applications, forward collision warning and a vehicle HVAC system, with negligible time overhead relative to the average human response time.
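The abstract gives no implementation details; as a rough, hedged illustration of the kind of Reinforcement Learning-driven personalization it describes, the sketch below adapts an HVAC intervention policy from a driver's reactions using tabular Q-learning. The states, actions, and reward signal are hypothetical and not taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical sketch: tabular Q-learning that personalizes an HVAC intervention
# policy from the driver's observed reactions. States, actions, and rewards are
# illustrative assumptions only, not MAConAuto's actual design.

ACTIONS = ["keep_setpoint", "lower_temp", "raise_temp"]

class HvacPersonalizer:
    def __init__(self, epsilon=0.1, alpha=0.2, gamma=0.9):
        self.q = defaultdict(float)          # (state, action) -> estimated value
        self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

    def act(self, state):
        # epsilon-greedy exploration over the intervention actions
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # standard one-step Q-learning backup
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

# Usage: the reward could encode how quickly the driver overrides the intervention,
# so the policy drifts toward this particular driver's comfort preference.
agent = HvacPersonalizer()
a = agent.act("warm_cabin")
agent.update("warm_cabin", a, reward=-1.0, next_state="comfortable")
```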
Citations: 3
BackboneAnalysis: Structured Insights into Compute Platforms from CNN Inference Latency
Pub Date : 2022-06-05 DOI: 10.1109/iv51971.2022.9827260
Frank M. Hafner, Matthias Zeller, Mark Schutera, Jochen Abhau, Julian F. P. Kooij
Customization of a convolutional neural network (CNN) to a specific compute platform involves finding an optimal Pareto state between the computational complexity of the CNN and the resulting throughput in operations per second on the compute platform. However, existing inference performance benchmarks compare complete backbones that entail many differences between their CNN configurations, which does not provide insight into how fine-grained layer design choices affect this balance. We present BackboneAnalysis, a methodology for extracting structured insights into this trade-off for a chosen target compute platform. Within a one-factor-at-a-time analysis setup, CNN architectures are systematically varied and evaluated based on throughput and latency measurements, irrespective of model accuracy. Thereby, we investigate the configuration factors input shape, batch size, kernel size, and convolutional layer type. In our experiments, we deploy BackboneAnalysis on a Xavier iGPU and a Coral Edge TPU accelerator. The analysis reveals that the general assumption from optimal Roofline performance, that higher operation density in CNNs leads to higher throughput, does not always hold. These results highlight how important it is for a neural network architect to be aware of platform-specific latency and throughput behavior in order to derive sensible configuration decisions for a custom CNN.
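As a hedged sketch of what a one-factor-at-a-time latency measurement can look like, the snippet below sweeps only the kernel size of a single convolution while holding input shape, batch size, and channel counts fixed. The baseline configuration and swept values are assumptions for illustration, not the paper's exact setup.

```python
import time
import torch
import torch.nn as nn

def measure_latency(layer, x, warmup=10, runs=50):
    """Average forward-pass latency in seconds (CPU timing; on a GPU you would
    additionally call torch.cuda.synchronize() around the timed region)."""
    layer.eval()
    with torch.no_grad():
        for _ in range(warmup):            # warm-up to stabilize caches/clocks
            layer(x)
        start = time.perf_counter()
        for _ in range(runs):
            layer(x)
        return (time.perf_counter() - start) / runs

# Vary only one factor (kernel size); keep all other configuration factors fixed.
x = torch.randn(1, 64, 56, 56)             # assumed baseline input shape and batch size
for k in (1, 3, 5, 7):
    conv = nn.Conv2d(64, 64, kernel_size=k, padding=k // 2)
    print(f"kernel {k}x{k}: {measure_latency(conv, x) * 1e3:.2f} ms")
```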
Citations: 1
What Can be Seen is What You Get: Structure Aware Point Cloud Augmentation
Pub Date : 2022-06-05 DOI: 10.1109/IV51971.2022.9827116
Frederik Hasecke, Martin Alsfasser, A. Kummert
To train a well-performing neural network for semantic segmentation, it is crucial to have a large dataset with available ground truth so that the network can generalize to unseen data. In this paper we present novel point cloud augmentation methods to artificially diversify a dataset. Our sensor-centric methods keep the data structure consistent with the lidar sensor's capabilities. With these new methods, we are able to enrich low-value data with high-value instances, as well as create entirely new scenes. We validate our methods on multiple neural networks with the public SemanticKITTI [3] dataset and demonstrate that all networks improve compared to their respective baselines. In addition, we show that our methods enable the use of very small datasets, saving annotation time, training time, and the associated costs.
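To illustrate the general idea of sensor-consistent augmentation (not the authors' code), the sketch below inserts an object's points into a lidar scan via a spherical range-image grid and keeps only inserted points that are not occluded by closer existing returns. The angular resolutions are assumed values.

```python
import numpy as np

H_RES, V_RES = 0.2, 0.4   # assumed horizontal / vertical angular resolution in degrees

def to_range_image_coords(points):
    """Map Cartesian lidar points to (azimuth cell, elevation cell, range)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.degrees(np.arctan2(y, x))
    pitch = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))
    return (yaw // H_RES).astype(int), (pitch // V_RES).astype(int), r

def insert_object(scan, obj):
    """Return the object points that remain visible when added to the scan."""
    u_s, v_s, r_s = to_range_image_coords(scan)
    depth = {}                                  # nearest existing return per range-image cell
    for u, v, r in zip(u_s, v_s, r_s):
        depth[(u, v)] = min(depth.get((u, v), np.inf), r)
    u_o, v_o, r_o = to_range_image_coords(obj)
    visible = np.array([r < depth.get((u, v), np.inf)
                        for u, v, r in zip(u_o, v_o, r_o)])
    return obj[visible]

scan = np.random.uniform(-50, 50, size=(2000, 3))   # toy scan, for demonstration only
obj = np.random.uniform(5, 10, size=(200, 3))       # toy object to be inserted
print(len(insert_object(scan, obj)), "of", len(obj), "inserted points remain visible")
```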
Citations: 5
Systematization and Identification of Triggering Conditions: A Preliminary Step for Efficient Testing of Autonomous Vehicles
Pub Date : 2022-06-05 DOI: 10.1109/iv51971.2022.9827238
Zhijing Zhu, Robin Philipp, Constanze Hungar, Falk Howar
To achieve safety of high-level automated driving, not only should functional failures like E/E system malfunctions and software crashes be excluded, but functional insufficiencies and performance limitations such as sensor resolution should also be thoroughly investigated and considered. The former problem is known as functional safety (FuSa) and is addressed by ISO 26262. The latter focuses on safe vehicle behavior and is summarized as safety of the intended functionality (SOTIF) within the standard ISO 21448, which is under development. To realize this safety level, it is crucial to understand the system and the triggering conditions that activate its existing functional insufficiencies. However, the concept of a triggering condition is new and still lacks relevant research. In this paper, we interpret triggering conditions and other SOTIF-relevant terms within the scope of ISO 21448. We summarize formal formulations of triggering conditions based on several key principles and provide possible categories to facilitate systematization. We contribute a novel method for the identification of triggering conditions and compare it with two other proposed methods across diverse aspects. Furthermore, we show that our method requires less insight into the system and less brainstorming effort, and provides well-structured and distinctly formulated triggering conditions.
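Purely as a hypothetical illustration of what a structured, machine-readable record of a triggering condition could look like for test management, consider the sketch below. The category names and fields are assumptions for illustration and do not reproduce the taxonomy or formulations proposed in the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TriggerCategory(Enum):
    ENVIRONMENTAL = auto()         # e.g. heavy rain degrading lidar returns
    TRAFFIC_PARTICIPANT = auto()   # e.g. occluded pedestrian stepping out
    INFRASTRUCTURE = auto()        # e.g. faded lane markings

@dataclass
class TriggeringCondition:
    description: str               # human-readable formulation of the condition
    category: TriggerCategory
    affected_function: str         # functional insufficiency it can activate
    example_scenario: str          # concrete scenario used for test derivation

tc = TriggeringCondition(
    description="Low sun angle causes camera overexposure",
    category=TriggerCategory.ENVIRONMENTAL,
    affected_function="lane detection",
    example_scenario="westbound highway drive at sunset",
)
```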
Citations: 2
SAN: Scene Anchor Networks for Joint Action-Space Prediction
Pub Date : 2022-06-05 DOI: 10.1109/iv51971.2022.9827239
Faris Janjos, Maxim Dolgov, Muhamed Kuric, Yinzhe Shen, J. M. Zöllner
In this work, we present a novel multi-modal trajectory prediction architecture. We decompose the uncertainty of future trajectories along higher-level scene characteristics and lower-level motion characteristics, and model multi-modality along both dimensions separately. The scene uncertainty is captured in a joint manner, where diversity of scene modes is ensured by training multiple separate anchor networks which specialize to different scene realizations. At the same time, each network outputs multiple trajectories that cover smaller deviations given a scene mode, thus capturing motion modes. In addition, we train our architectures with an outlier-robust regression loss function, which offers a trade-off between the outlier-sensitive L2 and outlier-insensitive L1 losses. Our scene anchor model achieves improvements over the state of the art on the INTERACTION dataset, outperforming the StarNet architecture from our previous work.
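The abstract mentions an outlier-robust regression loss that trades off between the outlier-sensitive L2 and outlier-insensitive L1 losses. The Huber loss below is a standard example of such a trade-off and is shown only as an illustration; the paper's exact loss formulation may differ.

```python
import torch

def huber_loss(pred, target, delta=1.0):
    """Quadratic (L2-like) near zero error, linear (L1-like) for large errors."""
    err = pred - target
    abs_err = err.abs()
    quadratic = 0.5 * err ** 2                   # L2 behaviour for small residuals
    linear = delta * (abs_err - 0.5 * delta)     # L1 behaviour for outliers
    return torch.where(abs_err <= delta, quadratic, linear).mean()

pred = torch.tensor([0.2, 1.5, 8.0])    # the last entry acts as an outlier
target = torch.zeros(3)
print(huber_loss(pred, target))         # outlier contributes linearly, not quadratically
```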
Citations: 5
Fast online parameter estimation of the Intelligent Driver Model for trajectory prediction
Pub Date : 2022-06-05 DOI: 10.1109/iv51971.2022.9827115
Karsten Kreutz, J. Eggert
In this paper, we propose and analyze a method for trajectory prediction in longitudinal car-following scenarios. The prediction is realized by a longitudinal car-following model (the Intelligent Driver Model, IDM) with online-estimated parameters. Previous work has shown that online IDM parameter adaptation is possible but difficult and slow, while providing only a small improvement in prediction quality over, e.g., constant-velocity or constant-acceleration baseline models. In our approach (Online IDM, OIDM), we use the difference between a parameter-specific trajectory and the real past trajectory as the objective function of the optimization. Instead of optimizing the model parameters "directly", we obtain them as a weighted sum of a set of prototype parameters and optimize these weights. To show the benefits of the method, we compare the properties of our approach against state-of-the-art prediction methods for longitudinal driving, such as Constant Velocity (CV), Constant Acceleration (CA), and particle filter approaches, on an open freeway driving dataset. The evaluation shows significant improvements in several aspects: (I) prediction accuracy is significantly increased, (II) the obtained parameters exhibit fast convergence and increased temporal stability, and (III) the computational effort is reduced so that online parameter adaptation becomes feasible.
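A minimal sketch of the two ingredients named above: the standard IDM acceleration law, and IDM parameters formed as a weighted sum of prototype parameter sets, where the weights are what would be optimized online. The prototype values and weights below are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def idm_accel(v, gap, dv, p):
    """Standard Intelligent Driver Model acceleration.
    v: ego speed [m/s], gap: distance to leader [m], dv: approach rate v_ego - v_lead [m/s],
    p: dict with keys v0, T, s0, a_max, b, delta."""
    s_star = p["s0"] + v * p["T"] + v * dv / (2.0 * np.sqrt(p["a_max"] * p["b"]))
    return p["a_max"] * (1.0 - (v / p["v0"]) ** p["delta"] - (s_star / gap) ** 2)

# Prototype parameter sets, e.g. a relaxed and an aggressive driving style (assumed values).
prototypes = [
    {"v0": 33.0, "T": 1.8, "s0": 2.5, "a_max": 0.8, "b": 1.5, "delta": 4.0},
    {"v0": 36.0, "T": 1.0, "s0": 1.5, "a_max": 1.8, "b": 2.5, "delta": 4.0},
]

def blend(weights):
    """IDM parameters as a normalized weighted sum of the prototypes."""
    w = np.asarray(weights) / np.sum(weights)
    return {k: float(sum(wi * p[k] for wi, p in zip(w, prototypes))) for k in prototypes[0]}

p = blend([0.3, 0.7])                       # these weights are the online-optimized quantity
print(idm_accel(v=25.0, gap=30.0, dv=2.0, p=p))   # negative value: the blended driver brakes
```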
Citations: 1
Multi-vehicle Conflict Management with Status and Intent Sharing
Pub Date : 2022-06-05 DOI: 10.1109/iv51971.2022.9827428
Hong Wang, S. Avedisov, O. Altintas, G. Orosz
In this paper, we extend the conflict analysis framework to resolve conflicts between multiple vehicles with different levels of automation, utilizing the status-sharing and intent-sharing enabled by vehicle-to-everything (V2X) communication. In status-sharing, a connected vehicle shares its current state (e.g., position, velocity) with other connected vehicles, whereas in intent-sharing, a vehicle shares information about its future trajectory (e.g., velocity bounds). Our conflict analysis framework uses reachability theory to interpret the information contained in status-sharing and intent-sharing messages through conflict charts. These charts enable real-time decision making and control of a connected automated vehicle interacting with multiple remote connected vehicles. Using numerical simulations and real highway traffic data, we demonstrate the effectiveness of the proposed conflict resolution strategies and reveal the benefits of intent sharing in mixed-autonomy environments.
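The sketch below illustrates, in heavily simplified form, how intent-shared velocity bounds can be turned into a reachability-style conflict check: the remote vehicle's possible longitudinal positions are bounded over time, and a conflict is flagged if that interval can overlap a shared conflict zone while the ego vehicle occupies it. The message fields, conflict-zone geometry, and constant-bound approximation are illustrative assumptions, not the paper's conflict charts.

```python
import numpy as np

def position_bounds(s0, v_min, v_max, t):
    """Reachable longitudinal interval at time t, given current position s0 and
    the intent-shared velocity bounds (constant-bound approximation)."""
    return s0 + v_min * t, s0 + v_max * t

def conflict_possible(remote_s0, v_min, v_max, zone, ego_occupancy, dt=0.1):
    zone_lo, zone_hi = zone                     # conflict zone along the remote path [m]
    t_in, t_out = ego_occupancy                 # interval during which ego occupies the zone [s]
    for t in np.arange(t_in, t_out, dt):
        lo, hi = position_bounds(remote_s0, v_min, v_max, t)
        if hi >= zone_lo and lo <= zone_hi:     # reachable interval intersects the zone
            return True
    return False

# Remote vehicle 40 m upstream, intent-shared speed between 8 and 15 m/s;
# ego occupies the conflict zone between t = 2 s and t = 4 s.
print(conflict_possible(remote_s0=-40.0, v_min=8.0, v_max=15.0,
                        zone=(0.0, 6.0), ego_occupancy=(2.0, 4.0)))
```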
Citations: 6
Validating Simulation Environments for Automated Driving Systems Using 3D Object Comparison Metric
Pub Date : 2022-06-05 DOI: 10.1109/iv51971.2022.9827354
Anne Wallace, S. Khastgir, Xizhe Zhang, S. Brewerton, B. Anctil, Peter Burns, Dominique Charlebois, P. Jennings
One of the main challenges for the introduction of Automated Driving Systems (ADSs) is their verification and validation (V&V). Simulation-based testing has been widely accepted as an essential aspect of ADS V&V processes. Simulations are especially useful when exposing the ADS to challenging driving scenarios, as they offer a safe and efficient alternative to real-world testing. It is thus suggested that evidence for the safety case of an ADS will include results from both simulation and real-world testing. However, for simulation results to be trusted as part of the safety case of an ADS, it is essential to show that they are representative of the real world, which means validating the simulation platform itself. In this paper, we propose a novel methodology for validating simulation environments, focusing on comparing point cloud data from a real LiDAR sensor and a simulated LiDAR sensor model. A 3D object dissimilarity metric is proposed to compare the two maps (real and simulated) and quantify how accurate the simulation is. This metric is tested on collected LiDAR point cloud data and the simulated point cloud generated in the simulated environment.
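As a hedged illustration of the general idea of a point cloud dissimilarity measure between real and simulated LiDAR data, the symmetric Chamfer distance below is a common choice; the metric actually proposed in the paper may be different.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(real_pts, sim_pts):
    """Symmetric Chamfer distance between two (N, 3) point clouds."""
    d_real_to_sim, _ = cKDTree(sim_pts).query(real_pts)   # nearest simulated point per real point
    d_sim_to_real, _ = cKDTree(real_pts).query(sim_pts)   # nearest real point per simulated point
    return d_real_to_sim.mean() + d_sim_to_real.mean()

real = np.random.rand(1000, 3)                                    # toy "real" object points
sim = real + np.random.normal(scale=0.02, size=real.shape)        # simulated cloud with small error
print(chamfer_distance(real, sim))                                # small value -> faithful simulation
```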
Citations: 1
Vehicle simulation model chain for virtual testing of automated driving functions and systems*
Pub Date : 2022-06-05 DOI: 10.1109/iv51971.2022.9827074
R. Bartolozzi, V. Landersheim, G. Stoll, H. Holzmann, Riccardo Möller, H. Atzrodt
One of the major challenges in the testing and validation of automated vehicles is covering the enormous number of possible driving situations. Efficient and reliable simulation tools are therefore required to speed up these phases. The SET Level project aims at providing an environment for simulation-based testing and development of automated driving functions, focusing, as one of its main objectives, on providing an open, flexible, and extendable simulation environment compliant with current simulation standards such as the Functional Mock-up Interface (FMI) and the Open Simulation Interface (OSI). Within this context, the authors propose a vehicle simulation model chain including models of motion control, actuators (with actuator management), and vehicle dynamics at two different levels of detail. The models were built in Matlab/Simulink, together with an OSI wrapper developed for integration into existing simulation environments. In the paper, the simulation architecture, including the OSI wrapper and the individual models of the chain, is presented, as well as simulation results that show the potential of the presented model chain for analyses in the field of testing automated driving functions.
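To make the model-chain idea concrete, here is a minimal fixed-step co-simulation sketch chaining motion control, an actuator with simple management, and vehicle dynamics. It is a generic illustration in plain Python under simplified, assumed models and parameters, not the authors' FMI/OSI-based Matlab/Simulink implementation.

```python
DT = 0.01  # co-simulation step size [s]

def motion_control(v, v_ref):
    """Proportional speed controller producing an acceleration request."""
    return 0.5 * (v_ref - v)

def actuator(accel_request, accel_current):
    """Actuator management (limits) plus a first-order actuator lag."""
    accel_limited = max(min(accel_request, 2.0), -6.0)
    return accel_current + 0.2 * (accel_limited - accel_current)

def vehicle_dynamics(x, v, accel):
    """Point-mass longitudinal dynamics integrated with explicit Euler."""
    return x + v * DT, v + accel * DT

x, v, accel = 0.0, 0.0, 0.0
for step in range(int(10.0 / DT)):           # 10 s scenario with a 20 m/s speed reference
    request = motion_control(v, v_ref=20.0)
    accel = actuator(request, accel)
    x, v = vehicle_dynamics(x, v, accel)
print(f"position {x:.1f} m, speed {v:.1f} m/s after 10 s")
```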
Citations: 0
Combining Virtual Reality and Steer-by-Wire Systems to Validate Driver Assistance Concepts
Pub Date : 2022-06-05 DOI: 10.1109/iv51971.2022.9827282
Elliot Weiss, J. Talbot, J. C. Gerdes
Emerging driver assistance system architectures require new methods for testing and validation. For advanced driver assistance systems (ADASs) that closely blend control with the driver, it is particularly important that tests elicit natural driving behavior. To address this challenge, we present a flexible Human&Vehicle-in-the-Loop (Hu&ViL) platform that provides multisensory feedback to the driver during ADAS testing. The platform graphically renders scenarios to the driver through a virtual reality (VR) head-mounted display (HMD) while the driver operates a four-wheel steer-by-wire (SBW) vehicle, enabling testing in nominal-dynamics, low-friction, and high-speed configurations. We demonstrate the feasibility of our approach by running experiments with a novel ADAS in low-friction and highway settings on a limited proving ground. We further connect this work to a formal method for categorizing test bench configurations and demonstrate a possible progression of tests on different configurations of our platform.
Citations: 1