Title: MAConAuto: Framework for Mobile-Assisted Human-in-the-Loop Automotive System
Authors: Salma Elmalaki
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/iv51971.2022.9827415
Abstract: Vehicles are becoming increasingly sensor-equipped. Collision avoidance, lane departure warning, and self-parking are examples of applications made possible by the adoption of more sensors in the automotive industry. Moreover, the driver is now equipped with sensory systems such as wearables and mobile phones. This rich sensory environment and the real-time streaming of contextual data from the vehicle make the human factor integral to the loop of computation. By integrating the human's behavior and reactions into advanced driver-assistance systems (ADAS), the vehicle becomes a more context-aware entity. Hence, we propose MAConAuto, a framework that helps design human-in-the-loop automotive systems by providing a common platform that engages the rich sensory systems in wearables and mobile phones to build context-aware applications. By personalizing context adaptation in automotive applications, MAConAuto learns the human's behavior and reactions to adapt to personalized preferences, where interventions are continuously tuned using reinforcement learning. Our general framework satisfies three main design properties: adaptability, generalizability, and conflict resolution. We show how MAConAuto can be used as a framework to design two human-centric applications, a forward collision warning system and a vehicle HVAC system, with negligible time overhead relative to the average human response time.
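The reinforcement-learning personalization loop described in the MAConAuto abstract can be sketched as a small tabular Q-learning example. The states, actions, and reward below are illustrative assumptions for a forward-collision-warning tuner, not the framework's actual design:

```python
import random

# Hypothetical driver contexts (from wearables/phone) and intervention timings.
STATES = ["drowsy", "alert", "distracted"]
ACTIONS = ["early_warning", "late_warning"]

def train_policy(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(s, x)])
        # Toy reward standing in for the driver's observed reaction:
        # drowsy drivers benefit from early warnings, others from late ones.
        r = 1.0 if (s == "drowsy") == (a == "early_warning") else -1.0
        s_next = rng.choice(STATES)
        best_next = max(q[(s_next, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
    return q

q_table = train_policy()
```

After training, the greedy action per context encodes the learned personalized preference; a real system would replace the toy reward with the driver's measured response.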
Title: BackboneAnalysis: Structured Insights into Compute Platforms from CNN Inference Latency
Authors: Frank M. Hafner, Matthias Zeller, Mark Schutera, Jochen Abhau, Julian F. P. Kooij
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/iv51971.2022.9827260
Abstract: Customizing a convolutional neural network (CNN) for a specific compute platform involves finding a Pareto-optimal trade-off between the computational complexity of the CNN and the resulting throughput in operations per second on that platform. However, existing inference performance benchmarks compare complete backbones whose CNN configurations differ in many ways, which provides no insight into how fine-grained layer design choices affect this balance. We present BackboneAnalysis, a methodology for extracting structured insights into this trade-off for a chosen target compute platform. Within a one-factor-at-a-time analysis setup, CNN architectures are systematically varied and evaluated based on throughput and latency measurements, irrespective of model accuracy. We thereby investigate the configuration factors input shape, batch size, kernel size, and convolutional layer type. In our experiments, we deploy BackboneAnalysis on a Xavier iGPU and a Coral Edge TPU accelerator. The analysis reveals that the general assumption from optimal Roofline performance, that higher operation density in CNNs leads to higher throughput, does not always hold. These results highlight how important it is for a neural network architect to be aware of platform-specific latency and throughput behavior in order to derive sensible configuration decisions for a custom CNN.
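The one-factor-at-a-time measurement setup can be sketched as follows. A NumPy matrix product stands in for a convolutional layer here; on the actual targets (Xavier iGPU, Coral Edge TPU) the timed body would run the compiled layer instead:

```python
import time
import numpy as np

def measure_latency(fn, warmup=3, repeats=10):
    """Average wall-clock latency of fn, after warm-up runs."""
    for _ in range(warmup):           # warm caches and lazy allocations first
        fn()
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

# Sweep ONE factor (here: input size) while every other factor stays fixed.
results = {}
for size in (64, 128, 256):
    a = np.random.rand(size, size).astype(np.float32)
    results[size] = measure_latency(lambda: a @ a)
```

The key property of the setup is that each sweep isolates a single configuration factor, so latency differences can be attributed to that factor alone.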
Title: What Can be Seen is What You Get: Structure Aware Point Cloud Augmentation
Authors: Frederik Hasecke, Martin Alsfasser, A. Kummert
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/IV51971.2022.9827116
Abstract: To train a well-performing neural network for semantic segmentation, it is crucial to have a large dataset with available ground truth so that the network generalizes to unseen data. In this paper we present novel point cloud augmentation methods to artificially diversify a dataset. Our sensor-centric methods keep the data structure consistent with the lidar sensor's capabilities. These new methods allow us to enrich low-value data with high-value instances, as well as create entirely new scenes. We validate our methods on multiple neural networks with the public SemanticKITTI [3] dataset and demonstrate that all networks improve over their respective baselines. In addition, we show that our methods enable the use of very small datasets, saving annotation time, training time, and the associated costs.
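One way to keep augmented data consistent with lidar sensing, as the title suggests, is to respect occlusion when inserting an object into a scan. The range-image sketch below is our assumption of what such a check can look like; the paper's exact procedure may differ:

```python
import numpy as np

def insert_instance(scan_ranges, obj_ranges):
    """Merge an object's range image into a scene range image,
    keeping only the object points the sensor could actually see."""
    scan = scan_ranges.copy()
    visible = obj_ranges < scan           # object return closer than the scene
    scan[visible] = obj_ranges[visible]   # object occludes the background there
    return scan, visible

scene = np.array([5.0, 6.0, 2.0, np.inf])   # np.inf = no return in that beam
obj = np.array([np.inf, 4.0, 3.0, 4.5])
merged, mask = insert_instance(scene, obj)
# beam 2: an existing return at 2.0 m sits in front of the object point at
# 3.0 m, so that object point stays occluded and is dropped
```

Only beams where the inserted object is nearer than the existing return adopt the object's range, so the augmented scan remains physically plausible for the sensor.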
Title: Systematization and Identification of Triggering Conditions: A Preliminary Step for Efficient Testing of Autonomous Vehicles
Authors: Zhijing Zhu, Robin Philipp, Constanze Hungar, Falk Howar
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/iv51971.2022.9827238
Abstract: To achieve safety in high-level automated driving, not only must functional failures such as E/E system malfunctions and software crashes be excluded, but functional insufficiencies and performance limitations such as sensor resolution must also be thoroughly investigated and considered. The former problem is known as functional safety (FuSa) and is addressed by ISO 26262. The latter focuses on safe vehicle behavior and is summarized as safety of the intended functionality (SOTIF) within ISO 21448, a standard currently under development. To realize this level of safety, it is crucial to understand the system and the triggering conditions that activate its existing functional insufficiencies. However, the concept of a triggering condition is new and relevant research is still lacking. In this paper, we interpret triggering conditions and other SOTIF-relevant terms in the scope of ISO 21448. We summarize formal formulations of triggering conditions based on several key principles and provide possible categories to facilitate their systematization. We contribute a novel method for the identification of triggering conditions and compare it with two other proposed methods regarding diverse aspects. Furthermore, we show that our method requires less insight into the system and less brainstorming effort, and provides well-structured and distinctly formulated triggering conditions.
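A systematized triggering condition could be represented as a small structured record. The fields and the example category below are assumptions based on the abstract, not the paper's actual taxonomy:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriggeringCondition:
    description: str     # distinctly formulated condition
    category: str        # hypothetical category label, e.g. "environmental"
    insufficiency: str   # the functional insufficiency it activates

tc = TriggeringCondition(
    description="low sun angle directly ahead of the ego vehicle",
    category="environmental",
    insufficiency="camera over-exposure degrades lane detection",
)
```

Structuring conditions this way makes them easy to deduplicate, group by category, and feed into scenario-based test generation.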
Title: SAN: Scene Anchor Networks for Joint Action-Space Prediction
Authors: Faris Janjos, Maxim Dolgov, Muhamed Kuric, Yinzhe Shen, J. M. Zöllner
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/iv51971.2022.9827239
Abstract: In this work, we present a novel multi-modal trajectory prediction architecture. We decompose the uncertainty of future trajectories along higher-level scene characteristics and lower-level motion characteristics, and model multi-modality along each dimension separately. Scene uncertainty is captured jointly, where diversity of scene modes is ensured by training multiple separate anchor networks that specialize to different scene realizations. At the same time, each network outputs multiple trajectories that cover smaller deviations within a given scene mode, thus capturing motion modes. In addition, we train our architectures with an outlier-robust regression loss function, which offers a trade-off between the outlier-sensitive L2 and outlier-insensitive L1 losses. Our scene anchor model improves on the state of the art on the INTERACTION dataset, outperforming the StarNet architecture from our previous work.
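The abstract does not name the specific outlier-robust loss; the Huber loss is the standard example of the L2/L1 trade-off it describes, quadratic (L2-like) for small residuals and linear (L1-like, outlier-insensitive) beyond a threshold delta:

```python
import numpy as np

def huber_loss(residual, delta=1.0):
    """Huber loss: blends L2 behavior near zero with L1 behavior in the tails."""
    r = np.abs(residual)
    quadratic = 0.5 * r ** 2              # L2-like inside the threshold
    linear = delta * (r - 0.5 * delta)    # L1-like beyond it, so outliers
    return np.where(r <= delta, quadratic, linear)

losses = huber_loss(np.array([0.5, 3.0]))
# residual 0.5 -> 0.125 (quadratic regime); residual 3.0 -> 2.5 (linear regime)
```

The threshold delta tunes the trade-off: small delta approaches an L1 loss, large delta approaches L2.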
Title: Fast online parameter estimation of the Intelligent Driver Model for trajectory prediction
Authors: Karsten Kreutz, J. Eggert
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/iv51971.2022.9827115
Abstract: In this paper, we propose and analyze a method for trajectory prediction in longitudinal car-following scenarios. The prediction is realized by a longitudinal car-following model (the Intelligent Driver Model, IDM) with online-estimated parameters. Previous work has shown that online IDM parameter adaptation is possible but difficult and slow, providing only a small improvement in prediction quality over, e.g., constant velocity or constant acceleration baseline models. In our approach (Online IDM, OIDM), we use the difference between a parameter-specific trajectory and the real past trajectory as the objective function of the optimization. Instead of optimizing the model parameters directly, we obtain them as a weighted sum of a set of prototype parameters and optimize those weights. To show the benefits of the method, we compare our approach against state-of-the-art prediction methods for longitudinal driving, such as constant velocity (CV), constant acceleration (CA), and particle filter approaches, on an open freeway driving dataset. The evaluation shows significant improvements in several respects: (I) prediction accuracy is significantly increased, (II) the obtained parameters exhibit fast convergence and increased temporal stability, and (III) the computational effort is reduced so that online parameter adaptation becomes feasible.
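The underlying model is the standard Intelligent Driver Model (Treiber et al.), and the OIDM idea of blending prototype parameter sets can be sketched directly. The parameter values below are illustrative defaults, not fitted values from the paper:

```python
import math

def idm_acceleration(v, dv, s, v0=30.0, T=1.5, s0=2.0, a_max=1.5, b=2.0, delta=4):
    """Standard IDM acceleration: v = own speed, dv = approach rate to the
    leader, s = gap to the leader (SI units, illustrative parameters)."""
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))  # desired gap
    return a_max * (1.0 - (v / v0) ** delta - (s_star / s) ** 2)

def blend_parameters(prototypes, weights):
    """OIDM-style parameters as a weighted sum of prototype parameter sets;
    the optimization then tunes the weights rather than the raw parameters."""
    return {k: sum(w * p[k] for w, p in zip(weights, prototypes))
            for k in prototypes[0]}

params = blend_parameters([{"T": 1.0}, {"T": 2.0}], [0.25, 0.75])  # {"T": 1.75}
```

Optimizing a handful of prototype weights instead of the full IDM parameter vector shrinks the search space, which is consistent with the fast convergence reported above.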
Title: Multi-vehicle Conflict Management with Status and Intent Sharing
Authors: Hong Wang, S. Avedisov, O. Altintas, G. Orosz
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/iv51971.2022.9827428
Abstract: In this paper, we extend the conflict analysis framework to resolve conflicts between multiple vehicles with different levels of automation, utilizing status sharing and intent sharing enabled by vehicle-to-everything (V2X) communication. In status sharing, a connected vehicle shares its current state (e.g., position, velocity) with other connected vehicles, whereas in intent sharing a vehicle shares information about its future trajectory (e.g., velocity bounds). Our conflict analysis framework uses reachability theory to interpret the information contained in status-sharing and intent-sharing messages through conflict charts. These charts enable real-time decision making and control of a connected automated vehicle interacting with multiple remote connected vehicles. Using numerical simulations and real highway traffic data, we demonstrate the effectiveness of the proposed conflict resolution strategies and reveal the benefits of intent sharing in mixed-autonomy environments.
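A minimal sketch of the reachability idea: with intent-shared velocity bounds, each vehicle's future longitudinal position lies in an interval, and overlapping intervals flag a potential conflict. This one-dimensional check is a deliberately simplified illustration; the paper's conflict charts are richer than this:

```python
def reachable_interval(x0, v_min, v_max, t):
    """Positions reachable along the lane at time t given velocity bounds."""
    return (x0 + v_min * t, x0 + v_max * t)

def in_conflict(iv_a, iv_b, gap=5.0):
    """Potential conflict if the reachable sets come within a safety gap."""
    return iv_a[0] - gap < iv_b[1] and iv_b[0] - gap < iv_a[1]

ego = reachable_interval(0.0, 20.0, 30.0, t=2.0)      # (40.0, 60.0)
remote = reachable_interval(70.0, 10.0, 15.0, t=2.0)  # (90.0, 100.0)
# the intent-shared velocity bounds keep the remote vehicle's reachable set
# well ahead of the ego's, so no conflict is flagged at t = 2 s
```

The benefit of intent sharing shows up directly: tighter shared velocity bounds shrink the remote vehicle's interval, ruling out conflicts that status sharing alone could not exclude.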
Title: Validating Simulation Environments for Automated Driving Systems Using 3D Object Comparison Metric
Authors: Anne Wallace, S. Khastgir, Xizhe Zhang, S. Brewerton, B. Anctil, Peter Burns, Dominique Charlebois, P. Jennings
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/iv51971.2022.9827354
Abstract: One of the main challenges for the introduction of Automated Driving Systems (ADSs) is their verification and validation (V&V). Simulation-based testing has been widely accepted as an essential part of the ADS V&V process. Simulations are especially useful for exposing the ADS to challenging driving scenarios, as they offer a safe and efficient alternative to real-world testing. Evidence for the safety case of an ADS is thus expected to include results from both simulation and real-world testing. However, for simulation results to be trusted as part of the safety case of an ADS, it is essential to prove that they are representative of the real world, thus validating the simulation platform itself. In this paper, we propose a novel methodology for validating simulation environments that focuses on comparing point cloud data from a real LiDAR sensor and a simulated LiDAR sensor model. A 3D object dissimilarity metric is proposed to compare the two maps (real and simulated) and quantify how accurate the simulation is. This metric is tested on collected LiDAR point cloud data and the corresponding point cloud generated in the simulated environment.
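The abstract does not spell out the paper's dissimilarity metric; the symmetric Chamfer distance is a common choice for comparing a real and a simulated point cloud and serves as an illustrative stand-in here:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between two (N, 3) point clouds: mean
    nearest-neighbor distance from p to q plus the same from q to p."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

real = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
sim = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.1]])
# every simulated point is 0.1 m off its real counterpart -> distance 0.2
```

Identical clouds score zero, and the score grows with geometric disagreement, which is the behavior a simulation-fidelity metric needs.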
Title: Vehicle simulation model chain for virtual testing of automated driving functions and systems*
Authors: R. Bartolozzi, V. Landersheim, G. Stoll, H. Holzmann, Riccardo Möller, H. Atzrodt
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/iv51971.2022.9827074
Abstract: One of the major challenges in testing and validating automated vehicles is covering the enormous number of possible driving situations. Efficient and reliable simulation tools are therefore required to speed up these phases. The SET Level project aims to provide an environment for simulation-based testing and development of automated driving functions, focusing, as one of its main objectives, on providing an open, flexible, and extendable simulation environment compliant with current simulation standards such as the Functional Mock-up Interface (FMI) and the Open Simulation Interface (OSI). Within this context, the authors propose a vehicle simulation model chain comprising models of motion control, actuators (with actuator management), and vehicle dynamics at two different levels of detail. The models were built in Matlab/Simulink, including a developed OSI wrapper for integration into existing simulation environments. The paper presents the simulation architecture, including the OSI wrapper and the individual models of the chain, as well as simulation results showing the potential of the model chain for analyses in the field of testing automated driving functions.
Title: Combining Virtual Reality and Steer-by-Wire Systems to Validate Driver Assistance Concepts
Authors: Elliot Weiss, J. Talbot, J. C. Gerdes
Published in: 2022 IEEE Intelligent Vehicles Symposium (IV) | Pub Date: 2022-06-05 | DOI: 10.1109/iv51971.2022.9827282
Abstract: Emerging driver assistance system architectures require new methods for testing and validation. For advanced driver assistance systems (ADASs) that closely blend control with the driver, it is particularly important that tests elicit natural driving behavior. To address this challenge, we present a flexible Human&Vehicle-in-the-Loop (Hu&ViL) platform that provides multisensory feedback to the driver during ADAS testing. This platform, which graphically renders scenarios to the driver through a virtual reality (VR) head-mounted display (HMD) while they operate a four-wheel steer-by-wire (SBW) vehicle, enables testing in nominal-dynamics, low-friction, and high-speed configurations. We demonstrate the feasibility of our approach by running experiments with a novel ADAS in low-friction and highway settings on a limited proving ground. We further connect this work to a formal method for categorizing test bench configurations and demonstrate a possible progression of tests across different configurations of our platform.