Pub Date: 2023-12-27, DOI: 10.1109/mits.2023.3342308
Yan Tong, Licheng Wen, Pinlong Cai, Daocheng Fu, Song Mao, Botian Shi, Yikang Li
With the commercial application of automated vehicles (AVs), the sharing of roads between AVs and human-driven vehicles (HVs) will become a common occurrence in the future. While research has focused on improving the safety and reliability of autonomous driving, it’s also crucial to consider collaboration between AVs and HVs. Human-like interaction is a required capability for AVs, especially at common unsignalized intersections, as human drivers of HVs expect to maintain their driving habits for intervehicle interactions. This article uses the social value orientation (SVO) in the decision making of vehicles to describe the social interaction among multiple vehicles. Specifically, we define the quantitative calculation of the conflict-involved SVO at unsignalized intersections to enhance decision making based on the reinforcement learning method. We use naturalistic driving scenarios with highly interactive motions for the performance evaluation of the proposed method. The experimental results show that SVO is more effective in characterizing intervehicle interactions than conventional motion-state parameters like velocity, and the proposed method can accurately reproduce naturalistic driving trajectories compared to behavior cloning.
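The abstract does not spell out its "quantitative calculation of the conflict-involved SVO." A common quantitative formulation of social value orientation in driving research (a minimal sketch of the general idea, not necessarily the article's exact definition) treats SVO as an angle that blends the ego vehicle's reward with another vehicle's reward:

```python
import math

def svo_utility(reward_self: float, reward_other: float, svo_angle: float) -> float:
    """Blend ego and other-vehicle rewards by an SVO angle (radians).

    svo_angle = 0    -> purely egoistic (own reward only)
    svo_angle = pi/4 -> cooperative (equal weight)
    svo_angle = pi/2 -> purely altruistic (other's reward only)
    """
    return math.cos(svo_angle) * reward_self + math.sin(svo_angle) * reward_other

def estimate_svo(reward_self: float, reward_other: float) -> float:
    """Recover an SVO angle from an observed reward trade-off."""
    return math.atan2(reward_other, reward_self)

# An egoistic driver (angle 0) ignores the other vehicle's reward entirely:
ego = svo_utility(1.0, 5.0, 0.0)        # -> 1.0
coop = svo_utility(1.0, 1.0, math.pi/4)  # cooperative weighting of both rewards
```

In an RL setting such as the one the abstract describes, a utility like this could serve as the reward signal, so that different SVO angles produce different interaction styles at the intersection.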
Title: Human-Like Decision Making at Unsignalized Intersections Using Social Value Orientation (IEEE Intelligent Transportation Systems Magazine)
Affective human–vehicle interaction in intelligent cockpits is a key factor affecting the acceptance of, trust in, and experience of intelligent connected vehicles. Driver emotion detection is the premise of realizing affective human–machine interaction. To achieve accurate and robust driver emotion detection, we propose a novel brain-inspired framework for on-road driver emotion detection using facial expressions. We then conduct driver emotion data collection in an on-road context. We develop a data annotation tool, annotate the collected data, and obtain the RoadEmo dataset, a dataset of facial expressions and road scenarios captured during emotional driving. Finally, we validate the detection accuracy of the proposed framework. The experimental results show that our proposed framework achieves excellent detection performance in the on-road driver emotion detection task and outperforms existing frameworks.
Title: Brain-Inspired Driver Emotion Detection for Intelligent Cockpits Based on Real Driving Data (IEEE Intelligent Transportation Systems Magazine)
Authors: Wenbo Li, Yingzhang Wu, Huafei Xiao, Shen Li, Ruichen Tan, Zejian Deng, Wen Hu, Dongpu Cao, Gang Guo
Pub Date: 2023-12-22, DOI: 10.1109/mits.2023.3339758
Pub Date: 2023-12-21, DOI: 10.1109/mits.2023.3335126
Vitaly G. Stepanyants, Aleksandr Y. Romanov
Automated and connected vehicles are emerging in the market. Currently, solutions are being proposed to use these technologies for cooperative driving, which can significantly improve road safety. Vehicular safety applications must be tested before deployment. It is challenging to verify and validate them in the real world. Therefore, simulation is used for this purpose. Modeling this technology necessitates coupled use of traffic flow, vehicle dynamics, and communication network simulators. State-of-the-art tools exist in these domains; however, they are difficult to integrate or lack full domain coverage. This article analyzes the requirements for an integrated connected and automated vehicle simulation environment for simulating vehicular cooperative driving automation with consideration of surrounding objects’ influence. For this purpose, we have assessed the existing challenges and practices. Vehicular simulation tools, signal propagation, and cooperative perception models are reviewed and analyzed. In our review, we focus mainly on autonomous driving simulators with 3D graphical environments as they have not yet been assessed for cooperative driving task fitness. Further, the current state of connected and automated vehicle simulation studies using these tools is surveyed, including single-tool and co-simulation approaches. We discuss the shortcomings of existing methods and propose an architecture for an integrated simulation environment (ISE) with full domain coverage using open source tools. The obtained conclusions can be further used in the development of connected and automated vehicle ISEs.
Title: A Survey of Integrated Simulation Environments for Connected Automated Vehicles: Requirements, Tools, and Architecture (IEEE Intelligent Transportation Systems Magazine)
Pub Date: 2023-12-13, DOI: 10.1109/mits.2023.3331817
Zhuping Zhou, Bowen Liu, Changji Yuan, Ping Zhang
Predicting pedestrian crossing trajectories has become a primary task in aiding autonomous vehicles to assess risks in pedestrian–vehicle interactions. As agile participants with changeable behavior, pedestrians are often capable of choosing from multiple possible crossing trajectories. Current research lacks the ability to predict multimodal trajectories with interpretability, and it also struggles to capture low-probability trajectories effectively. Addressing this gap, this article proposes a multimodal trajectory prediction model that operates by first estimating potential motion trends to prompt the generation of corresponding trajectories. It encompasses three sequential stages. First, pedestrian motion characteristics are analyzed, and prior knowledge of pedestrian motion states is obtained using the Gaussian mixture clustering method. Second, a long short-term memory model is employed to predict future pedestrian motion states, utilizing the acquired prior knowledge as input. Finally, the predicted motion states are discretized into various potential motion patterns, which are then introduced as prompts to the Spatio-Temporal Graph Transformer model for trajectory prediction. Experimental results on the Euro-PVI and BPI datasets demonstrate that the proposed model achieves cutting-edge performance in predicting pedestrian crossing trajectories. Notably, it significantly enhances the diversity, accuracy, and interpretability of pedestrian crossing trajectory predictions.
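The first of the three stages above, extracting prior knowledge of pedestrian motion states via Gaussian mixture clustering, can be sketched as follows. The feature choice (speed and heading change) and the three-state split are illustrative assumptions, not details given in the abstract:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical per-frame motion features: [speed (m/s), heading change (rad)].
# Synthetic samples stand in for real pedestrian tracks.
rng = np.random.default_rng(0)
standing = rng.normal([0.1, 0.0], [0.05, 0.02], size=(100, 2))
walking = rng.normal([1.4, 0.0], [0.20, 0.10], size=(100, 2))
running = rng.normal([3.0, 0.0], [0.30, 0.10], size=(100, 2))
features = np.vstack([standing, walking, running])

# Stage 1: fit a Gaussian mixture over motion features; the per-sample
# cluster posteriors act as the "prior knowledge" of motion state that
# the downstream LSTM would consume as input.
gmm = GaussianMixture(n_components=3, random_state=0).fit(features)
state_priors = gmm.predict_proba(features)  # shape (300, 3), rows sum to 1
```

The posteriors (rather than hard cluster labels) preserve uncertainty about the motion state, which matches the abstract's later step of discretizing predicted states into multiple potential motion patterns.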
Title: A Multimodal Trajectory Prediction Method for Pedestrian Crossing Considering Pedestrian Motion State (IEEE Intelligent Transportation Systems Magazine)
Pub Date: 2023-12-08, DOI: 10.1109/mits.2023.3334769
Shuyi Wang, Yang Ma, Said M. Easa, Hao Zhou, Yuanwen Lai, Weijie Chen
Most existing road infrastructure was constructed before the emergence of automated vehicles (AVs), without considering their operational needs. Whether and how AVs can safely adapt to as-built highway geometry remains inconclusive, and a plausible concern is the challenge posed by vertical alignments. To fill this gap, this study uses a virtual simulation to investigate the available sight distance (ASD) of AVs on vertical alignments subject to the current highway geometric design specification, and its implications for speed limits. Following the scenario generation framework, several scenarios featuring vertical geometric elements and lidar sensors were created and tested. Moreover, the maximum speed for adequate ASD is calculated to determine the AV speed limit, considering safe sight distance and speed consistency requirements. The results indicate that crest curves are not disadvantaged in ASD compared with either sag curves or tangent grades. Only when equipped with multichannel lidar and advanced perception algorithms enabling a lower detection threshold would a level 4 AV be compatible with the as-built vertical alignment with a design speed ( V
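The abstract evaluates ASD by lidar simulation; for context, a closed-form geometric baseline for sight distance over a crest vertical curve is given by the standard highway-design relation L = A*S^2 / (200*(sqrt(h1)+sqrt(h2))^2) for the case S <= L, solved here for S. The numeric inputs below (sensor height, object height, curve parameters) are illustrative assumptions, not values from the article:

```python
import math

def crest_asd(curve_len_m: float, grade_diff_pct: float,
              sensor_h_m: float, object_h_m: float) -> float:
    """Available sight distance S over a crest vertical curve (case S <= L).

    curve_len_m:    length L of the vertical curve (m)
    grade_diff_pct: algebraic difference of grades A (percent)
    sensor_h_m:     height of the eye/sensor above the road (m)
    object_h_m:     height of the object to be detected (m)
    """
    c = 200.0 * (math.sqrt(sensor_h_m) + math.sqrt(object_h_m)) ** 2
    s = math.sqrt(curve_len_m * c / grade_diff_pct)
    if s > curve_len_m:
        # The sight line leaves the curve; the S > L branch of the
        # formula applies instead.
        raise ValueError("S > L; use the S > L branch of the formula")
    return s

# Illustrative case: 400-m crest curve, 6% grade difference, a roof-mounted
# lidar at 1.8 m, and a detection-threshold object height of 0.3 m.
asd = crest_asd(400.0, 6.0, 1.8, 0.3)
```

Lowering either the sensor height or raising the detectable-object threshold shrinks the ASD, which is consistent with the abstract's conclusion that sensor configuration and perception detection thresholds govern whether an AV is compatible with as-built vertical alignments.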