The introduction of highly automated vehicles (HAVs) promises safer, more efficient, and more inclusive mobility, but it also challenges pedestrian-vehicle communication. Without human drivers providing explicit cues, pedestrians must rely on other signals for safe interactions. External human-machine interfaces (eHMIs) show potential, yet their performance in complex scenarios remains insufficiently researched. In particular, research on the combined use of multimodal eHMIs, such as vehicle-mounted LEDs and wearable Augmented Reality (AR), is scarce, despite their potential to offer both universal visibility and personalized, context-sensitive feedback.
This study examined the impact of light-based and AR-based eHMIs, both individually and in combination, on pedestrian-HAV interaction in a Virtual Reality environment. Forty participants encountered HAVs approaching from both sides of a shared space. Vehicles employed one of four communication strategies: no eHMI, communication of the HAV's intention via a 360° LED light band, communication via AR, or a novel multimodal setup combining LED and AR. Each strategy was combined with four different vehicle kinematic profiles. Objective and subjective measures, including crossing initiation time, perceived safety, mental workload, understandability, and predictability, were collected and analyzed using repeated-measures ANOVAs with Bonferroni-corrected post-hoc tests.
Communicating the HAV's intention via the 360° LED band or AR significantly improved crossing initiation time, perceived safety, and understandability, and reduced mental workload compared to no eHMI. Moreover, the AR condition outperformed the LED condition, while the combined LED+AR interface preserved these benefits without increasing mental workload. Although yielding patterns influenced participants' behavior, the benefits of the eHMIs remained stable across traffic scenarios.
These findings demonstrate AR's potential to enhance eHMI effectiveness and highlight the added value of a multimodal design. LED+AR combinations may guide the development of inclusive, intuitive, and context-sensitive eHMIs, ultimately supporting confident pedestrian interaction in future automated urban environments.