Cooperative perception (CP) has shown great potential for enhancing traffic safety through Vehicle-to-Everything (V2X) communications. However, its substantial communication burden makes resource-efficient CP crucial, especially since a single vehicle's onboard perception is adequate for most traffic scenarios. To reduce the resource consumption of CP, it is therefore essential to identify the traffic conditions under which CP should be applied. This paper addresses the issue by identifying when CP is necessary among road users, i.e., by evaluating whether their sensory information is adequate to ensure traffic safety. We propose a practical framework for assessing CP necessity that leverages bird's-eye-view data from roadside cameras. The framework begins with video-based object localization and tracking to determine the position and movement of each road user. Next, a stochastic motion prediction model analyzes the collision risk between each pair of road users. In parallel, a pairwise perception analysis estimates the probability of one road user perceiving another, determining whether a road user falls within a blind spot. Road users with both collision risk and potential perception blind spots are identified as requiring CP. Field tests were conducted in real-world scenarios at two complex intersections in Madison, WI, involving a diverse range of road users, including various vehicle types as well as vulnerable road users such as pedestrians and cyclists. The results demonstrate that the proposed framework effectively identifies the safety-critical scenarios that require CP in complex traffic environments. With only 0.1% of situations in our field tests requiring CP, the proposed framework can save substantial communication bandwidth and computational cost while maintaining the same level of safety. Our code and data will be made available upon acceptance of this paper.
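The pairwise decision logic described above can be illustrated with a minimal sketch. This is not the paper's implementation: the constant-velocity collision check stands in for the stochastic motion prediction model, and the perception probabilities (which would come from an occlusion and field-of-view analysis) are taken as given inputs; all names, thresholds, and parameters here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class RoadUser:
    # Position (m) and velocity (m/s) in a bird's-eye-view frame.
    x: float
    y: float
    vx: float
    vy: float

def collision_risk(a: RoadUser, b: RoadUser, horizon: float = 3.0,
                   dt: float = 0.1, danger_radius: float = 2.0) -> bool:
    """Constant-velocity stand-in for the stochastic motion prediction step:
    flag the pair as risky if their predicted positions come within
    danger_radius at any step over the prediction horizon."""
    t = 0.0
    while t <= horizon:
        dx = (a.x + a.vx * t) - (b.x + b.vx * t)
        dy = (a.y + a.vy * t) - (b.y + b.vy * t)
        if (dx * dx + dy * dy) ** 0.5 < danger_radius:
            return True
        t += dt
    return False

def likely_perceived(p_a_sees_b: float, p_b_sees_a: float,
                     threshold: float = 0.5) -> bool:
    """Pairwise perception stand-in: the pair counts as perceived if at
    least one side is likely to see the other."""
    return max(p_a_sees_b, p_b_sees_a) >= threshold

def needs_cp(a: RoadUser, b: RoadUser,
             p_a_sees_b: float, p_b_sees_a: float) -> bool:
    """CP is necessary only when the pair is both at collision risk AND
    in a likely mutual blind spot."""
    return collision_risk(a, b) and not likely_perceived(p_a_sees_b, p_b_sees_a)

# Two road users on converging paths, each occluded from the other:
a = RoadUser(0.0, 0.0, 5.0, 0.0)
b = RoadUser(10.0, 10.0, 0.0, -5.0)
print(needs_cp(a, b, p_a_sees_b=0.1, p_b_sees_a=0.2))  # True: risky and unseen
```

Because CP is triggered only when both conditions hold, most pairs (no collision course, or clear mutual visibility) are filtered out, which is what yields the large bandwidth savings reported above.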