The Intelligent Transportation System (ITS) continuously generates data that must be processed under strict latency and connectivity constraints across a heterogeneous computing architecture (e.g., Vehicular Edge Computing (VEC), Mobile Edge Computing (MEC), and Cloud Computing (CC)). In this context, efficient task offloading requires mobility- and server-aware intelligence to optimize communication delay, cost, and resource utilization. In this paper, we propose a mobility-aware Q-learning offloading scheduler that learns optimal tier selection from real-time metrics (e.g., resource availability, signal strength, and Base Station (BS) handover dynamics). Unlike previous investigations, this approach explicitly incorporates vehicle mobility patterns into the offloading decision using Q-learning. The scheduler favors VEC when it is underutilized, transitions to MEC when VEC is overutilized, and falls back to the cloud only when both VEC and MEC are infeasible. A structured reward model reinforces decisions that improve resource efficiency and penalizes excessive switching or the skipping of underutilized resources. The proposed framework is evaluated using DriveNetSim, a custom-developed vehicular simulator that models realistic mobility, signal degradation, and BS switching. Simulation results show a strong preference for VEC, with shifts to MEC only under VEC over-utilization and minimal reliance on the cloud. As a result, the system achieves up to a 43% reduction in transmission delay and a 38% reduction in processing cost, validating its effectiveness in dynamic vehicular environments.
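To make the scheduling logic concrete, the following is a minimal sketch of a tabular Q-learning tier selector with a shaped reward that favors underutilized VEC, falls back to MEC under VEC saturation, uses the cloud last, and penalizes excessive switching. The tier names follow the abstract; the discretized state features (VEC/MEC load, signal level), hyperparameters, and reward weights are illustrative assumptions and are not taken from the paper or from DriveNetSim.

```python
import random
from collections import defaultdict

# Hypothetical action and state spaces; the paper's actual encoding in
# DriveNetSim may differ.
TIERS = ["VEC", "MEC", "CC"]           # offloading targets (actions)
LOAD_LEVELS = ["low", "high"]          # coarse utilization buckets
SIGNAL_LEVELS = ["weak", "strong"]     # coarse BS signal buckets

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning hyperparameters

Q = defaultdict(float)                  # Q[(state, action)] -> value

def reward(state, action, prev_action):
    """Structured reward: prefer underutilized VEC, shift to MEC when VEC
    saturates, use the cloud only as a last resort; penalize tier switching
    and the skipping of idle edge capacity (weights are illustrative)."""
    vec_load, mec_load, signal = state
    r = 0.0
    if action == "VEC":
        r += 1.0 if vec_load == "low" else -1.0   # reward only if VEC has headroom
    elif action == "MEC":
        r += 1.0 if (vec_load == "high" and mec_load == "low") else -0.5
        if vec_load == "low":
            r -= 1.0                              # penalty for skipping idle VEC
    else:  # CC
        r += 0.5 if (vec_load == "high" and mec_load == "high") else -1.0
    if signal == "weak" and action != "VEC":
        r -= 0.5                                  # weak BS link raises offload cost
    if prev_action is not None and action != prev_action:
        r -= 0.2                                  # discourage excessive switching
    return r

def choose(state):
    """Epsilon-greedy action selection over the three tiers."""
    if random.random() < EPSILON:
        return random.choice(TIERS)
    return max(TIERS, key=lambda a: Q[(state, a)])

def step(state, next_state, prev_action):
    """One Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)."""
    action = choose(state)
    r = reward(state, action, prev_action)
    best_next = max(Q[(next_state, a)] for a in TIERS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    return action

# Toy episode over randomly drifting load/signal states (stand-in for the
# mobility and BS-switching dynamics a vehicular simulator would provide).
prev = None
state = ("low", "low", "strong")
for _ in range(10_000):
    nxt = (random.choice(LOAD_LEVELS), random.choice(LOAD_LEVELS),
           random.choice(SIGNAL_LEVELS))
    prev = step(state, nxt, prev)
    state = nxt

# With VEC underutilized, the learned policy should select VEC.
print(max(TIERS, key=lambda a: Q[(("low", "low", "strong"), a)]))
```

A tabular representation suffices here because the discretized state space is tiny; a deployment over richer mobility and handover features would likely require function approximation.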