Federated Multiagent Reinforcement Learning for Resource Allocation in NR-V2X Mode 2

Malik Muhammad Saad; Muhammad Ashar Tariq; Mahnoor Ajmal; Dongkyun Kim; Gautam Srivastava

IEEE Internet of Things Journal, vol. 12, no. 13, pp. 23402-23417. Published 2025-03-26.
DOI: 10.1109/JIOT.2025.3555195
https://ieeexplore.ieee.org/document/10938993/
Citations: 0
Abstract
The Third Generation Partnership Project (3GPP) introduced cellular vehicle-to-everything (C-V2X) for vehicular communications. In the standard, C-V2X Mode 4 is defined for distributed resource selection. Subsequently, in 3GPP Release 16, NR-V2X was introduced with Mode 1 and Mode 2 for vehicular communications. Like C-V2X Mode 4, NR-V2X Mode 2 is used for decentralized resource scheduling: vehicles select resources based on their local observations using semi-persistent scheduling (SPS). Because vehicles select resources from local observations alone, the sensing-based nature of SPS is challenged by the hidden-node problem, which leads to resource conflicts. To resolve this contention, 3GPP also introduced the physical sidelink feedback channel (PSFCH) to assist distributed resource scheduling with receiver feedback; however, this incurs signaling overhead. In this work, federated learning is exploited for distributed offline training, and distributed multiagent resource scheduling is performed following the principles of NR-V2X Mode 2. Distributed training improves model accuracy by accommodating the varying effects of the environment caused by high vehicular mobility. Simulations are conducted by integrating SUMO with the 3GPP NR-V2X standard. Performance results demonstrate a substantial improvement over other deep learning methods that employ centralized training and random resource selection. This research marks a significant stride toward efficient and conflict-resilient resource allocation in vehicular communications.
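The overall scheme the abstract describes — each vehicle training a local policy on its own sensing observations, with a federated averaging step producing a shared model — can be illustrated with a minimal sketch. This is not the authors' implementation: the linear Q-approximator, the per-resource sensing state, the reward rule, and all constants (`NUM_AGENTS`, `NUM_RESOURCES`, the occupancy threshold) are illustrative assumptions standing in for the paper's DQN-based agents and NR-V2X channel model.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_AGENTS = 4             # vehicles acting as learning agents (illustrative)
NUM_RESOURCES = 10         # candidate sidelink resources in the selection window
STATE_DIM = NUM_RESOURCES  # one locally sensed metric per resource (e.g. RSSI)

def init_model():
    # A linear Q-approximator: Q(s) = s @ W + b (stand-in for a deep Q-network).
    return {"W": rng.normal(0, 0.1, (STATE_DIM, NUM_RESOURCES)),
            "b": np.zeros(NUM_RESOURCES)}

def select_resource(model, state, epsilon=0.1):
    # Epsilon-greedy choice over candidate resources; in Mode 2 each vehicle
    # picks autonomously from its locally sensed selection window.
    if rng.random() < epsilon:
        return int(rng.integers(NUM_RESOURCES))
    q = state @ model["W"] + model["b"]
    return int(np.argmax(q))

def local_update(model, state, action, reward, lr=0.01):
    # One-step gradient update of the chosen action's Q-value toward the reward.
    q = state @ model["W"] + model["b"]
    td_error = reward - q[action]
    model["W"][:, action] += lr * td_error * state
    model["b"][action] += lr * td_error
    return model

def fed_avg(models):
    # FedAvg aggregation: element-wise mean of the agents' parameters.
    return {k: np.mean([m[k] for m in models], axis=0) for k in models[0]}

# One federated round: every agent trains offline on local observations,
# then the server averages the parameters into a shared global model.
models = [init_model() for _ in range(NUM_AGENTS)]
for m in models:
    for _ in range(20):
        state = rng.random(STATE_DIM)      # local sensing observation
        action = select_resource(m, state)
        # Hypothetical reward: positive if the chosen resource looks idle.
        reward = 1.0 if state[action] < 0.5 else -1.0
        local_update(m, state, action, reward)

global_model = fed_avg(models)  # broadcast back to vehicles for the next round
```

Only model parameters cross the network in the averaging step; raw sensing observations stay on each vehicle, which is the property that lets federated training avoid the PSFCH-style signaling overhead the abstract mentions.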
Journal Introduction:
The IEEE Internet of Things (IoT) Journal publishes articles and review articles covering various aspects of IoT, including IoT system architecture, IoT enabling technologies, IoT communication and networking protocols such as network coding, and IoT services and applications. Topics encompass IoT's impact on sensor technologies, big data management, and future Internet design for applications such as smart cities and smart homes. Fields of interest include: IoT architecture, such as things-centric, data-centric, and service-oriented IoT architecture; IoT enabling technologies and systematic integration, such as sensor technologies, big sensor data management, and future Internet design for IoT; IoT services, applications, and test-beds, such as IoT service middleware, IoT application programming interfaces (APIs), IoT application design, and IoT trials/experiments; and IoT standardization activities and technology development in different standards development organizations (SDOs), such as IEEE, IETF, ITU, 3GPP, and ETSI.