Authors: Vladimir R. de Lima, Marcello L.R. de Campos
DOI: 10.1016/j.vehcom.2024.100806
Journal: Vehicular Communications, vol. 49, Article 100806 (JCR Q1, Telecommunications; Impact Factor 5.8)
Published: 2024-06-04
URL: https://www.sciencedirect.com/science/article/pii/S2214209624000810
Fully distributed multi-agent processing strategy applied to vehicular networks
This work explores distributed processing techniques, together with recent advances in multi-agent reinforcement learning (MARL), to implement a fully decentralized reward and decision-making scheme that efficiently allocates resources (spectrum and power). The method targets processes with strong dynamics and stringent requirements, such as cellular vehicle-to-everything (C-V2X) networks. In our approach, the C-V2X network is seen as a strongly connected network of intelligent agents that adopt a distributed reward scheme in a cooperative, decentralized manner, taking their channel conditions and selected actions into account in order to achieve their goals cooperatively. The simulation results demonstrate the effectiveness of the developed algorithm, named Distributed Multi-Agent Reinforcement Learning (DMARL), which achieves performance very close to that of a centralized reward design without the limitations and vulnerabilities inherent in a fully or partially centralized solution.
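The abstract's central idea is that agents on a strongly connected network can replace a centralized reward signal by exchanging reward information with their neighbors. A minimal sketch of that principle, using classical consensus averaging, is shown below. This is an illustration under stated assumptions, not the authors' DMARL algorithm: the ring topology, the mixing matrix `W`, and the `consensus_average` helper are all hypothetical.

```python
import numpy as np

def consensus_average(local_rewards, W, iterations=50):
    """Run linear consensus x <- W x on the agents' local rewards.

    Assumes W is a doubly stochastic mixing matrix matching a strongly
    connected communication graph; under that assumption every entry of
    x converges to the average of local_rewards, so each agent obtains
    a network-wide reward estimate without any central collector.
    """
    x = np.asarray(local_rewards, dtype=float)
    for _ in range(iterations):
        x = W @ x  # each agent mixes its value with its neighbors'
    return x

# Example: 4 agents on a ring with self-loops, symmetric equal weights
# (rows and columns each sum to 1, so W is doubly stochastic).
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
rewards = [1.0, 3.0, 2.0, 6.0]
estimates = consensus_average(rewards, W)
print(estimates)  # every agent's estimate approaches the mean, 3.0
```

Under these assumptions, each agent ends up acting on (an estimate of) the same global reward a centralized design would compute, which is the sense in which a distributed reward scheme can approach centralized performance.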
Journal overview:
Vehicular communications is a growing area covering communication between vehicles and with roadside communication infrastructure. Advances in wireless communications are making it possible to share information through real-time communication between vehicles and infrastructure. This has led to applications that increase vehicle safety and connect passengers to the Internet. Standardization efforts on vehicular communication are also underway to make vehicular transportation safer, greener, and easier.
The aim of the journal is to publish high-quality peer-reviewed papers in the area of vehicular communications. The scope encompasses all types of communications involving vehicles, including vehicle-to-vehicle and vehicle-to-infrastructure. It includes (but is not limited to) the following topics related to vehicular communications:
Vehicle-to-vehicle and vehicle-to-infrastructure communications
Channel modelling, modulation and coding
Congestion control and scalability issues
Protocol design, testing and verification
Routing in vehicular networks
Security issues and countermeasures
Deployment and field testing
Reducing energy consumption and enhancing safety of vehicles
Wireless in-car networks
Data collection and dissemination methods
Mobility and handover issues
Safety and driver assistance applications
UAV
Underwater communications
Autonomous cooperative driving
Social networks
Internet of vehicles
Standardization of protocols.