Title: Multi-agent differential graphical games: Nash online adaptive learning solutions
Authors: M. Abouheaf, F. Lewis
Venue: 52nd IEEE Conference on Decision and Control
Published: December 2013
DOI: 10.1109/CDC.2013.6760804
Abstract: This paper studies a class of multi-agent graphical games, termed differential graphical games, in which the interactions between agents are prescribed by a communication graph structure. Ideas from cooperative control are used to synchronize the agents to the dynamics of a leader node. New coupled Bellman and Hamilton-Jacobi-Bellman equations are developed for this class of games using Integral Reinforcement Learning. Nash solutions are given in terms of solutions to a set of coupled continuous-time Hamilton-Jacobi-Bellman equations. A multi-agent policy iteration algorithm is given that learns the Nash solution in real time without knowledge of the complete dynamic models of the agents, and a proof of convergence for this algorithm is provided. Finally, an online multi-agent method based on policy iteration is developed that uses a critic network to solve all of the Hamilton-Jacobi-Bellman equations for the graphical game simultaneously.
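The core loop behind the paper's method, alternating critic-based policy evaluation against an integral Bellman equation with a policy-improvement step, can be illustrated in a much simpler setting. The sketch below is not the paper's multi-agent algorithm: it applies Integral Reinforcement Learning policy iteration to a single scalar linear system xdot = a*x + b*u with quadratic cost, where the "critic network" collapses to a single weight p in V(x) = p*x^2 and the coupled HJB equations reduce to a scalar Riccati equation. All names and parameter values are illustrative assumptions.

```python
import math

def irl_policy_iteration(a, b, q, r, k0, T=0.1, iters=20):
    """Illustrative single-agent Integral Reinforcement Learning
    policy iteration (a simplified stand-in for the coupled
    graphical-game algorithm in the paper).

    System: xdot = a*x + b*u, policy u = -k*x,
    cost J = integral of (q*x^2 + r*u^2) dt,
    critic approximation V(x) = p*x^2 with a single weight p.
    """
    k = k0
    for _ in range(iters):
        acl = a - b * k                  # closed-loop pole under u = -k*x
        assert acl < 0, "initial and intermediate policies must stabilize"
        # Policy evaluation via the integral Bellman equation
        #   p*x(t)^2 = int_t^{t+T} (q + r*k^2) x^2 dtau + p*x(t+T)^2,
        # using one trajectory segment; for this linear system
        # x(t+T) = x(t) * exp(acl*T), so the segment cost is closed-form.
        x0 = 1.0
        xT = x0 * math.exp(acl * T)
        segment_cost = (q + r * k * k) * (xT * xT - x0 * x0) / (2.0 * acl)
        p = segment_cost / (x0 * x0 - xT * xT)   # updated critic weight
        # Policy improvement: u = -(b/r) * p * x
        k = b * p / r
    return k, p

# For a = b = q = r = 1 the Riccati equation 2*a*p + q - (b*p)^2/r = 0
# has the positive root p* = 1 + sqrt(2), and k* = b*p*/r = p*.
k, p = irl_policy_iteration(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
```

Note that, as in the paper's algorithm, the drift coefficient a never enters the policy-evaluation step directly: p is identified purely from the observed segment cost and the state values at the interval endpoints, which is what makes the scheme partially model-free (only b, the input coupling, is needed for policy improvement).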