{"title":"一种具有动态学习保障的分布式公共物品配置机制","authors":"Abhinav Sinha, A. Anastasopoulos","doi":"10.1145/3106723.3106725","DOIUrl":null,"url":null,"abstract":"In this paper we consider the public goods resource allocation problem (also known as Lindahl allocation) of determining the level of an infinitely divisible public good with P features, that is shared between strategic agents. We present an efficient mechanism, i.e., a mechanism that produces a unique Nash equilibrium (NE), with the corresponding allocation at NE being the social welfare maximizing allocation and taxes at NE being budget-balanced. The main contribution of this paper is that the designed mechanism has two properties, which have not been addressed together in the literature, and aim to make it practically implementable. First, we assume that agents can communicate only through a given network and thus the designed mechanism obeys the agents' informational constraints. This means that each agent's outcome through the mechanism can be determined by only the messages of his/her neighbors. Second, it is guaranteed that agents can learn the NE induced by the mechanism through repeated play when each agent selects a learning strategy from within the \"adaptive best-response\" dynamics class. This is a class of adaptive learning strategies that includes well-known dynamics such as Cournot best-response, k-period best-response and fictitious play, among others. The convergence result is a consequence of the fact that the best-response of the induced game is a contraction mapping. Finally, we present a numerical study of convergence to NE, for two different underlying communication graphs and two different learning dynamics within the ABR class.","PeriodicalId":130519,"journal":{"name":"Proceedings of the 12th workshop on the Economics of Networks, Systems and Computation","volume":"215 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A distributed mechanism for public goods allocation with dynamic learning guarantees\",\"authors\":\"Abhinav Sinha, A. Anastasopoulos\",\"doi\":\"10.1145/3106723.3106725\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we consider the public goods resource allocation problem (also known as Lindahl allocation) of determining the level of an infinitely divisible public good with P features, that is shared between strategic agents. We present an efficient mechanism, i.e., a mechanism that produces a unique Nash equilibrium (NE), with the corresponding allocation at NE being the social welfare maximizing allocation and taxes at NE being budget-balanced. The main contribution of this paper is that the designed mechanism has two properties, which have not been addressed together in the literature, and aim to make it practically implementable. First, we assume that agents can communicate only through a given network and thus the designed mechanism obeys the agents' informational constraints. This means that each agent's outcome through the mechanism can be determined by only the messages of his/her neighbors. Second, it is guaranteed that agents can learn the NE induced by the mechanism through repeated play when each agent selects a learning strategy from within the \\\"adaptive best-response\\\" dynamics class. 
This is a class of adaptive learning strategies that includes well-known dynamics such as Cournot best-response, k-period best-response and fictitious play, among others. The convergence result is a consequence of the fact that the best-response of the induced game is a contraction mapping. Finally, we present a numerical study of convergence to NE, for two different underlying communication graphs and two different learning dynamics within the ABR class.\",\"PeriodicalId\":130519,\"journal\":{\"name\":\"Proceedings of the 12th workshop on the Economics of Networks, Systems and Computation\",\"volume\":\"215 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-06-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 12th workshop on the Economics of Networks, Systems and Computation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3106723.3106725\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 12th workshop on the Economics of Networks, Systems and Computation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3106723.3106725","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A distributed mechanism for public goods allocation with dynamic learning guarantees
In this paper we consider the public goods resource allocation problem (also known as Lindahl allocation) of determining the level of an infinitely divisible public good with P features that is shared between strategic agents. We present an efficient mechanism, i.e., a mechanism that produces a unique Nash equilibrium (NE), such that the allocation at NE is the social welfare-maximizing allocation and the taxes at NE are budget-balanced. The main contribution of this paper is that the designed mechanism has two properties that have not previously been addressed together in the literature and that aim to make it practically implementable. First, we assume that agents can communicate only through a given network, and the designed mechanism obeys these informational constraints: each agent's outcome under the mechanism is determined solely by the messages of his/her neighbors. Second, agents are guaranteed to learn the NE induced by the mechanism through repeated play whenever each agent selects a learning strategy from the "adaptive best-response" (ABR) dynamics class. This class of adaptive learning strategies includes well-known dynamics such as Cournot best-response, k-period best-response, and fictitious play, among others. The convergence result is a consequence of the fact that the best-response map of the induced game is a contraction mapping. Finally, we present a numerical study of convergence to NE for two different underlying communication graphs and two different learning dynamics within the ABR class.
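The convergence guarantee rests on a standard fixed-point argument: when the simultaneous best-response map of the induced game is a contraction, repeated best-response play converges geometrically to the unique NE from any starting point. The sketch below illustrates this effect in the simplest setting, Cournot best-response on a network; the affine best-response form, the ring communication graph, and all parameter values are illustrative assumptions and not the paper's actual mechanism.

```python
import numpy as np

# Illustrative only (not the paper's mechanism): a game whose
# simultaneous best-response map is an affine contraction, so
# Cournot best-response dynamics converge to the unique NE.
#
# Agent i's assumed best response to the message profile m is
#     BR_i(m) = a_i + c * (W m)_i,
# where W is a row-stochastic matrix of the communication graph
# (each agent reacts only to its neighbors) and |c| < 1, which
# makes BR a contraction in the sup norm with modulus |c|.

rng = np.random.default_rng(0)
n = 6                                  # number of agents (assumed)
a = rng.uniform(0.0, 1.0, size=n)      # agent-specific intercepts (assumed)
c = 0.6                                # contraction modulus, |c| < 1

# Ring communication graph: each agent averages its two neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 0.5
    W[i, (i + 1) % n] = 0.5

def best_response(m):
    """Simultaneous best response of all agents to profile m."""
    return a + c * (W @ m)

# Unique fixed point (the NE) of the affine map: m* = (I - cW)^{-1} a.
m_star = np.linalg.solve(np.eye(n) - c * W, a)

m = np.zeros(n)                        # arbitrary starting profile
for t in range(26):
    m = best_response(m)               # Cournot dynamics: m_{t+1} = BR(m_t)
    if t % 5 == 0:
        print(f"t={t:2d}  ||m - m*||_inf = {np.max(np.abs(m - m_star)):.2e}")
```

Because W is row-stochastic and c < 1, the map m ↦ a + cWm contracts the sup norm by a factor of c per round, so the Banach fixed-point theorem gives geometric convergence, visible in the printed distances. Per the abstract, the other ABR dynamics (k-period best-response, fictitious play) come with the same guarantee under the contraction condition, since they are built from averaged best responses to past play.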