In this article, we introduce a Bayesian revenue-maximizing mechanism design model where the items have fixed, exogenously given prices. Buyers are unit-demand and have an ordinal ranking over purchasing either one of these items at its given price or purchasing nothing. This model arises naturally from the assortment optimization problem, in that the single-buyer optimization problem over deterministic mechanisms reduces to deciding on an assortment of items to “show.” We study its multi-buyer generalization in the simplest setting of single-winner auctions or, more broadly, any service-constrained environment. Our main result is that if the buyer rankings are drawn independently from Markov chain choice models, then the optimal mechanism is computationally tractable, and structurally a virtual welfare maximizer. We also show that for ranking distributions not induced by Markov chains, the optimal mechanism may not be a virtual welfare maximizer. Finally, we apply our virtual valuation notion for Markov chains, in conjunction with existing prophet inequalities, to improve algorithmic guarantees for online assortment problems.
{"title":"Revenue-Optimal Deterministic Auctions for Multiple Buyers with Ordinal Preferences over Fixed-Price Items","authors":"Will Ma","doi":"10.1145/3555045","DOIUrl":"https://doi.org/10.1145/3555045","url":null,"abstract":"In this article, we introduce a Bayesian revenue-maximizing mechanism design model where the items have fixed, exogenously given prices. Buyers are unit-demand and have an ordinal ranking over purchasing either one of these items at its given price or purchasing nothing. This model arises naturally from the assortment optimization problem, in that the single-buyer optimization problem over deterministic mechanisms reduces to deciding on an assortment of items to “show.” We study its multi-buyer generalization in the simplest setting of single-winner auctions or, more broadly, any service-constrained environment. Our main result is that if the buyer rankings are drawn independently from Markov chain choice models, then the optimal mechanism is computationally tractable, and structurally a virtual welfare maximizer. We also show that for ranking distributions not induced by Markov chains, the optimal mechanism may not be a virtual welfare maximizer. Finally, we apply our virtual valuation notion for Markov chains, in conjunction with existing prophet inequalities, to improve algorithmic guarantees for online assortment problems.","PeriodicalId":42216,"journal":{"name":"ACM Transactions on Economics and Computation","volume":"10 1","pages":"1 - 32"},"PeriodicalIF":1.2,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44670070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Y. Giannakopoulos, Diogo Poças, Alexandros Tsigonias-Dimitriadis
We study the problem of multi-dimensional revenue maximization when selling m items to a buyer who has additive valuations for them, drawn from a (possibly correlated) prior distribution. Unlike traditional Bayesian auction design, we assume that the seller has very restricted knowledge of this prior: they only know the mean μ_j and an upper bound σ_j on the standard deviation of each item’s marginal distribution. Our goal is to design mechanisms that achieve good revenue against an ideal optimal auction that has full knowledge of the distribution in advance. Informally, our main contribution is a tight quantification of the interplay between the dispersity of the priors and the achievable robust approximation ratio; furthermore, the guarantees are attained by very simple selling mechanisms. More precisely, we show that selling the items via separate price lotteries achieves an O(log r) approximation ratio, where r = max_j (σ_j/μ_j) is the maximum coefficient of variation across the items; the proof builds on a price lottery for the single-item case. If forced to restrict ourselves to deterministic mechanisms, this guarantee degrades to O(r^2). Assuming independence of the item valuations, these ratios can be further improved by pricing the full bundle. For the case of identical means and variances, in particular, we get a guarantee of O(log(r/m)) that converges to optimality as the number of items grows large. We demonstrate the optimality of the preceding mechanisms by providing matching lower bounds. Our tight analysis of the single-item deterministic case resolves an open gap from the work of Azar and Micali (ITCS’13). As a by-product, we also show how our upper bounds can be used directly to improve and extend previous results on the parametric auctions of Azar et al. (SODA’13).
{"title":"Robust Revenue Maximization Under Minimal Statistical Information","authors":"Y. Giannakopoulos, Diogo Poças, Alexandros Tsigonias-Dimitriadis","doi":"10.1145/3546606","DOIUrl":"https://doi.org/10.1145/3546606","url":null,"abstract":"We study the problem of multi-dimensional revenue maximization when selling m items to a buyer that has additive valuations for them, drawn from a (possibly correlated) prior distribution. Unlike traditional Bayesian auction design, we assume that the seller has a very restricted knowledge of this prior: they only know the mean μj and an upper bound σj on the standard deviation of each item’s marginal distribution. Our goal is to design mechanisms that achieve good revenue against an ideal optimal auction that has full knowledge of the distribution in advance. Informally, our main contribution is a tight quantification of the interplay between the dispersity of the priors and the aforementioned robust approximation ratio. Furthermore, this can be achieved by very simple selling mechanisms. More precisely, we show that selling the items via separate price lotteries achieves an O(log r) approximation ratio where r = maxj(σj/μj) is the maximum coefficient of variation across the items. To prove the result, we leverage a price lottery for the single-item case. If forced to restrict ourselves to deterministic mechanisms, this guarantee degrades to O(r2). Assuming independence of the item valuations, these ratios can be further improved by pricing the full bundle. For the case of identical means and variances, in particular, we get a guarantee of O(log (r/m)) that converges to optimality as the number of items grows large. We demonstrate the optimality of the preceding mechanisms by providing matching lower bounds. Our tight analysis for the single-item deterministic case resolves an open gap from the work of Azar and Micali (ITCS’13). As a by-product, we also show how one can directly use our upper bounds to improve and extend previous results related to the parametric auctions of Azar et al. (SODA’13).","PeriodicalId":42216,"journal":{"name":"ACM Transactions on Economics and Computation","volume":"10 1","pages":"1 - 34"},"PeriodicalIF":1.2,"publicationDate":"2019-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46975411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The past few years have seen several works exploring learning economic solutions from data, including optimal auction design, function optimization, stable payoffs in cooperative games, and more. In this work, we provide a unified learning-theoretic methodology for modeling such problems and establish tools for determining whether a given solution concept can be efficiently learned from data. Our learning-theoretic framework generalizes a notion of function space dimension—the graph dimension—adapting it to the solution concept learning domain. We identify sufficient conditions for efficient solution learnability and show that results in existing works can be immediately derived using our methodology. Finally, we apply our methods in other economic domains, yielding learning variants of competitive equilibria and Condorcet winners.
{"title":"A Learning Framework for Distribution-Based Game-Theoretic Solution Concepts","authors":"Tushant Jha, Yair Zick","doi":"10.1145/3580374","DOIUrl":"https://doi.org/10.1145/3580374","url":null,"abstract":"The past few years have seen several works exploring learning economic solutions from data, including optimal auction design, function optimization, stable payoffs in cooperative games, and more. In this work, we provide a unified learning-theoretic methodology for modeling such problems and establish tools for determining whether a given solution concept can be efficiently learned from data. Our learning-theoretic framework generalizes a notion of function space dimension—the graph dimension—adapting it to the solution concept learning domain. We identify sufficient conditions for efficient solution learnability and show that results in existing works can be immediately derived using our methodology. Finally, we apply our methods in other economic domains, yielding learning variants of competitive equilibria and Condorcet winners.","PeriodicalId":42216,"journal":{"name":"ACM Transactions on Economics and Computation","volume":"11 1","pages":"1 - 23"},"PeriodicalIF":1.2,"publicationDate":"2019-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48153905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Paul Gölz, Anson Kahng, Simon Mackenzie, A. Procaccia
Liquid democracy is the principle of making collective decisions by letting agents transitively delegate their votes. Despite its significant appeal, it has become apparent that a weakness of liquid democracy is that a small subset of agents may gain massive influence. To address this, we propose to change the current practice by allowing agents to specify multiple delegation options instead of just one. Much like in nature, where—fluid mechanics teaches us—liquid maintains an equal level in connected vessels, we seek to control the flow of votes in a way that balances influence as much as possible. Specifically, we analyze the problem of choosing delegations to approximately minimize the maximum number of votes entrusted to any agent by drawing connections to the literature on confluent flow. We also introduce a random graph model for liquid democracy and use it to demonstrate the benefits of our approach both theoretically and empirically.
{"title":"The Fluid Mechanics of Liquid Democracy","authors":"Paul Gölz, Anson Kahng, Simon Mackenzie, A. Procaccia","doi":"10.1145/3485012","DOIUrl":"https://doi.org/10.1145/3485012","url":null,"abstract":"Liquid democracy is the principle of making collective decisions by letting agents transitively delegate their votes. Despite its significant appeal, it has become apparent that a weakness of liquid democracy is that a small subset of agents may gain massive influence. To address this, we propose to change the current practice by allowing agents to specify multiple delegation options instead of just one. Much like in nature, where—fluid mechanics teaches us—liquid maintains an equal level in connected vessels, we seek to control the flow of votes in a way that balances influence as much as possible. Specifically, we analyze the problem of choosing delegations to approximately minimize the maximum number of votes entrusted to any agent by drawing connections to the literature on confluent flow. We also introduce a random graph model for liquid democracy and use it to demonstrate the benefits of our approach both theoretically and empirically.","PeriodicalId":42216,"journal":{"name":"ACM Transactions on Economics and Computation","volume":"9 1","pages":"1 - 39"},"PeriodicalIF":1.2,"publicationDate":"2018-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44786795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Bitcoin payment system involves two agent types: users, who transact with the currency and pay fees, and miners, who are in charge of authorizing transactions and securing the system in return for these fees. Two of Bitcoin’s challenges are (i) securing sufficient miner revenue as block rewards decrease and (ii) alleviating the throughput limitation caused by a small cap on the maximal block size. These issues are strongly related, since increasing the maximal block size may decrease revenue under Bitcoin’s pay-your-bid approach. To decouple them, we analyze the “monopolistic auction” [16], showing that (i) its revenue does not decrease as the maximal block size increases, (ii) it is resilient to an untrusted auctioneer (the miner), and (iii) it is simple for transaction issuers (bidders), as the average gain from strategic bid shading (relative to bidding one’s value) diminishes as the number of bids increases.
{"title":"Redesigning Bitcoin’s Fee Market","authors":"R. Lavi, Or Sattath, Aviv Zohar","doi":"10.1145/3530799","DOIUrl":"https://doi.org/10.1145/3530799","url":null,"abstract":"The Bitcoin payment system involves two agent types: users that transact with the currency and pay fees and miners in charge of authorizing transactions and securing the system in return for these fees. Two of Bitcoin’s challenges are (i) securing sufficient miner revenues as block rewards decrease, and (ii) alleviating the throughput limitation due to a small maximal block size cap. These issues are strongly related as increasing the maximal block size may decrease revenue due to Bitcoin’s pay-your-bid approach. To decouple them, we analyze the “monopolistic auction” [16], showing (i) its revenue does not decrease as the maximal block size increases, (ii) it is resilient to an untrusted auctioneer (the miner), and (iii) simplicity for transaction issuers (bidders), as the average gain from strategic bid shading (relative to bidding one’s value) diminishes as the number of bids increases.","PeriodicalId":42216,"journal":{"name":"ACM Transactions on Economics and Computation","volume":"10 1","pages":"1 - 31"},"PeriodicalIF":1.2,"publicationDate":"2017-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43284287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although the definition of sequential equilibrium can be applied without change to games of imperfect recall, doing so leads to arguably inappropriate results. We redefine sequential equilibrium so that the definition agrees with the standard definition in games of perfect recall while still giving reasonable results in games of imperfect recall. The definition can be viewed as trying to capture a notion of ex ante sequential equilibrium. The picture here is that players choose their strategies before the game starts and are committed to them, but they choose them in such a way that they remain optimal even off the equilibrium path. A notion of interim sequential equilibrium is also considered.
{"title":"Sequential Equilibrium in Games of Imperfect Recall","authors":"Joseph Y. Halpern, R. Pass","doi":"10.1145/3485002","DOIUrl":"https://doi.org/10.1145/3485002","url":null,"abstract":"Although the definition of sequential equilibrium can be applied without change to games of imperfect recall, doing so leads to arguably inappropriate results. We redefine sequential equilibrium so that the definition agrees with the standard definition in games of perfect recall while still giving reasonable results in games of imperfect recall. The definition can be viewed as trying to capture a notion of ex ante sequential equilibrium. The picture here is that players choose their strategies before the game starts and are committed to it, but they choose it in such a way that it remains optimal even off the equilibrium path. A notion of interim sequential equilibrium is also considered.","PeriodicalId":42216,"journal":{"name":"ACM Transactions on Economics and Computation","volume":"9 1","pages":"1 - 26"},"PeriodicalIF":1.2,"publicationDate":"2016-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64046979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}