Alon Eden, M. Feldman, Ophir Friedler, Inbal Talgam-Cohen, S. M. Weinberg
A seminal result of Bulow and Klemperer [1989] demonstrates the power of competition for extracting revenue: when selling a single item to n bidders whose values are drawn i.i.d. from a regular distribution, the simple welfare-maximizing VCG mechanism (in this case, a second-price auction) with one additional bidder extracts at least as much revenue in expectation as the optimal mechanism. The beauty of this theorem stems from the fact that VCG is a prior-independent mechanism, where the seller possesses no information about the distribution, and yet, by recruiting one additional bidder, it performs better than any prior-dependent mechanism tailored exactly to the distribution at hand (without the additional bidder). In this work, we establish the first full Bulow-Klemperer results in multi-dimensional environments, proving that by recruiting additional bidders, the revenue of the VCG mechanism surpasses that of the optimal (possibly randomized, Bayesian incentive compatible) mechanism. For a given environment with i.i.d. bidders, we term the number of additional bidders needed to achieve this guarantee the environment's competition complexity. Using the recent duality-based framework of Cai et al. [2016] for reasoning about optimal revenue, we show that the competition complexity of n bidders with additive valuations over m independent, regular items is at most n+2m-2 and at least log(m). We extend our results to bidders with additive valuations subject to downward-closed constraints, showing that these significantly more general valuations increase the competition complexity by at most an additive term of m-1. We further improve this bound for the special case of matroid constraints, and provide additional extensions as well.
Title: The Competition Complexity of Auctions: A Bulow-Klemperer Result for Multi-Dimensional Bidders
DOI: 10.1145/3033274.3085115 (Proceedings of the 2017 ACM Conference on Economics and Computation)
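The single-item theorem cited above is easy to verify numerically. A minimal Monte Carlo sketch for n = 1 bidder with values drawn U[0,1], where the Myerson-optimal mechanism is known to be a posted price of 1/2:

```python
import random

random.seed(0)
TRIALS = 200_000

# One bidder with value ~ U[0,1]: the optimal mechanism is a posted
# price of 1/2, earning 1/2 * Pr[v >= 1/2] = 1/4 in expectation.
opt_revenue = sum(0.5 for _ in range(TRIALS) if random.random() >= 0.5) / TRIALS

# Second-price auction with one additional bidder (two bidders total):
# revenue is the lower of the two values, and E[min of two U[0,1]] = 1/3.
spa_revenue = sum(min(random.random(), random.random()) for _ in range(TRIALS)) / TRIALS

print(f"optimal, 1 bidder:       {opt_revenue:.3f}")   # ~0.250
print(f"second price, 2 bidders: {spa_revenue:.3f}")   # ~0.333
```

The second estimate exceeding the first is exactly the Bulow-Klemperer guarantee for this instance.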
Alon Eden, M. Feldman, Ophir Friedler, Inbal Talgam-Cohen, S. M. Weinberg
We consider a revenue-maximizing seller with m heterogeneous items and a single buyer whose valuation v for the items may exhibit both substitutes (i.e., for some S, T, v(S ∪ T) < v(S) + v(T)) and complements (i.e., for some S, T, v(S ∪ T) > v(S) + v(T)). We show that the mechanism first proposed by Babaioff et al. [2014] -- the better of selling the items separately and bundling them together -- guarantees a Θ(d)-approximation to the optimal revenue, where d is a measure of the degree of complementarity. Note that this is the first approximately optimal mechanism for a buyer whose valuation exhibits any kind of complementarity. It extends the work of Rubinstein and Weinberg [2015], which proved that the same simple mechanisms achieve a constant-factor approximation when buyer valuations are subadditive, the most general class of complement-free valuations. Our proof is enabled by the recent duality framework developed in Cai et al. [2016], which we use to obtain a bound on the optimal revenue in this setting. Our main technical contributions are specialized to handle the intricacies of settings with complements, and include an algorithm for partitioning edges in a hypergraph. Even nailing down the right model and notion of "degree of complementarity" to obtain meaningful results is of interest, as the natural extensions of previous definitions provably fail.
Title: A Simple and Approximately Optimal Mechanism for a Buyer with Complements: Abstract
DOI: 10.1145/3033274.3085116 (Proceedings of the 2017 ACM Conference on Economics and Computation)
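The "better of selling separately and bundling" benchmark is simple to estimate by simulation. A sketch for an additive buyer over two i.i.d. U[0,1] items (an illustrative complement-free special case, with prices found by grid search, not the paper's setting with complements):

```python
import random

random.seed(0)
N = 20_000
vals = [(random.random(), random.random()) for _ in range(N)]  # two U[0,1] items

def posted_revenue(values, price):
    # expected revenue of a take-it-or-leave-it offer at `price`
    return price * sum(v >= price for v in values) / len(values)

grid = [i / 100 for i in range(1, 201)]  # candidate prices in (0, 2]

# SRev: each item sold separately at its own best posted price
srev = sum(max(posted_revenue([v[i] for v in vals], p) for p in grid)
           for i in range(2))
# BRev: one posted price on the grand bundle
brev = max(posted_revenue([a + b for a, b in vals], p) for p in grid)

print(f"SRev ~ {srev:.3f}, BRev ~ {brev:.3f}")  # roughly 0.500 vs 0.544
```

Here bundling happens to win; the mechanism simply takes the larger of the two numbers.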
We study social choice rules under the utilitarian distortion framework, with an additional metric assumption on the agents' costs over the alternatives. In this approach, these costs are given by an underlying metric on the set of all agents plus alternatives. Social choice rules have access to only the ordinal preferences of agents but not the latent cardinal costs that induce them. Distortion is then defined as the ratio between the social cost (typically the sum of agent costs) of the alternative chosen by the mechanism at hand, and that of the optimal alternative chosen by an omniscient algorithm. The worst-case distortion of a social choice rule is, therefore, a measure of how close it always gets to the optimal alternative without any knowledge of the underlying costs. Under this model, it has been conjectured that Ranked Pairs, the well-known weighted-tournament rule, achieves a distortion of at most 3 (Anshelevich et al. 2015). We disprove this conjecture by constructing a sequence of instances which shows that the worst-case distortion of Ranked Pairs is at least 5. Our lower bound on the worst-case distortion of Ranked Pairs matches a previously known upper bound for the Copeland rule, proving that in the worst case, the simpler Copeland rule is at least as good as Ranked Pairs. And as long as we are limited to (weighted or unweighted) tournament rules, we demonstrate that randomization cannot help achieve an expected worst-case distortion of less than 3. Using the concept of approximate majorization within the distortion framework, we prove that Copeland and Randomized Dictatorship achieve low constant factor fairness-ratios (5 and 3 respectively), which is a considerable generalization of similar results for the sum of costs and single largest cost objectives. In addition to all of the above, we outline several interesting directions for further research in this space.
Title: Metric Distortion of Social Choice Rules: Lower Bounds and Fairness Properties
Authors: Ashish Goel, A. Krishnaswamy, Kamesh Munagala
DOI: 10.1145/3033274.3085138 (Proceedings of the 2017 ACM Conference on Economics and Computation)
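A small worked instance of the metric model may help: agents and alternatives embedded on a line, costs given by distance, votes given by the induced rankings. The (hypothetical) positions below push the distortion of Copeland (which reduces to majority with two alternatives) close to the known factor-3 bound:

```python
from itertools import combinations

# Hypothetical instance: agents and alternatives on a line, costs = distances,
# votes = induced ordinal rankings. Positions chosen to push distortion up.
agents = [0.49, 0.49, 0.49, 1.0, 1.0]
alternatives = {"A": 0.0, "B": 1.0}
names = list(alternatives)

def social_cost(x):
    return sum(abs(a - alternatives[x]) for a in agents)

def prefers(agent, x, y):  # agent ranks x above y (ties broken toward y)
    return abs(agent - alternatives[x]) < abs(agent - alternatives[y])

# Copeland: score alternatives by pairwise-majority wins
score = {x: 0 for x in names}
for x, y in combinations(names, 2):
    wins_x = sum(prefers(a, x, y) for a in agents)
    score[x if wins_x > len(agents) - wins_x else y] += 1

winner = max(names, key=score.get)
opt = min(names, key=social_cost)
print(winner, opt, round(social_cost(winner) / social_cost(opt), 3))  # A B 2.268
```

A majority of agents sits just barely closer to A, so the rule picks A even though B's social cost is much lower.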
Riccardo Colini-Baldeschi, Paul W Goldberg, B. D. Keijzer, S. Leonardi, T. Roughgarden, S. Turchetta
We develop and extend a line of recent work on the design of mechanisms for two-sided markets. The markets we consider consist of buyers and sellers of a number of items, and the aim of a mechanism is to improve the social welfare by arranging purchases and sales of the items. A mechanism is given prior distributions on the agents' valuations of the items, but not the actual valuations; thus the aim is to maximise the expected social welfare over these distributions. As in previous work, we are interested in the worst-case ratio between the social welfare achieved by a truthful mechanism, and the best social welfare possible. Our main result is an incentive compatible and budget balanced constant-factor approximation mechanism in a setting where buyers have XOS valuations and sellers' valuations are additive. This is the first such approximation mechanism for a two-sided market setting where the agents have combinatorial valuation functions. To achieve this result, we introduce a more general kind of demand query that seems to be needed in this situation. In the simpler case that sellers have unit supply (each having just one item to sell), we give a new mechanism whose welfare guarantee improves on a recent one in the literature. We also introduce a more demanding version of the strong budget balance (SBB) criterion, aimed at ruling out certain "unnatural" transactions satisfied by SBB. We show that the stronger version is satisfied by our mechanisms.
Title: Approximately Efficient Two-Sided Combinatorial Auctions
DOI: 10.1145/3033274.3085128 (Proceedings of the 2017 ACM Conference on Economics and Computation)
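The simplest truthful, strongly budget balanced building block in such markets is bilateral trade at a posted price: trade happens iff both sides weakly gain, and the buyer pays the seller exactly the price. A sketch with U[0,1] priors and a fixed illustrative price of 1/2 (our choice for the example, not the paper's mechanism):

```python
import random

random.seed(1)
TRIALS = 100_000
PRICE = 0.5  # fixed posted price; a prior-aware mechanism would tune this

welfare_mech = welfare_opt = 0.0
for _ in range(TRIALS):
    b, s = random.random(), random.random()  # buyer value, seller value ~ U[0,1]
    # Trade iff both sides weakly gain at PRICE; the buyer pays the seller
    # exactly PRICE, so the mechanism is truthful and strongly budget balanced.
    traded = b >= PRICE >= s
    welfare_mech += b if traded else s  # welfare = value of whoever holds the item
    welfare_opt += max(b, s)            # first-best: item to the higher value

ratio = welfare_mech / welfare_opt
print(f"fraction of optimal welfare: {ratio:.3f}")  # ~0.938 for these priors
```

No money enters or leaves the mechanism in any transaction, which is the strong budget balance property discussed above.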
We generalize the classic problem of fairly allocating indivisible goods to the problem of fair public decision making, in which a decision must be made on several social issues simultaneously, and, unlike the classic setting, a decision can provide positive utility to multiple players. We extend the popular fairness notion of proportionality (which is not guaranteeable) to our more general setting, and introduce three novel relaxations --- proportionality up to one issue, round robin share, and pessimistic proportional share --- that are also interesting in the classic goods allocation setting. We show that the Maximum Nash Welfare solution, which is known to satisfy appealing fairness properties in the classic setting, satisfies or approximates all three relaxations in our framework. We also provide polynomial time algorithms and hardness results for finding allocations satisfying these axioms, with or without insisting on Pareto optimality.
Title: Fair Public Decision Making
Authors: Vincent Conitzer, Rupert Freeman, Nisarg Shah
DOI: 10.1145/3033274.3085125 (Proceedings of the 2017 ACM Conference on Economics and Computation)
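A brute-force sketch of the Maximum Nash Welfare solution on a tiny hypothetical public-decision instance, together with a direct check of the "proportionality up to one issue" relaxation (our reading of it, for this instance only):

```python
from itertools import product

# Tiny hypothetical instance: 2 players, 2 issues, 2 alternatives per issue.
# util[i][j][a] = utility of player i if issue j is decided as alternative a.
util = [
    [[1.0, 0.0], [0.8, 0.2]],  # player 0
    [[0.0, 1.0], [0.1, 0.9]],  # player 1
]
n, m = 2, 2  # players, issues

def utilities(outcome):
    return [sum(util[i][j][a] for j, a in enumerate(outcome)) for i in range(n)]

def nash_welfare(outcome):
    total = 1.0
    for u in utilities(outcome):
        total *= u
    return total

outcomes = list(product(range(2), repeat=m))
mnw = max(outcomes, key=nash_welfare)

# Proportionality up to one issue: after switching some single issue in her
# favor, each player gets at least a 1/n share of her best achievable utility.
for i in range(n):
    best = max(utilities(o)[i] for o in outcomes)
    boosted = max(utilities(mnw)[i] - util[i][j][mnw[j]] + max(util[i][j])
                  for j in range(m))
    assert boosted >= best / n
print("MNW outcome", mnw, "satisfies proportionality up to one issue")
```

Note how the MNW outcome splits the issues: each player wins the issue she cares about more, which is what drives the fairness guarantee.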
We study approximation algorithms for revenue maximization based on static item pricing, where a seller chooses prices for various goods in the market, and then the buyers purchase utility-maximizing bundles at these given prices. We formulate two somewhat general techniques for designing good pricing algorithms for this setting: Price Doubling and Item Halving. Using these techniques, we unify many of the existing results in the item pricing literature under a common framework, as well as provide several new bicriteria algorithms for approximating both revenue and social welfare simultaneously. The main technical contribution of this paper is an O((log m + log k)^2)-approximation algorithm for revenue maximization based on the item halving technique, for settings where buyers have XOS valuations, where m is the number of goods and k is the average supply. Surprisingly, ours is the first known item pricing algorithm with polylogarithmic approximation for such general classes of valuations, and it partially resolves an important open question from the algorithmic pricing literature about the existence of item pricing algorithms with logarithmic factors for general valuations.
Title: Price Doubling and Item Halving: Robust Revenue Guarantees for Item Pricing
Authors: Elliot Anshelevich, S. Sekar
DOI: 10.1145/3033274.3085117 (Proceedings of the 2017 ACM Conference on Economics and Computation)
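The static-pricing model itself can be sketched directly: prices are fixed up front, then buyers arrive and take utility-maximizing bundles from the remaining supply. A hypothetical instance with additive buyers (the prices here are arbitrary, not the output of either technique):

```python
from itertools import combinations

# Hypothetical market: 2 goods with unit supply, 3 additive buyers, and
# arbitrary static item prices fixed before any buyer arrives.
prices = {"x": 0.6, "y": 0.5}
supply = {"x": 1, "y": 1}
buyers = [
    {"x": 0.9, "y": 0.4},
    {"x": 0.7, "y": 0.8},
    {"x": 0.5, "y": 0.6},
]

revenue = 0.0
for val in buyers:  # buyers arrive in order, each taking a utility-maximizing bundle
    available = [g for g in supply if supply[g] > 0]
    best_bundle, best_util = (), 0.0
    for r in range(1, len(available) + 1):
        for bundle in combinations(available, r):
            u = sum(val[g] - prices[g] for g in bundle)  # additive utility
            if u > best_util:
                best_bundle, best_util = bundle, u
    for g in best_bundle:
        supply[g] -= 1
        revenue += prices[g]

print(f"revenue: {revenue:.2f}")  # 0.6 from buyer 1, 0.5 from buyer 2
```

The third buyer finds the shelves empty, which is the supply-constrained behavior that the average-supply parameter k captures.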
We consider a multidimensional search problem that is motivated by questions in contextual decision-making, such as dynamic pricing and personalized medicine. Nature selects a state from a d-dimensional unit ball and then generates a sequence of d-dimensional directions. We are given access to the directions, but not access to the state. After receiving a direction, we have to guess the value of the dot product between the state and the direction. Our goal is to minimize the number of times when our guess is more than ε away from the true answer. We construct a polynomial time algorithm that we call Projected Volume achieving regret O(d log(d/ε)), which is optimal up to a log d factor. The algorithm combines a volume cutting strategy with a new geometric technique that we call cylindrification.
Title: Multidimensional Binary Search for Contextual Decision-Making
Authors: I. Lobel, R. Leme, Adrian Vladu
DOI: 10.1145/3033274.3085100 (Proceedings of the 2017 ACM Conference on Economics and Computation)
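The flavor of the problem is easiest to see in one dimension, where halving the feasible interval after each large mistake already yields an O(log(1/ε)) mistake bound. A simplified sketch (not the Projected Volume algorithm, which handles general d):

```python
import random

# One-dimensional sketch: hidden scalar state s in [-1, 1]; each round we see
# a direction x and must guess s * x; after guessing we learn on which side of
# the guess the truth lies. A mistake (off by more than EPS) lets us halve the
# interval known to contain s, so mistakes number only O(log(1/EPS)).
random.seed(0)
EPS = 1e-3
s = random.uniform(-1, 1)  # hidden state
lo, hi = -1.0, 1.0         # current interval containing s
mistakes = 0

for _ in range(10_000):
    x = random.choice([-1, 1]) * random.uniform(0.5, 1.0)
    mid = (lo + hi) / 2
    guess, truth = mid * x, s * x
    if abs(guess - truth) > EPS:
        mistakes += 1
        if (truth > guess) == (x > 0):  # feedback implies s > mid
            lo = mid
        else:
            hi = mid

print("mistakes:", mistakes)  # stays logarithmic in 1/EPS despite 10,000 rounds
```

In higher dimensions the feasible region is a convex body rather than an interval, and cutting it efficiently in every direction is exactly what cylindrification addresses.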
The Gibbard-Satterthwaite Impossibility Theorem [Gibbard, 1973, Satterthwaite, 1975] holds that dictatorship is the only Pareto optimal and strategyproof social choice function on the full domain of preferences. Much of the work in mechanism design aims at getting around this impossibility theorem. Three grand success stories stand out. On the domains of single-peaked preferences, of object assignment, and of quasilinear preferences, there are appealing Pareto optimal and strategyproof social choice functions. We investigate whether these success stories are robust to strengthening strategyproofness to obvious strategyproofness, a stronger incentive property that was recently introduced by Li [2015] and has since garnered considerable attention. For single-peaked preferences, we characterize the class of OSP-implementable and unanimous social choice functions as dictatorships with safeguards against extremism -- mechanisms (which turn out to also be Pareto optimal) in which the dictator can choose the outcome, but other agents may prevent the dictator from choosing an outcome that is too extreme. Median voting is consequently not OSP-implementable. Moreover, even when there are only two possible outcomes, majority voting is not OSP-implementable, and unanimity is the only OSP-implementable supermajority rule. For object assignment, we characterize the class of OSP-implementable and Pareto optimal matching rules as sequential barter with lurkers -- a significant generalization over bossy variants of bipolar serially dictatorial rules. While Li [2015] shows that second-price auctions are OSP-implementable when only one good is sold, we show that this positive result does not extend to the case of multiple goods. Even when all agents' preferences over goods are quasilinear and additive, no welfare-maximizing auction where losers pay nothing is OSP-implementable when more than one good is sold. 
Our analysis makes use of a gradual revelation principle, an analog of the (direct) revelation principle for OSP mechanisms that we present and prove, and believe to be of independent interest. An integrated examination of all of these negative and positive results reveals, on the one hand, that the various mechanics that come into play within obviously strategyproof mechanisms are considerably richer and more diverse than previously demonstrated, and can give rise to rather exotic and quite intricate mechanisms in some domains; on the other hand, it suggests that the boundaries of obvious strategyproofness are significantly less far-reaching than one may hope in other domains. We thus observe that in a natural sense, obvious strategyproofness is neither "too strong" nor "too weak" a definition for capturing "strategyproofness that is easy to see": while it performs as intuitively expected on some domains, it "overshoots" on some other domains and "undershoots" on yet other domains.
Title: Gibbard-Satterthwaite Success Stories and Obvious Strategyproofness
Authors: Sophie Bade, Yannai A. Gonczarowski
DOI: 10.1145/3033274.3085104 (Proceedings of the 2017 ACM Conference on Economics and Computation)
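Li's single-good positive result mentioned above is implemented by the ascending-clock (English) auction: staying in while the clock is below one's value is obviously dominant, because the worst case of staying is no worse than the best case of quitting. A minimal simulation showing the clock reproduces the second-price outcome:

```python
# Ascending-clock auction: the price rises in small ticks, bidders drop out
# as soon as the clock passes their value, and the last bidder standing wins
# at (roughly) the price where her final rival dropped.
def ascending_auction(values, tick=0.01):
    price, active = 0.0, list(range(len(values)))
    while len(active) > 1:
        price += tick
        # `or active[:1]` breaks an exact tie in favor of the earliest bidder
        active = [i for i in active if values[i] >= price] or active[:1]
    return active[0], price

winner, price = ascending_auction([3.20, 5.75, 4.40])
print(winner, round(price, 2))  # bidder 1 wins at roughly the second-highest value
```

The winner and payment match the sealed-bid second-price auction up to the tick size, but unlike the sealed-bid format, the dynamic implementation is obviously strategyproof.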
We study the algorithmics of information structure design --- a.k.a. persuasion or signaling --- in a fundamental special case introduced by Arieli and Babichenko: multiple agents, binary actions, and no inter-agent externalities. Unlike prior work on this model, we allow many states of nature. We assume that the principal's objective is a monotone set function, and study the problem both in the public signal and private signal models, drawing a sharp contrast between the two in terms of both efficacy and computational complexity. When private signals are allowed, our results are largely positive and quite general. First, we use linear programming duality and the equivalence of separation and optimization to show polynomial-time equivalence between (exactly) optimal signaling and the problem of maximizing the objective function plus an additive function. This yields an efficient implementation of the optimal scheme when the objective is supermodular or anonymous. Second, we exhibit a (1-1/e)-approximation of the optimal private signaling scheme, modulo an additive loss of ε, when the objective function is submodular. These two results simplify, unify, and generalize results of [Arieli and Babichenko, 2016] and [Babichenko and Barman, 2016], extending them from a binary state of nature to many states (modulo the additive loss in the latter result). Third, we consider the binary-state case with a submodular objective, and simplify and slightly strengthen the result of [Babichenko and Barman, 2016] to obtain a (1-1/e)-approximation via a scheme which (i) signals independently to each receiver and (ii) is "oblivious" in that it does not depend on the objective function so long as it is monotone submodular. When only a public signal is allowed, our results are negative. First, we show that it is NP-hard to approximate the optimal public scheme, within any constant factor, even when the objective is additive. 
Second, we show that the optimal private scheme can outperform the optimal public scheme, in terms of maximizing the sender's objective, by a polynomial factor.
{"title":"Algorithmic Persuasion with No Externalities","authors":"S. Dughmi, Haifeng Xu","doi":"10.1145/3033274.3085152","DOIUrl":"https://doi.org/10.1145/3033274.3085152","url":null,"abstract":"We study the algorithmics of information structure design --- a.k.a. persuasion or signaling --- in a fundamental special case introduced by Arieli and Babichenko: multiple agents, binary actions, and no inter-agent externalities. Unlike prior work on this model, we allow many states of nature. We assume that the principal's objective is a monotone set function, and study the problem both in the public signal and private signal models, drawing a sharp contrast between the two in terms of both efficacy and computational complexity. When private signals are allowed, our results are largely positive and quite general. First, we use linear programming duality and the equivalence of separation and optimization to show polynomial-time equivalence between (exactly) optimal signaling and the problem of maximizing the objective function plus an additive function. This yields an efficient implementation of the optimal scheme when the objective is supermodular or anonymous. Second, we exhibit a (1-1/e)-approximation of the optimal private signaling scheme, modulo an additive loss of ε, when the objective function is submodular. These two results simplify, unify, and generalize results of [Arieli and Babichenko, 2016] and [Babichenko and Barman, 2016], extending them from a binary state of nature to many states (modulo the additive loss in the latter result). Third, we consider the binary-state case with a submodular objective, and simplify and slightly strengthen the result of [Babichenko and Barman, 2016] to obtain a (1-1/e)-approximation via a scheme which (i) signals independently to each receiver and (ii) is \"oblivious\" in that it does not depend on the objective function so long as it is monotone submodular. When only a public signal is allowed, our results are negative. 
First, we show that it is NP-hard to approximate the optimal public scheme, within any constant factor, even when the objective is additive. Second, we show that the optimal private scheme can outperform the optimal public scheme, in terms of maximizing the sender's objective, by a polynomial factor.","PeriodicalId":287551,"journal":{"name":"Proceedings of the 2017 ACM Conference on Economics and Computation","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133893613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
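The (1-1/e) factor appearing in the abstract above is the classical guarantee for greedy maximization of a monotone submodular function. A minimal, self-contained illustration on a coverage objective (the helper name and the toy instance are illustrative, not from the paper):

```python
def greedy_max_coverage(sets, k):
    """Greedily pick up to k sets to maximize the coverage function
    f(S) = |union of chosen sets|, a monotone submodular objective for
    which greedy achieves the classical (1 - 1/e) approximation."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0
        for i, s in enumerate(sets):
            if i in chosen:
                continue
            gain = len(s - covered)  # marginal coverage of adding set i
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:  # no remaining set adds coverage
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, len(covered)

# Greedy first takes {1,2,3} (gain 3), then {4,5} (gain 2), covering 5 elements.
picked, value = greedy_max_coverage([{1, 2, 3}, {3, 4}, {4, 5}], 2)
```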
R. Cole, Nikhil R. Devanur, Vasilis Gkatzelis, K. Jain, Tung Mai, V. Vazirani, Sadra Yazdanbod
We study Fisher markets and the problem of maximizing the Nash social welfare (NSW), and show several closely related new results. In particular, we obtain: A new integer program for the NSW maximization problem whose fractional relaxation has a bounded integrality gap. In contrast, the natural integer program has an unbounded integrality gap. An improved, and tight, factor-2 analysis of the algorithm of [7]; in turn showing that the integrality gap of the above relaxation is at most 2. The approximation factor shown by [7] was 2e^{1/e} ≈ 2.89. A lower bound of e^{1/e} ≈ 1.44 on the integrality gap of this relaxation. New convex programs for natural generalizations of linear Fisher markets and proofs that these markets admit rational equilibria. These results were obtained by establishing connections between previously known disparate results, and they help uncover their mathematical underpinnings. We show a formal connection between the convex programs of Eisenberg and Gale and that of Shmyrev, namely that their duals are equivalent up to a change of variables. Both programs capture equilibria of linear Fisher markets. By adding suitable constraints to Shmyrev’s program, we obtain a convex program that captures equilibria of the spending-restricted market model defined by [7] in the context of the NSW maximization problem. Further, by adding certain integral constraints to this program, we get the integer program for the NSW mentioned above. The basic tool we use is convex programming duality. In the special case of convex programs with linear constraints (but convex objectives), we show a particularly simple way of obtaining dual programs, putting it almost at par with linear program duality. This simple way of finding duals has been used subsequently for many other applications.
{"title":"Convex Program Duality, Fisher Markets, and Nash Social Welfare","authors":"R. Cole, Nikhil R. Devanur, Vasilis Gkatzelis, K. Jain, Tung Mai, V. Vazirani, Sadra Yazdanbod","doi":"10.1145/3033274.3085109","DOIUrl":"https://doi.org/10.1145/3033274.3085109","url":null,"abstract":"We study Fisher markets and the problem of maximizing the Nash social welfare (NSW), and show several closely related new results. In particular, we obtain: A new integer program for the NSW maximization problem whose fractional relaxation has a bounded integrality gap. In contrast, the natural integer program has an unbounded integrality gap. An improved, and tight, factor-2 analysis of the algorithm of [7]; in turn showing that the integrality gap of the above relaxation is at most 2. The approximation factor shown by [7] was 2e^{1/e} ≈ 2.89. A lower bound of e^{1/e} ≈ 1.44 on the integrality gap of this relaxation. New convex programs for natural generalizations of linear Fisher markets and proofs that these markets admit rational equilibria. These results were obtained by establishing connections between previously known disparate results, and they help uncover their mathematical underpinnings. We show a formal connection between the convex programs of Eisenberg and Gale and that of Shmyrev, namely that their duals are equivalent up to a change of variables. Both programs capture equilibria of linear Fisher markets. By adding suitable constraints to Shmyrev’s program, we obtain a convex program that captures equilibria of the spending-restricted market model defined by [7] in the context of the NSW maximization problem. Further, by adding certain integral constraints to this program, we get the integer program for the NSW mentioned above. The basic tool we use is convex programming duality. 
In the special case of convex programs with linear constraints (but convex objectives), we show a particularly simple way of obtaining dual programs, putting it almost at par with linear program duality. This simple way of finding duals has been used subsequently for many other applications.","PeriodicalId":287551,"journal":{"name":"Proceedings of the 2017 ACM Conference on Economics and Computation","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127852790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
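For reference, the two convex programs named in the abstract above can be written out explicitly. The sketch below uses standard notation (budgets m_i, valuations v_{ij}, allocations x_{ij}, spending variables b_{ij}, prices p_j); the Eisenberg-Gale program is the textbook form, while Shmyrev's program is given in one common normalization (signs and constant terms vary across presentations):

```latex
% Eisenberg--Gale: allocations x_{ij}, utilities u_i, budgets m_i
\begin{align*}
\max \quad & \sum_i m_i \log u_i \\
\text{s.t.} \quad & u_i \le \sum_j v_{ij} x_{ij} \quad \forall i, \\
& \sum_i x_{ij} \le 1 \quad \forall j, \qquad x_{ij} \ge 0.
\end{align*}
% Shmyrev: money flows b_{ij}, with prices p_j = \sum_i b_{ij}
\begin{align*}
\max \quad & \sum_{i,j} b_{ij} \log v_{ij} - \sum_j p_j \log p_j \\
\text{s.t.} \quad & \sum_i b_{ij} = p_j \quad \forall j, \qquad
\sum_j b_{ij} = m_i \quad \forall i, \qquad b_{ij} \ge 0.
\end{align*}
```

In Eisenberg-Gale the decision variables are allocations and prices emerge as dual variables of the supply constraints; in Shmyrev the decision variables are money flows and allocations emerge from the duals, which is the change-of-variables connection the abstract refers to.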