Consumer-Optimal Market Segmentation
Nima Haghpanah, Ron Siegel. DOI: 10.2139/ssrn.3333940.

Advances in information technologies have enhanced firms' ability to personalize their offers based on consumer data. A central regulatory question regarding consumer privacy is to what extent, if at all, a firm's ability to collect consumer data should be limited. As a 2012 report by the Federal Trade Commission puts it, "The Commission recognizes the need for flexibility to permit [...] uses of data that benefit consumers. At the same time, [...] there must be some reasonable limit on the collection of consumer data." We study consumer surplus when a multi-product firm uses data to segment the market and make segment-specific offers.

Consider a multi-product seller, for example an online retailer such as Amazon. There is a finite number of consumer types. The type of a consumer specifies her valuation for every possible product or bundle of products. We refer to the distribution of consumer types in a population as a market. The seller may be able to observe certain characteristics of its buyers, perhaps noisily, such as age, sex, or location. Based on the available information, the seller may be able to segment the market and offer each market segment a potentially different menu of products and bundles of products. For instance, the seller may offer bundle discounts to consumers in certain locations, or offer products exclusively to different age groups. The resulting producer and consumer surplus depend on how the market is segmented (the "segmentation"), which in turn depends on the information available to the seller.
{"title":"Consumer-Optimal Market Segmentation","authors":"Nima Haghpapanah, Ron Siegel","doi":"10.2139/ssrn.3333940","DOIUrl":"https://doi.org/10.2139/ssrn.3333940","url":null,"abstract":"Advances in information technologies have enhanced firms' ability to personalize their offers based on consumer data. A central regulatory question regarding consumer privacy is to what extent, if at all, a firm's ability to collect consumer data should be limited. As a 2012 report by the Federal Trade Commission puts it, ?The Commission recognizes the need for flexibility to permit [...] uses of data that benefit consumers. At the same time, [...] there must be some reasonable limit on the collection of consumer data.\"1 We study consumer surplus when a multi product firm uses data to segment the market and make segment-specific offers. Consider a multi product seller, for example an online retailer such as Amazon. There is a finite number of consumer types. The type of a consumer specifies her valuation for every possible product or bundles of products. We refer to the distribution of consumer types in a population as a market. The seller may be able to observe certain characteristics of its buyers, perhaps noisily, such as age, sex, or location. Based on the available information, the seller may be able to segment the market and offer each market segment a potentially different menu of products and bundles of products. For instance, the seller may offer bundle discounts to consumers in certain locations, or offer products exclusively to different age groups. The resulting producer and consumer surplus depend on how the market is segmented (the \"segmentation\"), which in turn depends on the information available to the seller.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122398253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Envy-Freeness Up to Any Item with High Nash Welfare: The Virtue of Donating Items
I. Caragiannis, N. Gravin, Xin Huang. DOI: 10.1145/3328526.3329574.

Several fairness concepts have been proposed recently in attempts to approximate envy-freeness in settings with indivisible goods. Among them, the concept of envy-freeness up to any item (EFX) is arguably the closest to envy-freeness. Unfortunately, EFX allocations are not known to exist except in a few special cases. We make significant progress in this direction. We show that for every instance with additive valuations, there is an EFX allocation of a subset of items with a Nash welfare that is at least half of the maximum possible Nash welfare for the original set of items. That is, after donating some items to a charity, one can distribute the remaining items in a fair way with high efficiency. This bound is proved to be best possible. Our proof is constructive and highlights the importance of the maximum Nash welfare allocation. Starting with such an allocation, our algorithm decides which items to donate and redistributes the initial bundles to the agents, eventually obtaining an allocation with the claimed efficiency guarantee. The application of our algorithm to large markets, where the valuation of an agent for every item is relatively small, yields EFX with almost optimal Nash welfare. We also show that our algorithm can be modified to compute, in polynomial time, EFX allocations that approximate the optimal Nash welfare within a factor of at most 2ρ, using a ρ-approximate allocation as input instead of the maximum Nash welfare one.
{"title":"Envy-Freeness Up to Any Item with High Nash Welfare: The Virtue of Donating Items","authors":"I. Caragiannis, N. Gravin, Xin Huang","doi":"10.1145/3328526.3329574","DOIUrl":"https://doi.org/10.1145/3328526.3329574","url":null,"abstract":"Several fairness concepts have been proposed recently in attempts to approximate envy-freeness in settings with indivisible goods. Among them, the concept of envy-freeness up to any item (EFX) is arguably the closest to envy-freeness. Unfortunately, EFX allocations are not known to exist except in a few special cases. We make significant progress in this direction. We show that for every instance with additive valuations, there is an EFX allocation of a subset of items with a Nash welfare that is at least half of the maximum possible Nash welfare for the original set of items. That is, after donating some items to a charity, one can distribute the remaining items in a fair way with high efficiency. This bound is proved to be best possible. Our proof is constructive and highlights the importance of maximum Nash welfare allocation. Starting with such an allocation, our algorithm decides which items to donate and redistributes the initial bundles to the agents, eventually obtaining an allocation with the claimed efficiency guarantee. The application of our algorithm to large markets, where the valuations of an agent for every item is relatively small, yields EFX with almost optimal Nash welfare. We also show that our algorithm can be modified to compute, in polynomial-time, EFX allocations that approximate optimal Nash welfare within a factor of at most 2ρ, using a ρ-approximate allocation on input instead of the maximum Nash welfare one.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122434095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Optimal Budget-Feasible Mechanisms for Additive Valuations
N. Gravin, Yaonan Jin, P. Lu, Chenhao Zhang. DOI: 10.1145/3328526.3329586.

In this paper, we obtain tight approximation guarantees for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with an approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(\sqrt{2} + 2)$ against the weaker integral benchmark.
{"title":"Optimal Budget-Feasible Mechanisms for Additive Valuations","authors":"N. Gravin, Yaonan Jin, P. Lu, Chenhao Zhang","doi":"10.1145/3328526.3329586","DOIUrl":"https://doi.org/10.1145/3328526.3329586","url":null,"abstract":"In this paper, we obtain the tight approximation guarantees for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with an approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(sqrt2 + 2)$ against the weaker integral benchmark.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133304206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Mind the Mining
G. Goren, A. Spiegelman. DOI: 10.1145/3328526.3329566.

In this paper we revisit mining strategies in Proof-of-Work-based cryptocurrencies and propose two strategies, which we call smart and smarter mining, that in many cases strictly dominate honest mining. In contrast to other known attacks, such as selfish mining, which induce zero-sum games among the miners, the strategies proposed in this paper increase miners' profit by reducing their variable costs (i.e., electricity). Moreover, the proposed strategies are viable for much smaller miners than previously known attacks and, surprisingly, an attack launched by one miner can be profitable for all other miners as well. While saving electricity is very encouraging for the environment, it may affect the coin's security. The smart and smarter mining strategies expose the coin to attacks by miners controlling less than 50% of the computational power, and this vulnerability might only grow when new miners join the coin in response to the increased profit margins induced by these strategies.
{"title":"Mind the Mining","authors":"G. Goren, A. Spiegelman","doi":"10.1145/3328526.3329566","DOIUrl":"https://doi.org/10.1145/3328526.3329566","url":null,"abstract":"In this paper we revisit the mining strategies in Proof of Work based cryptocurrencies and propose two strategies, which we call smart and smarter mining, that in many cases strictly dominate honest mining. In contrast to other known attacks, such as selfish mining, which induce zero-sum games among the miners, the strategies proposed in this paper increase miners' profit by reducing their variable costs (i.e., electricity). Moreover, the proposed strategies are viable for much smaller miners than previously known attacks and, surprisingly, an attack launched by one miner can be profitable for all other miners as well. While saving electricity is very encouraging for the environment, it may affect the coin's security. The smart and smarter mining strategies expose the coin to under 50% attacks, and this vulnerability might only grow when new miners join the coin in response to the increased profit margins induced by these strategies.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130041189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Computing Large Market Equilibria using Abstractions
Christian Kroer, A. Peysakhovich, Eric Sodomka, N. Stier-Moses. DOI: 10.1145/3328526.3329553.
Computing market equilibria is an important practical problem for market design (e.g. fair division, item allocation). However, computing equilibria requires large amounts of information (e.g. all valuations for all buyers for all items) and compute power. We consider ameliorating these issues by applying a method used for solving complex games: constructing a coarsened abstraction of a given market, solving for the equilibrium in the abstraction, and lifting the prices and allocations back to the original market. We show how to bound important quantities such as regret, envy, Nash social welfare, Pareto optimality, and maximin share when the abstracted prices and allocations are used in place of the real equilibrium. We then study two abstraction methods of interest for practitioners: (1) filling in unknown valuations using techniques from matrix completion, and (2) reducing the problem size by aggregating groups of buyers/items into smaller numbers of representative buyers/items and solving for equilibrium in this coarsened market. We find that, in real data, allocations and prices that are relatively close to equilibria can be computed from even very coarse abstractions.
{"title":"Computing Large Market Equilibria using Abstractions","authors":"Christian Kroer, A. Peysakhovich, Eric Sodomka, N. Stier-Moses","doi":"10.1145/3328526.3329553","DOIUrl":"https://doi.org/10.1145/3328526.3329553","url":null,"abstract":"Computing market equilibria is an important practical problem for market design (e.g. fair division, item allocation). However, computing equilibria requires large amounts of information (e.g. all valuations for all buyers for all items) and compute power. We consider ameliorating these issues by applying a method used for solving complex games: constructing a coarsened abstraction of a given market, solving for the equilibrium in the abstraction, and lifting the prices and allocations back to the original market. We show how to bound important quantities such as regret, envy, Nash social welfare, Pareto optimality, and maximin share when the abstracted prices and allocations are used in place of the real equilibrium. We then study two abstraction methods of interest for practitioners: 1) filling in unknown valuations using techniques from matrix completion, 2) reducing the problem size by aggregating groups of buyers/items into smaller numbers of representative buyers/items and solving for equilibrium in this coarsened market. We find that in real data allocations/prices that are relatively close to equilibria can be computed from even very coarse abstractions.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114013270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Optimal Strategies of Blotto Games: Beyond Convexity
Soheil Behnezhad, Avrim Blum, M. Derakhshan, M. Hajiaghayi, C. Papadimitriou, Saeed Seddighin. DOI: 10.1145/3328526.3329608.
The Colonel Blotto game, first introduced by Borel in 1921, is a well-studied game theory classic. Two colonels each have a pool of troops that they divide simultaneously among a set of battlefields. The winner of each battlefield is the colonel who puts more troops in it, and the overall utility of each colonel is the sum of weights of the battlefields that s/he wins. Over the past century, the Colonel Blotto game has found applications in many different forms of competition, from advertisements to politics to sports. Two main objectives have been proposed for this game in the literature: (i) maximizing the guaranteed expected payoff, and (ii) maximizing the probability of obtaining a minimum payoff u. The former corresponds to conventional utility maximization, and the latter concerns scenarios such as elections where the candidates' goal is to maximize the probability of getting at least half of the votes (rather than the expected number of votes). In this paper, we consider both of these objectives and show how it is possible to obtain (almost) optimal solutions that have few strategies in their support. One of the main technical challenges in obtaining bounded-support strategies for the Colonel Blotto game is that the solution space becomes non-convex. This prevents us from using convex programming techniques, which are essentially the main tools used in the literature for finding optimal strategies. However, we show through a set of structural results that the solution space can, interestingly, be partitioned into polynomially many disjoint convex polytopes that can be considered independently. Coupled with a number of other combinatorial observations, this leads to polynomial-time approximation schemes for both of the aforementioned objectives. We also provide the first complexity result for finding the maximin of Blotto-like games: we show that computing the maximin of a generalization of the Colonel Blotto game that we call General Colonel Blotto is exponential-time complete.
{"title":"Optimal Strategies of Blotto Games: Beyond Convexity","authors":"Soheil Behnezhad, Avrim Blum, M. Derakhshan, M. Hajiaghayi, C. Papadimitriou, Saeed Seddighin","doi":"10.1145/3328526.3329608","DOIUrl":"https://doi.org/10.1145/3328526.3329608","url":null,"abstract":"The Colonel Blotto game, first introduced by Borel in 1921, is a well-studied game theory classic. Two colonels each have a pool of troops that they divide simultaneously among a set of battlefields. The winner of each battlefield is the colonel who puts more troops in it and the overall utility of each colonel is the sum of weights of the battlefields that s/he wins. Over the past century, the Colonel Blotto game has found applications in many different forms of competition from advertisements to politics to sports. Two main objectives have been proposed for this game in the literature: (i) maximizing the guaranteed expected payoff, and (ii) maximizing the probability of obtaining a minimum payoff u. The former corresponds to the conventional utility maximization and the latter concerns scenarios such as elections where the candidates' goal is to maximize the probability of getting at least half of the votes (rather than the expected number of votes). In this paper, we consider both of these objectives and show how it is possible to obtain (almost) optimal solutions that have few strategies in their support. One of the main technical challenges in obtaining bounded support strategies for the Colonel Blotto game is that the solution space becomes non-convex. This prevents us from using convex programming techniques in finding optimal strategies which are essentially the main tools that are used in the literature. However, we show through a set of structural results that the solution space can, interestingly, be partitioned into polynomially many disjoint convex polytopes that can be considered independently. Coupled with a number of other combinatorial observations, this leads to polynomial time approximation schemes for both of the aforementioned objectives. We also provide the first complexity result for finding the maximin of Blotto-like games: we show that computing the maximin of a generalization of the Colonel Blotto game that we call General Colonel Blotto is exponential time-complete.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125206420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Individual Fairness in Hindsight
Swati Gupta, Vijay Kamble. DOI: 10.1145/3328526.3329605.

Since many critical decisions impacting human lives are increasingly being made by algorithms, it is important to ensure that the treatment of individuals under such algorithms is demonstrably fair under reasonable notions of fairness. One compelling notion proposed in the literature is that of individual fairness (IF), which advocates that similar individuals should be treated similarly (Dwork et al. 2012). Originally proposed for offline decisions, this notion does not, however, account for temporal considerations relevant for online decision-making. In this paper, we extend the notion of IF to account for the time at which a decision is made, in settings where there exists a notion of conduciveness of decisions as perceived by the affected individuals. We introduce two definitions: (i) fairness-across-time (FT) and (ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF, where treatment of individuals is required to be individually fair relative to the past as well as the future, while in FH, we require a one-sided notion of individual fairness that is defined relative to only the past decisions. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model. Linear regret relative to optimal individually fair decisions is inevitable under FT for non-trivial examples. On the other hand, we design a new algorithm, Cautious Fair Exploration (CAFE), which satisfies FH and achieves sub-linear regret guarantees for a broad range of settings. We characterize lower bounds showing that these guarantees are order-optimal in the worst case. FH can thus be embedded as a primary safeguard against unfair discrimination in algorithmic deployments, without hindering the ability to make good decisions in the long run.
{"title":"Individual Fairness in Hindsight","authors":"Swati Gupta, Vijay Kamble","doi":"10.1145/3328526.3329605","DOIUrl":"https://doi.org/10.1145/3328526.3329605","url":null,"abstract":"Since many critical decisions impacting human lives are increasingly being made by algorithms, it is important to ensure that the treatment of individuals under such algorithms is demonstrably fair under reasonable notions of fairness. One compelling notion proposed in the literature is that of individual fairness (IF), which advocates that similar individuals should be treated similarly (Dwork et al. 2012). Originally proposed for offline decisions, this notion does not, however, account for temporal considerations relevant for online decision-making. In this paper, we extend the notion of IF to account for the time at which a decision is made, in settings where there exists a notion of conduciveness of decisions as perceived by the affected individuals. We introduce two definitions: (i) fairness-across-time (FT) and (ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF where treatment of individuals is required to be individually fair relative to the past as well as future, while in FH, we require a one-sided notion of individual fairness that is defined relative to only the past decisions. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model. Linear regret relative to optimal individually fair decisions is inevitable under FT for non-trivial examples. On the other hand, we design a new algorithm: Cautious Fair Exploration (CAFE), which satisfies FH and achieves sub-linear regret guarantees for a broad range of settings. We characterize lower bounds showing that these guarantees are order-optimal in the worst case. FH can thus be embedded as a primary safeguard against unfair discrimination in algorithmic deployments, without hindering the ability to take good decisions in the long-run.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"2023 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129773287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Prior-free Data Acquisition for Accurate Statistical Estimation
Yiling Chen, Shuran Zheng. DOI: 10.1145/3328526.3329564.

We study a data analyst's problem of acquiring data from self-interested individuals to obtain an accurate estimation of some statistic of a population, subject to an expected budget constraint. Each data holder incurs a cost, which is unknown to the data analyst, to acquire and report his data. The cost can be arbitrarily correlated with the data. The data analyst has an expected budget that she can use to incentivize individuals to provide their data. The goal is to design a joint acquisition-estimation mechanism to optimize the performance of the produced estimator, without any prior information on the underlying distribution of cost and data. We investigate two types of estimation: unbiased point estimation and confidence interval estimation.

Unbiased estimators: We design a truthful, individually rational, online mechanism to acquire data from individuals and output an unbiased estimator of the population mean when the data analyst has no prior information on the cost-data distribution and individuals arrive in a random order. The performance of this mechanism matches that of the optimal mechanism, which knows the true cost distribution, within a constant factor. The performance of an estimator is evaluated by its variance under the worst-case cost-data correlation.

Confidence intervals: We characterize an approximately optimal (within a factor 2) mechanism for obtaining a confidence interval of the population mean when the data analyst knows the true cost distribution at the beginning. This mechanism is efficiently computable. We then design a truthful, individually rational, online algorithm that is only worse than the approximately optimal mechanism by a constant factor. The performance of an estimator is evaluated by its expected length under the worst-case cost-data correlation.
{"title":"Prior-free Data Acquisition for Accurate Statistical Estimation","authors":"Yiling Chen, Shuran Zheng","doi":"10.1145/3328526.3329564","DOIUrl":"https://doi.org/10.1145/3328526.3329564","url":null,"abstract":"We study a data analyst's problem of acquiring data from self-interested individuals to obtain an accurate estimation of some statistic of a population, subject to an expected budget constraint. Each data holder incurs a cost, which is unknown to the data analyst, to acquire and report his data. The cost can be arbitrarily correlated with the data. The data analyst has an expected budget that she can use to incentivize individuals to provide their data. The goal is to design a joint acquisition-estimation mechanism to optimize the performance of the produced estimator, without any prior information on the underlying distribution of cost and data. We investigate two types of estimations: unbiased point estimation and confidence interval estimation. Unbiased estimators: We design a truthful, individually rational, online mechanism to acquire data from individuals and output an unbiased estimator of the population mean when the data analyst has no prior information on the cost-data distribution and individuals arrive in a random order. The performance of this mechanism matches that of the optimal mechanism, which knows the true cost distribution, within a constant factor. The performance of an estimator is evaluated by its variance under the worst-case cost-data correlation. Confidence intervals: We characterize an approximately optimal (within a factor 2) mechanism for obtaining a confidence interval of the population mean when the data analyst knows the true cost distribution at the beginning. This mechanism is efficiently computable. We then design a truthful, individually rational, online algorithm that is only worse than the approximately optimal mechanism by a constant factor. The performance of an estimator is evaluated by its expected length under the worst-case cost-data correlation.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114259232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Smoothed Analysis of Multi-Item Auctions with Correlated Values
Alexandros Psomas, Ariel Schvartzman, S. Weinberg. DOI: 10.1145/3328526.3329563.

Consider a seller with m heterogeneous items for sale to a single additive buyer whose values for the items are arbitrarily correlated. It was previously shown that, in such settings, distributions exist for which the seller's optimal revenue is infinite, but the best "simple" mechanism achieves revenue at most one (Briest et al. 2015, Hart and Nisan 2012), even when m=2. This result has long served as a cautionary tale discouraging the study of multi-item auctions without some notion of "independent items". In this work we initiate a smoothed analysis of such multi-item auction settings. We consider a buyer whose item values are drawn from an arbitrarily correlated multi-dimensional distribution and then randomly perturbed with magnitude δ under several natural perturbation models. On one hand, we prove that the above construction is surprisingly robust to certain natural perturbations of this form, and the infinite gap remains. On the other hand, we provide a smoothed model such that the approximation guarantee of simple mechanisms is smoothed-finite. We show that when the perturbation has magnitude δ, pricing only the grand bundle guarantees an O(1/δ)-approximation to the optimal revenue. That is, no matter the (worst-case) initially correlated distribution, these tiny perturbations suffice to bring the gap down from infinite to finite. We further show that the same guarantees hold when n buyers have values drawn from an arbitrarily correlated mn-dimensional distribution (without any dependence on n). Taken together, these analyses further pin down key properties of correlated distributions that result in large gaps between simplicity and optimality.

Spatial Capacity Planning
Omar Besbes, Francisco Castro, I. Lobel. DOI: 10.2139/ssrn.3292651.

We study the relationship between capacity and performance for a service firm with spatial operations, in the sense that requests arrive with origin-destination pairs. An example of such a system is a ride-hailing platform in which each customer arrives in the system with the need to travel from an origin to a destination. We propose a state-dependent queueing model that captures spatial frictions as well as spatial economies of scale through the service rate. In a classical M/M/n queueing model, the square root safety (SRS) staffing rule is known to balance server utilization and customer wait times. By contrast, we find that the SRS rule does not lead to such a balance in spatial systems. In a spatial environment, pickup times increase the load in the system; furthermore, they are an endogenous source of extra workload that leads the system to operate efficiently only if there is sufficient imbalance between supply and demand. In heavy traffic, we derive the mapping from load to operating regimes and establish implications for various metrics of interest. In particular, to obtain a balance of utilization and wait times, the service firm should use a higher safety factor, proportional to the offered load to the power of 2/3. We also discuss implications of these results for general systems.
{"title":"Spatial Capacity Planning","authors":"Omar Besbes, Francisco Castro, I. Lobel","doi":"10.2139/ssrn.3292651","DOIUrl":"https://doi.org/10.2139/ssrn.3292651","url":null,"abstract":"We study the relationship between capacity and performance for a service firm with spatial operations, in the sense that requests arrive with origin-destination pairs. An example of such a system is a ride-hailing platform in which each customer arrives in the system with the need to travel from an origin to a destination. We propose a state-dependent queueing model that captures spatial frictions as well as spatial economies of scale through the service rate. In a classical M/M/n queueing model, the square root safety (SRS) staffing rule is known to balance server utilization and customer wait times. By contrast, we find that the SRS rule does not lead to such a balance in spatial systems. In a spatial environment, pickup times increase the load in the system; furthermore, they are an endogenous source of extra workload that leads the system to only operate efficiently if there is sufficient imbalance between supply and demand. In heavy traffic, we derive the mapping from load to operating regimes and establish implications on various metrics of interest. In particular, to obtain a balance of utilization and wait times, the service firm should use a higher safety factor, proportional to the offered load to the power of 2/3. We also discuss implications of these results for general systems.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"978 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133843193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}