
Proceedings of the 2019 ACM Conference on Economics and Computation: Latest Publications

Consumer-Optimal Market Segmentation
Pub Date : 2019-02-12 DOI: 10.2139/ssrn.3333940
Nima Haghpanah, Ron Siegel
Advances in information technologies have enhanced firms' ability to personalize their offers based on consumer data. A central regulatory question regarding consumer privacy is to what extent, if at all, a firm's ability to collect consumer data should be limited. As a 2012 report by the Federal Trade Commission puts it, "The Commission recognizes the need for flexibility to permit [...] uses of data that benefit consumers. At the same time, [...] there must be some reasonable limit on the collection of consumer data." We study consumer surplus when a multi-product firm uses data to segment the market and make segment-specific offers. Consider a multi-product seller, for example an online retailer such as Amazon. There is a finite number of consumer types. The type of a consumer specifies her valuation for every possible product or bundle of products. We refer to the distribution of consumer types in a population as a market. The seller may be able to observe certain characteristics of its buyers, perhaps noisily, such as age, sex, or location. Based on the available information, the seller may be able to segment the market and offer each market segment a potentially different menu of products and bundles of products. For instance, the seller may offer bundle discounts to consumers in certain locations, or offer products exclusively to different age groups. The resulting producer and consumer surplus depend on how the market is segmented (the "segmentation"), which in turn depends on the information available to the seller.
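As a small illustration of the objects defined in this abstract (a market of consumer types, a segmentation of it, and the resulting surpluses), the sketch below restricts attention to a single product and a posted price per segment; the types, masses, and the single-product restriction are assumptions made purely for illustration, not the paper's multi-product model.

```python
def optimal_price(segment):
    """Revenue-maximizing posted price for one segment.
    segment: list of (valuation, mass) pairs describing the consumer types in it."""
    best_price, best_revenue = 0.0, 0.0
    for price, _ in segment:
        revenue = price * sum(mass for value, mass in segment if value >= price)
        if revenue > best_revenue:
            best_price, best_revenue = price, revenue
    return best_price

def surplus(segmentation):
    """Producer and consumer surplus when each segment faces its own monopoly price."""
    producer, consumer = 0.0, 0.0
    for segment in segmentation:
        price = optimal_price(segment)
        for value, mass in segment:
            if value >= price:
                producer += price * mass
                consumer += (value - price) * mass
    return producer, consumer

# Hypothetical market: three types with valuations 1, 2, 3 and equal mass.
market = [(1.0, 1 / 3), (2.0, 1 / 3), (3.0, 1 / 3)]
print(surplus([market]))                                        # no segmentation
print(surplus([[(1.0, 1 / 3)], [(2.0, 1 / 3), (3.0, 1 / 3)]]))  # a finer segmentation
```

Comparing the two printed pairs shows how the choice of segmentation shifts surplus between the seller and the consumers, which is the quantity the paper studies from the consumers' side.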
Citations: 1
Envy-Freeness Up to Any Item with High Nash Welfare: The Virtue of Donating Items
Pub Date : 2019-02-12 DOI: 10.1145/3328526.3329574
I. Caragiannis, N. Gravin, Xin Huang
Several fairness concepts have been proposed recently in attempts to approximate envy-freeness in settings with indivisible goods. Among them, the concept of envy-freeness up to any item (EFX) is arguably the closest to envy-freeness. Unfortunately, EFX allocations are not known to exist except in a few special cases. We make significant progress in this direction. We show that for every instance with additive valuations, there is an EFX allocation of a subset of items with a Nash welfare that is at least half of the maximum possible Nash welfare for the original set of items. That is, after donating some items to a charity, one can distribute the remaining items in a fair way with high efficiency. This bound is proved to be best possible. Our proof is constructive and highlights the importance of maximum Nash welfare allocation. Starting with such an allocation, our algorithm decides which items to donate and redistributes the initial bundles to the agents, eventually obtaining an allocation with the claimed efficiency guarantee. The application of our algorithm to large markets, where the valuations of an agent for every item is relatively small, yields EFX with almost optimal Nash welfare. We also show that our algorithm can be modified to compute, in polynomial-time, EFX allocations that approximate optimal Nash welfare within a factor of at most 2ρ, using a ρ-approximate allocation on input instead of the maximum Nash welfare one.
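For reference, the EFX condition used above can be checked directly from its definition under additive valuations: agent i must value her own bundle at least as much as agent j's bundle after removing any single item from it. The sketch below is only this definition check on a hypothetical instance, not the paper's donation-and-redistribution algorithm.

```python
def is_efx(valuations, allocation):
    """valuations[i][g]: agent i's additive value for item g.
    allocation[i]: list of items held by agent i.
    Returns True if the allocation is envy-free up to any item (EFX)."""
    n = len(allocation)
    for i in range(n):
        own = sum(valuations[i][g] for g in allocation[i])
        for j in range(n):
            if i == j or not allocation[j]:
                continue
            bundle_j = sum(valuations[i][g] for g in allocation[j])
            # i must not envy j after the removal of *any* single item from j's bundle.
            for g in allocation[j]:
                if own < bundle_j - valuations[i][g]:
                    return False
    return True

# Hypothetical instance: 2 agents, 3 items.
valuations = [[5, 3, 1], [2, 4, 4]]
print(is_efx(valuations, [[0], [1, 2]]))    # True
print(is_efx(valuations, [[], [0, 1, 2]]))  # False: agent 0 envies even after a removal
```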
Citations: 87
Optimal Budget-Feasible Mechanisms for Additive Valuations
Pub Date : 2019-02-12 DOI: 10.1145/3328526.3329586
N. Gravin, Yaonan Jin, P. Lu, Chenhao Zhang
In this paper, we obtain tight approximation guarantees for budget-feasible mechanisms with an additive buyer. We propose a new simple randomized mechanism with an approximation ratio of $2$, improving the previous best known result of $3$. Our bound is tight with respect to either the optimal offline benchmark or its fractional relaxation. We also present a simple deterministic mechanism with the tight approximation guarantee of $3$ against the fractional optimum, improving the best known result of $(\sqrt{2} + 2)$ against the weaker integral benchmark.
Citations: 15
Mind the Mining
Pub Date : 2019-02-11 DOI: 10.1145/3328526.3329566
G. Goren, A. Spiegelman
In this paper we revisit the mining strategies in Proof of Work based cryptocurrencies and propose two strategies, which we call smart and smarter mining, that in many cases strictly dominate honest mining. In contrast to other known attacks, such as selfish mining, which induce zero-sum games among the miners, the strategies proposed in this paper increase miners' profit by reducing their variable costs (i.e., electricity). Moreover, the proposed strategies are viable for much smaller miners than previously known attacks and, surprisingly, an attack launched by one miner can be profitable for all other miners as well. While saving electricity is very encouraging for the environment, it may affect the coin's security. The smart and smarter mining strategies expose the coin to under 50% attacks, and this vulnerability might only grow when new miners join the coin in response to the increased profit margins induced by these strategies.
Citations: 32
Computing Large Market Equilibria using Abstractions
Pub Date : 2019-01-18 DOI: 10.1145/3328526.3329553
Christian Kroer, A. Peysakhovich, Eric Sodomka, N. Stier-Moses
Computing market equilibria is an important practical problem for market design (e.g. fair division, item allocation). However, computing equilibria requires large amounts of information (e.g. all valuations for all buyers for all items) and compute power. We consider ameliorating these issues by applying a method used for solving complex games: constructing a coarsened abstraction of a given market, solving for the equilibrium in the abstraction, and lifting the prices and allocations back to the original market. We show how to bound important quantities such as regret, envy, Nash social welfare, Pareto optimality, and maximin share when the abstracted prices and allocations are used in place of the real equilibrium. We then study two abstraction methods of interest for practitioners: 1) filling in unknown valuations using techniques from matrix completion, 2) reducing the problem size by aggregating groups of buyers/items into smaller numbers of representative buyers/items and solving for equilibrium in this coarsened market. We find that in real data allocations/prices that are relatively close to equilibria can be computed from even very coarse abstractions.
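A minimal sketch of the second abstraction idea mentioned above (aggregating groups of buyers into representative buyers), under assumptions of my own: group labels come from any clustering of the valuation vectors, each representative buyer gets the budget-weighted mean valuation of its group and the group's total budget, and an equilibrium solver would then be run on the smaller market. This is an illustrative coarsening, not the paper's exact procedure.

```python
import numpy as np

def aggregate_buyers(valuations, budgets, labels):
    """Coarsen a market by merging buyers with the same label into one representative buyer.
    valuations: (n_buyers, n_items) array of per-item values.
    budgets:    (n_buyers,) array of buyer budgets.
    labels:     (n_buyers,) array of group ids (e.g. from any clustering)."""
    rep_vals, rep_budgets = [], []
    for g in np.unique(labels):
        members = labels == g
        weights = budgets[members] / budgets[members].sum()
        rep_vals.append(weights @ valuations[members])   # budget-weighted mean valuation
        rep_budgets.append(budgets[members].sum())       # pooled budget of the group
    return np.array(rep_vals), np.array(rep_budgets)

# Hypothetical market: 4 buyers, 3 items, two obvious groups.
valuations = np.array([[1.0, 0.1, 0.0],
                       [0.9, 0.2, 0.1],
                       [0.0, 0.8, 1.0],
                       [0.1, 0.9, 0.9]])
budgets = np.array([1.0, 1.0, 2.0, 2.0])
labels = np.array([0, 0, 1, 1])

rep_vals, rep_budgets = aggregate_buyers(valuations, budgets, labels)
print(rep_vals)      # valuation vectors of the 2 representative buyers
print(rep_budgets)   # their budgets: [2. 4.]
# An equilibrium would then be computed for this 2-buyer market, and its prices
# and allocations lifted back to the original 4 buyers.
```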
Citations: 25
Optimal Strategies of Blotto Games: Beyond Convexity
Pub Date : 2019-01-14 DOI: 10.1145/3328526.3329608
Soheil Behnezhad, Avrim Blum, M. Derakhshan, M. Hajiaghayi, C. Papadimitriou, Saeed Seddighin
The Colonel Blotto game, first introduced by Borel in 1921, is a well-studied game theory classic. Two colonels each have a pool of troops that they divide simultaneously among a set of battlefields. The winner of each battlefield is the colonel who puts more troops in it and the overall utility of each colonel is the sum of weights of the battlefields that s/he wins. Over the past century, the Colonel Blotto game has found applications in many different forms of competition from advertisements to politics to sports. Two main objectives have been proposed for this game in the literature: (i) maximizing the guaranteed expected payoff, and (ii) maximizing the probability of obtaining a minimum payoff u. The former corresponds to the conventional utility maximization and the latter concerns scenarios such as elections where the candidates' goal is to maximize the probability of getting at least half of the votes (rather than the expected number of votes). In this paper, we consider both of these objectives and show how it is possible to obtain (almost) optimal solutions that have few strategies in their support. One of the main technical challenges in obtaining bounded support strategies for the Colonel Blotto game is that the solution space becomes non-convex. This prevents us from using convex programming techniques in finding optimal strategies which are essentially the main tools that are used in the literature. However, we show through a set of structural results that the solution space can, interestingly, be partitioned into polynomially many disjoint convex polytopes that can be considered independently. Coupled with a number of other combinatorial observations, this leads to polynomial time approximation schemes for both of the aforementioned objectives. We also provide the first complexity result for finding the maximin of Blotto-like games: we show that computing the maximin of a generalization of the Colonel Blotto game that we call General Colonel Blotto is exponential time-complete.
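The game itself is simple to state in code. The sketch below evaluates one play of a discrete Colonel Blotto instance as described above: each battlefield goes to the colonel who committed more troops, and a colonel's utility is the total weight of the battlefields she wins. Ties are split evenly here, which is one common convention; the troop counts and weights are hypothetical.

```python
def blotto_payoffs(alloc_a, alloc_b, weights):
    """Payoffs of a single Colonel Blotto play.
    alloc_a, alloc_b: troops each colonel places on every battlefield.
    weights: value of each battlefield. Ties are split evenly (one common convention)."""
    payoff_a = payoff_b = 0.0
    for a, b, w in zip(alloc_a, alloc_b, weights):
        if a > b:
            payoff_a += w
        elif b > a:
            payoff_b += w
        else:
            payoff_a += w / 2
            payoff_b += w / 2
    return payoff_a, payoff_b

# Hypothetical instance: 10 troops each, 3 battlefields with weights 1, 2, 3.
print(blotto_payoffs([2, 3, 5], [4, 3, 3], weights=[1, 2, 3]))  # (4.0, 2.0)
```

A mixed strategy is then a distribution over such troop divisions, and the objectives discussed in the abstract are evaluated over draws from the two colonels' mixed strategies.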
Citations: 17
Individual Fairness in Hindsight
Pub Date : 2018-12-10 DOI: 10.1145/3328526.3329605
Swati Gupta, Vijay Kamble
Since many critical decisions impacting human lives are increasingly being made by algorithms, it is important to ensure that the treatment of individuals under such algorithms is demonstrably fair under reasonable notions of fairness. One compelling notion proposed in the literature is that of individual fairness (IF), which advocates that similar individuals should be treated similarly (Dwork et al. 2012). Originally proposed for offline decisions, this notion does not, however, account for temporal considerations relevant for online decision-making. In this paper, we extend the notion of IF to account for the time at which a decision is made, in settings where there exists a notion of conduciveness of decisions as perceived by the affected individuals. We introduce two definitions: (i) fairness-across-time (FT) and (ii) fairness-in-hindsight (FH). FT is the simplest temporal extension of IF where treatment of individuals is required to be individually fair relative to the past as well as future, while in FH, we require a one-sided notion of individual fairness that is defined relative to only the past decisions. We show that these two definitions can have drastically different implications in the setting where the principal needs to learn the utility model. Linear regret relative to optimal individually fair decisions is inevitable under FT for non-trivial examples. On the other hand, we design a new algorithm: Cautious Fair Exploration (CAFE), which satisfies FH and achieves sub-linear regret guarantees for a broad range of settings. We characterize lower bounds showing that these guarantees are order-optimal in the worst case. FH can thus be embedded as a primary safeguard against unfair discrimination in algorithmic deployments, without hindering the ability to take good decisions in the long-run.
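As background for the definitions above, the individual-fairness notion of Dwork et al. (2012) is a Lipschitz-style condition: the gap between the decisions given to two individuals should not exceed the distance between the individuals under a task-specific similarity metric. The sketch below checks this condition for deterministic scores on hypothetical data; the temporal variants FT and FH introduced in the paper additionally constrain how such comparisons apply across decision times, which is not modeled here.

```python
def is_individually_fair(individuals, decisions, metric, decision_metric):
    """Check the Lipschitz-style condition of Dwork et al.:
    decision_metric(decisions[i], decisions[j]) <= metric(individuals[i], individuals[j])
    must hold for every pair of individuals."""
    n = len(individuals)
    for i in range(n):
        for j in range(i + 1, n):
            if decision_metric(decisions[i], decisions[j]) > metric(individuals[i], individuals[j]):
                return False
    return True

# Hypothetical one-dimensional features and score-based decisions.
features = [0.10, 0.12, 0.90]
similarity = lambda x, y: abs(x - y)   # assumed task-specific similarity metric
score_gap = lambda a, b: abs(a - b)

print(is_individually_fair(features, [0.50, 0.51, 0.95], similarity, score_gap))  # True
print(is_individually_fair(features, [0.50, 0.80, 0.95], similarity, score_gap))  # False: similar individuals, very different scores
```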
Citations: 50
Prior-free Data Acquisition for Accurate Statistical Estimation
Pub Date : 2018-11-30 DOI: 10.1145/3328526.3329564
Yiling Chen, Shuran Zheng
We study a data analyst's problem of acquiring data from self-interested individuals to obtain an accurate estimation of some statistic of a population, subject to an expected budget constraint. Each data holder incurs a cost, which is unknown to the data analyst, to acquire and report his data. The cost can be arbitrarily correlated with the data. The data analyst has an expected budget that she can use to incentivize individuals to provide their data. The goal is to design a joint acquisition-estimation mechanism to optimize the performance of the produced estimator, without any prior information on the underlying distribution of cost and data. We investigate two types of estimations: unbiased point estimation and confidence interval estimation. Unbiased estimators: We design a truthful, individually rational, online mechanism to acquire data from individuals and output an unbiased estimator of the population mean when the data analyst has no prior information on the cost-data distribution and individuals arrive in a random order. The performance of this mechanism matches that of the optimal mechanism, which knows the true cost distribution, within a constant factor. The performance of an estimator is evaluated by its variance under the worst-case cost-data correlation. Confidence intervals: We characterize an approximately optimal (within a factor 2) mechanism for obtaining a confidence interval of the population mean when the data analyst knows the true cost distribution at the beginning. This mechanism is efficiently computable. We then design a truthful, individually rational, online algorithm that is only worse than the approximately optimal mechanism by a constant factor. The performance of an estimator is evaluated by its expected length under the worst-case cost-data correlation.
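One standard ingredient behind producing an unbiased mean estimate when data points are acquired with unequal probabilities is inverse-probability weighting (a Horvitz-Thompson style estimator). The sketch below demonstrates just that statistical step on hypothetical data with assumed acquisition probabilities; it is not the paper's truthful online mechanism, which must also elicit the unknown costs and respect the expected budget.

```python
import random

def ipw_mean_estimate(data, acquire_probs, rng):
    """One run of an inverse-probability-weighted estimate of the population mean:
    each individual's data is acquired independently with a known probability,
    and acquired points are re-weighted by 1/probability to keep the estimate unbiased."""
    n = len(data)
    total = 0.0
    for x, p in zip(data, acquire_probs):
        if rng.random() < p:
            total += x / p
    return total / n

rng = random.Random(0)
data = [1.0, 2.0, 6.0, 9.0]      # hypothetical private data
probs = [0.9, 0.8, 0.5, 0.3]     # assumed known acquisition probabilities

true_mean = sum(data) / len(data)
runs = [ipw_mean_estimate(data, probs, rng) for _ in range(100_000)]
print(true_mean, sum(runs) / len(runs))  # the two numbers should be close
```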
Citations: 20
Smoothed Analysis of Multi-Item Auctions with Correlated Values
Pub Date : 2018-11-29 DOI: 10.1145/3328526.3329563
Alexandros Psomas, Ariel Schvartzman, S. Weinberg
Consider a seller with m heterogeneous items for sale to a single additive buyer whose values for the items are arbitrarily correlated. It was previously shown that, in such settings, distributions exist for which the seller's optimal revenue is infinite, but the best "simple" mechanism achieves revenue at most one (Briest et al. 2015, Hart and Nisan 2012), even when m=2. This result has long served as a cautionary tale discouraging the study of multi-item auctions without some notion of "independent items". In this work we initiate a smoothed analysis of such multi-item auction settings. We consider a buyer whose item values are drawn from an arbitrarily correlated multi-dimensional distribution then randomly perturbed with magnitude δ under several natural perturbation models. On one hand, we prove that the above construction is surprisingly robust to certain natural perturbations of this form, and the infinite gap remains. On the other hand, we provide a smoothed model such that the approximation guarantee of simple mechanisms is smoothed-finite. We show that when the perturbation has magnitude δ, pricing only the grand bundle guarantees an O(1/δ)-approximation to the optimal revenue. That is, no matter the (worst-case) initially correlated distribution, these tiny perturbations suffice to bring the gap down from infinite to finite. We further show that the same guarantees hold when n buyers have values drawn from an arbitrarily correlated mn-dimensional distribution (without any dependence on n). Taken together, these analyses further pin down key properties of correlated distributions that result in large gaps between simplicity and optimality.
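For intuition about the positive result, "pricing only the grand bundle" means offering all m items together at a single price; an additive buyer purchases whenever her total value for the items is at least that price. The sketch below estimates the revenue-maximizing grand-bundle price from a hypothetical sample of correlated value vectors; the perturbation analysis behind the O(1/δ) guarantee is of course not reproduced.

```python
def best_grand_bundle_price(value_samples):
    """Revenue-maximizing single price for the grand bundle, estimated on samples.
    value_samples: per-item value vectors, one per sampled buyer; an additive
    buyer's value for the grand bundle is the sum of her item values."""
    totals = sorted(sum(v) for v in value_samples)
    n = len(totals)
    best_price, best_revenue = 0.0, 0.0
    for i, price in enumerate(totals):
        revenue = price * (n - i) / n  # fraction of sampled buyers whose total value >= price
        if revenue > best_revenue:
            best_price, best_revenue = price, revenue
    return best_price, best_revenue

# Hypothetical correlated two-item values: the second value tracks the first.
samples = [(1.0, 1.1), (2.0, 2.3), (4.0, 3.8), (8.0, 8.5)]
print(best_grand_bundle_price(samples))
```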
Citations: 13
Spatial Capacity Planning
Pub Date : 2018-11-26 DOI: 10.2139/ssrn.3292651
Omar Besbes, Francisco Castro, I. Lobel
We study the relationship between capacity and performance for a service firm with spatial operations, in the sense that requests arrive with origin-destination pairs. An example of such a system is a ride-hailing platform in which each customer arrives in the system with the need to travel from an origin to a destination. We propose a state-dependent queueing model that captures spatial frictions as well as spatial economies of scale through the service rate. In a classical M/M/n queueing model, the square root safety (SRS) staffing rule is known to balance server utilization and customer wait times. By contrast, we find that the SRS rule does not lead to such a balance in spatial systems. In a spatial environment, pickup times increase the load in the system; furthermore, they are an endogenous source of extra workload that leads the system to only operate efficiently if there is sufficient imbalance between supply and demand. In heavy traffic, we derive the mapping from load to operating regimes and establish implications on various metrics of interest. In particular, to obtain a balance of utilization and wait times, the service firm should use a higher safety factor, proportional to the offered load to the power of 2/3. We also discuss implications of these results for general systems.
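The staffing rules being compared above are easy to contrast numerically. The sketch below evaluates the classical square-root-safety rule in its usual form, capacity = R + beta * R**(1/2), against a safety factor proportional to R**(2/3) as reported for spatial systems, where R is the offered load; the value of beta and the load values are purely illustrative.

```python
def srs_staffing(offered_load, beta):
    """Classical square-root-safety rule: capacity = load + beta * load**0.5."""
    return offered_load + beta * offered_load ** 0.5

def spatial_staffing(offered_load, beta):
    """Safety factor proportional to load**(2/3), as reported for spatial systems."""
    return offered_load + beta * offered_load ** (2.0 / 3.0)

for load in (100, 1_000, 10_000):  # hypothetical offered loads
    print(load,
          round(srs_staffing(load, beta=1.0), 1),
          round(spatial_staffing(load, beta=1.0), 1))
```

The gap between the two rules widens as the offered load grows, which is the sense in which spatial systems require a larger safety capacity than the classical square-root prescription.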
Citations: 36