Nicole Immorlica, Brendan Lucier, Emmanouil Pountourakis, Sam Taggart
In a market with repeated sales of a single item to a single buyer, prior work has established the existence of a zero-revenue perfect Bayesian equilibrium in the absence of a commitment device for the seller. This counterintuitive outcome is the result of strategic purchasing decisions, where the buyer worries that the seller will update future prices in response to past purchasing behavior. We first show that, in fact, almost any revenue can be achieved in equilibrium, but the zero-revenue equilibrium uniquely survives natural refinements. This establishes that single-buyer markets without commitment are subject to market failure. However, our main result shows that this market failure depends crucially on the assumption of a single buyer. If there are multiple buyers, the seller can approximate the revenue that is possible with commitment. We construct an intuitive equilibrium for multiple buyers that survives our refinements, in which the seller learns from past purchasing behavior and obtains a constant factor of the per-round Myerson optimal revenue. The seller's pricing policy has a natural explore-exploit structure: the seller starts with low prices that gradually ascend to learn buyers' values, and in later rounds exploits the surviving high-valued buyers. The result resembles an ascending-price auction, implemented over time. This relates to the intuition from the Coase conjecture in the durable-goods literature [Coase 1972], which states that in the absence of commitment, one should expect the VCG outcome (which, for multiple buyers, yields non-trivial revenue for the seller). We further explore this relationship to the Coase conjecture by considering a setting with an unlimited supply of goods each round. The Coasian intuition would suggest that the seller makes no revenue in this case, since the VCG outcome gives each item away for a trivial price. However, we show that this intuition does not hold for our setting with non-durable goods.
As in the single-item setting, when the seller is constrained to posting a single, anonymous price to all buyers, there exist equilibria for which the seller's revenue is within a constant factor of the Myerson optimal revenue. Finally, we consider the importance of our restriction to anonymous prices. We show that if the seller is permitted to offer different prices to each agent then the Coasian intuition from the single-item setting binds once more: the seller is no longer able to extract nontrivial revenue from any equilibrium with sufficiently natural structure. In other words, the restriction of the seller to an anonymous price was crucial in deriving nontrivial revenue with unlimited supply. Intuitively, an anonymous price mitigates the ability of the seller to use the information an individual buyer leaks with each purchasing decision. Consequently, buyers are more willing to make nontrivial purchasing decisions, which in turn allows the seller to learn.
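The explore-exploit pricing policy described above can be illustrated with a toy simulation (a hypothetical sketch with myopically purchasing buyers; the paper's actual equilibrium involves strategic buyer behavior, which this does not model):

```python
def simulate_ascending_seller(values, rounds=20, start_price=0.1, step=1.5):
    """Toy explore-exploit pricing: post a low anonymous price, raise it
    geometrically while someone still buys, and once a price is rejected,
    settle at the last price that attracted a buyer."""
    price, revenue = start_price, 0.0
    last_accepted, exploiting = None, False
    for _ in range(rounds):
        buyers = sum(1 for v in values if v >= price)  # myopic purchase rule
        if buyers:
            revenue += price * buyers
            last_accepted = price
            if not exploiting:
                price *= step          # explore: ascend to learn values
        elif last_accepted is not None:
            exploiting = True          # exploit: freeze at the survivors' price
            price = last_accepted
    return revenue
```

With buyer values [0.3, 0.6, 0.9], the price ascends until even the highest-valued buyer balks, then settles just below 0.9 and collects revenue from that buyer in every remaining round.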
"Repeated Sales with Multiple Strategic Buyers." In Proceedings of the 2017 ACM Conference on Economics and Computation, June 20, 2017. DOI: 10.1145/3033274.3085130
In the single-resource dynamic fair division framework there is a homogeneous resource that is shared between agents dynamically arriving and departing over time. When n agents are present, there is only one truly "fair" allocation: each agent receives 1/n of the resource. Implementing this static solution in the dynamic world is notoriously impractical; there are too many disruptions to existing allocations: for a new agent to get her fair share, all other agents must give up a small piece. A natural remedy is simply to restrict the number of allowed disruptions when a new agent arrives. [16] considered this setting and introduced a natural benchmark, the fairness ratio: the ratio of the minimal share to the ideal share (1/k when there are k agents in the system). They described an algorithm that obtains the optimal fairness ratio when d ≥ 1 disruptions are allowed per arriving agent. However, in systems with high arrival rates, even one disruption per arrival can be too costly. We consider the scenario when fewer than one disruption per arrival is allowed. We show that we can maintain high levels of fairness even with significantly fewer than one disruption per arrival. In particular, we present an instance-optimal algorithm (the input to the algorithm is a vector of allowed disruptions) and show that the fairness ratio of this algorithm decays logarithmically with c, where c is the largest number of consecutive time steps in which no disruptions are allowed. We then consider dynamic fair division with multiple, heterogeneous resources. In this model, agents demand the resources in fixed proportions, known in economics as Leontief preferences. We show that the general problem is NP-hard, even if the resource demands are binary and known in advance. We study the case where the fairness criterion is Dominant Resource Fairness (DRF), and the demand vectors are binary.
We design a generic algorithm for this setting using a reduction to the single-resource case. To prove an impossibility result, we take an integer program for the problem and analyze an algorithm for constructing dual solutions to a "residual" linear program; this approach may be of independent interest.
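To make the fairness-ratio benchmark concrete, here is a toy dynamic allocation with exactly one disruption per arrival (an illustrative sketch, not the algorithm of [16]; the halving rule in `arrivals_one_disruption` is hypothetical):

```python
def fairness_ratio(shares):
    """Fairness ratio: the minimal share divided by the ideal share 1/k."""
    return min(shares) * len(shares)

def arrivals_one_disruption(n):
    """Toy dynamic allocation (hypothetical, not the algorithm of [16]):
    each arriving agent disrupts exactly one existing agent, the one
    holding the largest share, and takes half of that share."""
    shares = [1.0]
    ratios = [fairness_ratio(shares)]
    for _ in range(n - 1):
        i = max(range(len(shares)), key=lambda j: shares[j])
        shares[i] /= 2
        shares.append(shares[i])        # newcomer gets the split-off half
        ratios.append(fairness_ratio(shares))
    return shares, ratios
```

With four arrivals the shares end at [0.25, 0.25, 0.25, 0.25]; the fairness ratio dips to 0.75 when the third agent arrives, since one agent is temporarily left holding only 1/4 while the ideal share is 1/3.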
E. Friedman, Alexandros Psomas, Shai Vardi. "Controlled Dynamic Fair Division." In Proceedings of the 2017 ACM Conference on Economics and Computation, June 20, 2017. DOI: 10.1145/3033274.3085123
Information Elicitation without Verification (IEWV) is a classic problem in which a principal wants to truthfully elicit high-quality answers to some tasks from strategic agents, even though she cannot evaluate the quality of agents' contributions. The established solution to this problem is a class of peer prediction mechanisms, where each agent is rewarded based on how his answers compare with those of his peer agents. These peer prediction mechanisms are designed by exploiting the stochastic correlation of agents' answers. The prior distribution of agents' true answers is often assumed to be known to the principal, or at least to the agents. In this paper, we consider the problem of IEWV for heterogeneous binary signal tasks, where the answer distributions for different tasks are different and unknown a priori. A concrete setting is eliciting labels for training data. Here, data points are represented by their feature vectors x and the principal wants to obtain corresponding binary labels y from strategic agents. We design peer prediction mechanisms that leverage not only the stochastic correlation of agents' labels for the same feature vector x but also the (learned) correlation between feature vectors x and the ground-truth labels y. In our mechanism, each agent is rewarded by how his answer compares with a reference answer generated by a classification algorithm specialized for dealing with noisy data. All agents truthfully reporting and exerting high effort forms a Bayesian Nash equilibrium. Some benefits of this approach include: (1) We do not always need to assign each task to multiple workers to obtain redundant answers. (2) A class of surrogate loss functions for binary classification can help us design new reward functions for peer prediction. (3) The symmetric uninformative reporting strategy (pure or mixed) is not an equilibrium strategy. (4) The principal does not need to know the joint distribution of workers' information a priori.
We hope this work points to a new and promising direction for information elicitation via more intelligent algorithms.
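The reward structure, comparing each report against a learned reference answer, can be sketched as follows (a minimal illustration with a hypothetical 1-D threshold learner trained on majority-vote labels, not the paper's surrogate-loss construction):

```python
def majority(labels):
    """Majority vote over noisy binary labels (ties break to 1)."""
    return 1 if sum(labels) * 2 >= len(labels) else 0

def train_reference(data):
    """data: list of (x, [noisy labels]) with scalar features. Fit a
    trivial 1-D threshold classifier (a hypothetical stand-in for the
    paper's noise-tolerant learner) on majority-vote labels."""
    pts = sorted((x, majority(ys)) for x, ys in data)
    best_t, best_err = None, float("inf")
    for t in (x for x, _ in pts):
        # training error of the rule "predict 1 iff x >= t"
        err = sum((x >= t) != (y == 1) for x, y in pts)
        if err < best_err:
            best_t, best_err = t, err
    return lambda x: 1 if x >= best_t else 0

def reward(report, x, clf, bonus=1.0):
    """Pay the agent iff the report agrees with the learned reference."""
    return bonus if report == clf(x) else 0.0
```

Unlike classic peer prediction, the reference answer comes from a classifier rather than from a redundant peer report on the same task.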
Yang Liu, Yiling Chen. "Machine-Learning Aided Peer Prediction." In Proceedings of the 2017 ACM Conference on Economics and Computation, June 20, 2017. DOI: 10.1145/3033274.3085126
D. Goldstein, R. McAfee, Siddharth Suri, J. R. Wright
In the classical secretary problem, one attempts to find the maximum of an unknown and unlearnable distribution through sequential search. In many real-world searches, however, distributions are not entirely unknown and can be learned through experience. To investigate learning in such a repeated secretary problem we conduct a large-scale behavioral experiment in which people search repeatedly from fixed distributions. In contrast to prior investigations that find no evidence for learning in the classical scenario, in the repeated setting we observe substantial learning resulting in near-optimal stopping behavior. We conduct a Bayesian comparison of multiple behavioral models which shows that participants' behavior is best described by a class of threshold-based models that contains the theoretically optimal strategy. In fact, fitting such a threshold-based model to data reveals players' estimated thresholds to be surprisingly close to the optimal thresholds after only a small number of games.
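For intuition, the optimal strategy when the distribution is known is threshold-based. The sketch below computes the thresholds for the variant in which the searcher maximizes the expected value of the accepted draw from n i.i.d. U[0,1] samples; the experiment's payoff of selecting the maximum yields different numbers, but the same descending-threshold structure:

```python
def uniform_thresholds(n):
    """Acceptance thresholds for sequential search over n i.i.d. U[0,1]
    draws with a known distribution, maximizing the expected accepted
    value. Uses E[max(X, c)] = (1 + c**2) / 2 for X ~ U[0,1]:
    v_k (expected value with k draws left) satisfies v_1 = 1/2 and
    v_k = (1 + v_{k-1}**2) / 2. Accept the draw at position i (0-indexed)
    iff it beats the continuation value v_{n-1-i}; the last draw is
    always accepted."""
    vals = [0.5]                                  # v_1: one draw left -> E[X]
    for _ in range(2, n):
        vals.append((1 + vals[-1] ** 2) / 2)      # v_k = E[max(X, v_{k-1})]
    return [vals[n - 2 - i] if i < n - 1 else 0.0 for i in range(n)]
```

The thresholds decrease over positions (e.g. 0.625, then 0.5, then 0 for n = 3), matching the intuition that searchers should grow less picky as opportunities run out.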
"Learning in the Repeated Secretary Problem." In Proceedings of the 2017 ACM Conference on Economics and Computation, June 20, 2017. DOI: 10.1145/3033274.3085112
A critical, yet underappreciated, feature of market design is that centralized markets operate within a broader economic context; often market designers cannot force participants to join a centralized market. As such, well-designed centralized markets must induce participants to join voluntarily, in spite of pre-existing decentralized institutions they may already be using. Utilizing the general framework of Monderer and Tennenholtz (2006), we take the view that centralizing a market is akin to designing a mediator to which people may sign away their decision rights. The mediator is voluntary in the sense that it cannot condition the actions of those who participate on the actions of those who do not. Within this setting we propose a new desideratum for market design: Dominant Individual Rationality (D-IR). A mediator is D-IR if every decentralized strategy is weakly dominated by some centralized strategy. While such a criterion does not offer a prediction about how people will behave within the centralized market, it does provide a strong guarantee that all players will use centralized strategies rather than opting out of the centralized market. We show that a suitable modification of the Boston mechanism satisfies D-IR and that a similar modification of any stable matching mechanism satisfies an approximation of D-IR. In both cases the modification relies on allowing the receiving end of the market to accept offers in either the centralized or decentralized part of the market. This design closely resembles the suggestion of Niederle and Roth (2006) about centralizing the market for gastroenterologists. Relative to their analysis, ours highlights why this design feature, coupled with some, but not all, matching algorithms, is effective in inducing participation of the proposing side of the market. Further, by highlighting its role in attaining (approximate) D-IR, our analysis provides a new non-cooperative justification for stability.
In other applications we demonstrate that, suitably modified, Top Trading Cycles satisfies D-IR, and double auctions satisfy approximate D-IR.
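The D-IR condition itself is a mechanical dominance check, which can be verified directly on a finite game (a toy single-agent payoff table; the strategy names and payoffs below are hypothetical):

```python
def weakly_dominates(pay, s, t):
    """s weakly dominates t: at least as good against every opponent
    profile (the columns of the payoff table), strictly better in some."""
    return (all(a >= b for a, b in zip(pay[s], pay[t]))
            and any(a > b for a, b in zip(pay[s], pay[t])))

def is_dir(pay, centralized, decentralized):
    """D-IR holds if every decentralized strategy is weakly dominated
    by some centralized strategy."""
    return all(any(weakly_dominates(pay, c, d) for c in centralized)
               for d in decentralized)

# Hypothetical payoff table: pay[strategy] = one payoff per opponent profile.
example = {'c1': [2, 2], 'c2': [1, 3], 'd1': [1, 2], 'd2': [0, 3]}
```

Here 'd1' is dominated by 'c1' and 'd2' by 'c2', so D-IR holds; adding a decentralized strategy that sometimes beats every centralized one would break it.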
Benjamin N. Roth, Ran I. Shorrer. "Making it Safe to Use Centralized Markets: Epsilon-Dominant Individual Rationality and Applications to Market Design." In Proceedings of the 2017 ACM Conference on Economics and Computation, June 20, 2017. DOI: 10.1145/3033274.3085139
Arpit Agarwal, Debmalya Mandal, D. Parkes, Nisarg Shah
Peer prediction mechanisms incentivize agents to truthfully report their signals, in the absence of a verification mechanism, by comparing their reports with those of their peers. Prior work in this area is essentially restricted to the case of homogeneous agents, whose signal distributions are identical. This is limiting in many domains, where we would expect agents to differ in taste, judgment and reliability. Although the Correlated Agreement (CA) mechanism [30] can be extended to handle heterogeneous agents, the new challenge lies in efficiently estimating agents' signal types. We solve this problem by clustering agents based on their reporting behavior, proposing a mechanism that works with clusters of agents and designing algorithms that learn such a clustering. In this way, we also connect peer prediction with the Dawid and Skene [5] literature on latent types. We retain the robustness against coordinated misreports of the CA mechanism, achieving an approximate incentive guarantee of ε-informed truthfulness. We show on real data that this incentive approximation is reasonable in practice, even with a small number of clusters.
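A minimal sketch of clustering agents by reporting behavior (a hypothetical greedy rule based on agreement rates on shared tasks, not the paper's learning algorithm):

```python
def cluster_by_reports(reports, tol):
    """Greedy clustering of agents by reporting behavior: an agent joins
    the first cluster whose representative agrees with her on at least a
    (1 - tol) fraction of shared tasks, else starts a new cluster.
    reports: dict agent -> list of binary reports on common tasks."""
    clusters = []
    for agent, vec in reports.items():
        for cl in clusters:
            rep = reports[cl[0]]        # cluster representative's reports
            agree = sum(a == b for a, b in zip(vec, rep)) / len(vec)
            if agree >= 1 - tol:
                cl.append(agent)
                break
        else:
            clusters.append([agent])
    return clusters
```

A peer prediction mechanism could then score each agent only against peers in the same cluster, whose signal distributions are (approximately) identical.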
"Peer Prediction with Heterogeneous Users." In Proceedings of the 2017 ACM Conference on Economics and Computation, June 20, 2017. DOI: 10.1145/3033274.3085127
I. Ashlagi, M. Braverman, Yashodhan Kanoria, Peng Shi
We study how much communication is needed to find a stable matching in a two-sided matching market with private preferences. Segal (2007) and Gonczarowski et al. (2015) showed that in the worst case, any protocol that computes a stable matching requires the communication cost per agent to scale linearly in the total number of agents. In real-world markets with many agents, this communication requirement is implausibly high. This casts doubt on whether stable matching can arise in large markets. We study markets with realistic structure on the preferences and information of agents, and show that in "typical" markets, a stable matching can be found with much less communication effort. In our model, the preferences of workers are unrestricted, and the preferences of firms follow an additively separable latent utility model. Our efficient communication protocol modifies worker-proposing deferred acceptance (DA) by having firms signal workers they especially like, while also broadcasting qualification requirements to discourage workers who have no realistic chance from applying. In the special case of tiered random markets, the protocol can be modified to run in two rounds and involve only private messages. Our protocols have good incentive properties and give insights on how to mediate large matching markets to reduce congestion.
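As a baseline for the protocol described above, here is the unmodified worker-proposing deferred acceptance algorithm (a standard sketch assuming one position per firm and complete preference lists):

```python
def worker_proposing_da(worker_prefs, firm_prefs):
    """Worker-proposing deferred acceptance (Gale-Shapley).
    worker_prefs[w]: firms in order of w's preference;
    firm_prefs[f]: workers in order of f's preference.
    Returns a stable matching as a dict firm -> worker."""
    rank = {f: {w: i for i, w in enumerate(ws)} for f, ws in firm_prefs.items()}
    next_prop = {w: 0 for w in worker_prefs}  # next firm each worker proposes to
    match = {}                                # firm -> tentatively held worker
    free = list(worker_prefs)
    while free:
        w = free.pop()
        if next_prop[w] >= len(worker_prefs[w]):
            continue                          # w has exhausted her list
        f = worker_prefs[w][next_prop[w]]
        next_prop[w] += 1
        cur = match.get(f)
        if cur is None:
            match[f] = w                      # f tentatively accepts
        elif rank[f][w] < rank[f][cur]:
            match[f] = w                      # f upgrades to w
            free.append(cur)                  # displaced worker proposes again
        else:
            free.append(w)                    # f rejects w
    return match
```

The paper's protocol reduces communication by pruning these proposals: firms' signals and broadcast qualification bars prevent most workers from ever proposing to firms that would reject them.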
"Communication Requirements and Informative Signaling in Matching Markets." In Proceedings of the 2017 ACM Conference on Economics and Computation, June 20, 2017. DOI: 10.1145/3033274.3084093
We revisit the classic problem of designing voting rules that aggregate objective opinions, in a setting where voters have noisy estimates of a true ranking of the alternatives. Previous work has replaced structural assumptions on the noise with a worst-case approach that aims to choose an outcome that minimizes the maximum error with respect to any feasible true ranking. This approach underlies algorithms that have recently been deployed on the social choice website RoboVote.org. We take a less conservative viewpoint by minimizing the average error with respect to the set of feasible ground truth rankings. We derive (mostly sharp) analytical bounds on the expected error and establish the practical benefits of our approach through experiments.
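The contrast between the worst-case approach of prior work and the paper's average-error objective can be made concrete with a brute-force sketch over small instances (the function names and the use of Kendall tau distance as the error measure are illustrative assumptions, not the paper's exact formulation):

```python
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of pairwise disagreements between two rankings of the same items."""
    pos = {a: i for i, a in enumerate(r2)}
    return sum(1 for i in range(len(r1)) for j in range(i + 1, len(r1))
               if pos[r1[i]] > pos[r1[j]])

def pick_outcome(feasible, alternatives, objective):
    """Choose an output ranking minimizing the given error objective over the
    set of feasible ground-truth rankings (brute force; small instances only)."""
    candidates = list(permutations(alternatives))
    if objective == "minimax":    # worst-case error w.r.t. any feasible truth
        score = lambda r: max(kendall_tau(r, t) for t in feasible)
    else:                         # average error over the feasible set
        score = lambda r: sum(kendall_tau(r, t) for t in feasible) / len(feasible)
    return min(candidates, key=score)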
{"title":"Making Right Decisions Based on Wrong Opinions","authors":"Gerdus Benade, Anson Kahng, A. Procaccia","doi":"10.1145/3033274.3085108","DOIUrl":"https://doi.org/10.1145/3033274.3085108","url":null,"abstract":"We revisit the classic problem of designing voting rules that aggregate objective opinions, in a setting where voters have noisy estimates of a true ranking of the alternatives. Previous work has replaced structural assumptions on the noise with a worst-case approach that aims to choose an outcome that minimizes the maximum error with respect to any feasible true ranking. This approach underlies algorithms that have recently been deployed on the social choice website RoboVote.org. We take a less conservative viewpoint by minimizing the average error with respect to the set of feasible ground truth rankings. We derive (mostly sharp) analytical bounds on the expected error and establish the practical benefits of our approach through experiments.","PeriodicalId":287551,"journal":{"name":"Proceedings of the 2017 ACM Conference on Economics and Computation","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115888542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In many-to-one matching with contracts, agents on one side of the market, e.g., workers, can fulfill at most one contract, while agents on the other side of the market, e.g., firms, may desire multiple contracts. Hatfield and Milgrom [6] showed that when firms' preferences are substitutable and size monotonic, the worker-proposing cumulative offer mechanism is stable and strategy-proof (for workers). Recently, stable and strategy-proof matching has been shown to be possible in a number of real-world settings in which preferences are not necessarily substitutable (see, e.g., Sönmez and Switzer [13], Sönmez [12], Kamada and Kojima [7], and Aygün and Turhan [1]); this has motivated a search for weakened substitutability conditions that guarantee the existence of stable and strategy-proof mechanisms. Hatfield and Kojima [3] introduced unilateral substitutability and showed that when all firms' preferences are unilaterally substitutable (and size monotonic), the cumulative offer mechanism is stable and strategy-proof. Kominers and Sönmez [9] identified a novel class of preferences, called slot-specific priorities, and showed that if each firm's preferences are in this class, then the cumulative offer mechanism is again stable and strategy-proof. Subsequently, Hatfield and Kominers [4] developed a concept of substitutable completion and showed that when each firm's preferences admit a size monotonic substitutable completion, the cumulative offer mechanism is once more stable and strategy-proof. In this paper, we introduce three novel conditions---observable substitutability, observable size monotonicity, and non-manipulability via contractual terms---and show that when these conditions are satisfied, the cumulative offer mechanism is the unique mechanism that is stable and strategy-proof.
Moreover, when the choice function of any firm fails one of our three conditions, we can construct unit-demand choice functions for the other firms such that no stable and strategy-proof mechanism exists. Our results give the first characterization of sufficient and necessary conditions for the guaranteed existence of stable and strategy-proof mechanisms for many-to-one matching with contracts. Our conditions are strictly weaker than the previously known sufficient conditions for the existence of stable and strategy-proof mechanisms; this enables new applications, as well as a new interpretation of prior models of matching with distributional constraints (Hatfield et al. [5]; see also Kamada and Kojima [7,8]). Additionally, our work gives a foundation for the use of cumulative offer mechanisms in many-to-one matching markets with contracts: Whenever a stable and strategy-proof matching mechanism exists, either it must coincide with a cumulative offer mechanism, or its stability and/or strategy-proofness depends crucially on some specific interdependence of preferences across hospitals that rules out certain unit-demand choice functions.
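The cumulative offer process at the center of this line of work differs from plain deferred acceptance in that firms never discard offers: each firm chooses from the pool of all offers it has ever received. A simplified sketch, collapsing contracts to worker-firm pairs and using hypothetical unit-demand choice functions (the real mechanism operates on general contracts and choice functions), is:

```python
def cumulative_offer(worker_prefs, choice):
    """Worker-proposing cumulative offer process (simplified).

    worker_prefs: dict worker -> ordered list of firms (one contract per pair)
    choice:       dict firm -> function mapping a set of offering workers to
                  the chosen subset (the firm's choice function)
    Offers accumulate: each firm chooses from all offers ever received.
    """
    pool = {f: set() for f in choice}           # cumulative offer pools
    next_offer = {w: 0 for w in worker_prefs}
    while True:
        chosen = {w for f in choice for w in choice[f](pool[f])}
        # every unchosen worker with offers remaining makes their next offer
        movers = [w for w in worker_prefs
                  if w not in chosen and next_offer[w] < len(worker_prefs[w])]
        if not movers:
            return {f: choice[f](pool[f]) for f in choice}
        for w in movers:
            f = worker_prefs[w][next_offer[w]]
            next_offer[w] += 1
            pool[f].add(w)

def unit_choice(priority):
    """Hypothetical unit-demand choice function: keep the single best offer."""
    def c(pool):
        ranked = [w for w in priority if w in pool]
        return set(ranked[:1])
    return c
```

Because choices are made from cumulative pools, a firm can "recall" a previously unchosen offer once its pool changes; it is exactly this feature that lets the mechanism behave well under the weakened substitutability conditions the paper characterizes.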
{"title":"Stability, Strategy-Proofness, and Cumulative Offer Mechanisms","authors":"J. Hatfield, S. Kominers, Alexander Westkamp","doi":"10.2139/ssrn.3120463","DOIUrl":"https://doi.org/10.2139/ssrn.3120463","url":null,"abstract":"In many-to-one matching with contracts, agents on one side of the market, e.g., workers, can fulfill at most one contract, while agents on the other side of the market, e.g., firms, may desire multiple contracts. Hatfield and Molgrom [6] showed that when firms' preferences are substitutable and size monotonic, the worker-proposing cumulative offer mechanism is stable and strategy-proof (for workers). Recently, stable and strategy-proof matching has been shown to be possible in a number of real-world settings in which preferences are not necessarily substitutable (see, e.g., Sönmez ans Switzer, [13], Sönmez [12] Kamada and Kojima [7], and Aygün and Turhan [1]; this has motivated a search for weakened substitutability conditions that guarantee the existence of stable and strategy-proof mechanisms. Hatfield and Kojima [3] introduced unilateral substitutability and showed that when all firms' preferences are unilaterally substitutable (and size monotonic), the cumulative offer mechanism is stable and strategy-proof. Kominers and Sönmez [9] identified a novel class of preferences, called slot-specific priorities, and showed that if each firm's preferences are in this class, then the cumulative offer mechanism is again stable and strategy-proof. Subsequently, Hatfield and Kominers [4] developed a concept of substitutable completion and showed that when each firm's preferences admit a size monotonic substitutable completion, the cumulative offer mechanism is once more stable and strategy-proof. 
In this paper, we introduce three novel conditions---observable substitutability, observable size monotonicity, and non-manipulability via contractual terms---and show that when these conditions are satisfied, the cumulative offer mechanism is the unique mechanism that is stable and strategy-proof. Moreover, when the choice function of any firm fails one of our three conditions, we can construct unit-demand choice functions for the other firms such that no stable and strategy-proof mechanism exists. Our results give the first characterization of sufficient and necessary conditions for the guaranteed existence of stable and strategy-proof mechanisms for many-to-one matching with contracts. Our conditions are strictly weaker than the previously known sufficient conditions for the existence of stable and strategy-proof mechanisms; this enables new applications, as well as a new interpretation of prior models of matching with distributional constraints (Hatfield et al. [5]; see also Kamada and Kojima [7,8]). 
Additionally, our work gives a foundation for the use of cumulative offer mechanisms in many-to-one matching markets with contracts: Whenever a stable and strategy-proof matching mechanism exists, either it must coincide with a cumulative offer mechanism, or its stability and/or strategy-proofness depends crucially on some specific interdependence of preferences across hospitals that rules out certain unit-demand choice functions.","PeriodicalId":287551,"journal":{"name":"Proceedings of the 2017 ACM Conference on Economics and Computation","volume":"44 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126070258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When we test a theory using data, it is common to focus on correctness: do the predictions of the theory match what we see in the data? But we also care about completeness: how much of the predictable variation in the data is captured by the theory? This question is difficult to answer, because in general we do not know how much "predictable variation" there is in the problem. In this paper, we consider approaches motivated by machine learning algorithms as a means of constructing a benchmark for the best attainable level of prediction. We illustrate our methods on the task of predicting human-generated random sequences. Relative to an atheoretical machine learning algorithm benchmark, we find that existing behavioral models explain roughly 10 to 12 percent of the predictable variation in this problem. This fraction is robust across several variations on the problem. We also consider a version of this approach for analyzing field data from domains in which human perception and generation of randomness have been used as a conceptual framework; these include sequential decision-making and repeated zero-sum games. In these domains, our framework for testing the completeness of theories suggests that existing theoretical models may be more complete in their predictions for some domains than for others, suggesting that our methods can offer a comparative perspective across settings. Overall, our results indicate that (i) there is a significant amount of structure in this problem that existing models have yet to capture and (ii) there are rich domains in which machine learning may provide a viable approach to testing completeness.
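One natural way to formalize the "fraction of predictable variation explained" is as the theory's improvement over a naive baseline, normalized by the improvement achieved by the ML benchmark; the function below is a hypothetical sketch of such a measure, not the paper's exact definition:

```python
def completeness(naive_error, theory_error, ml_benchmark_error):
    """Fraction of the attainable improvement over a naive baseline that a
    theory captures, where an ML benchmark approximates the best attainable
    prediction error (a hypothetical formalization of the completeness idea).
    """
    attainable = naive_error - ml_benchmark_error   # total predictable variation
    if attainable <= 0:
        raise ValueError("ML benchmark must improve on the naive baseline")
    return (naive_error - theory_error) / attainable
```

Under this reading, a theory that barely beats the naive baseline scores near 0 even if its predictions are individually accurate, while a theory matching the ML benchmark scores 1; the paper's 10-12 percent figure corresponds to behavioral models capturing only a small slice of what the benchmark shows to be predictable.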
{"title":"The Theory is Predictive, but is it Complete?: An Application to Human Perception of Randomness","authors":"J. Kleinberg, Annie Liang, S. Mullainathan","doi":"10.1145/3033274.3084094","DOIUrl":"https://doi.org/10.1145/3033274.3084094","url":null,"abstract":"When we test a theory using data, it is common to focus on correctness: do the predictions of the theory match what we see in the data? But we also care about completeness: how much of the predictable variation in the data is captured by the theory? This question is difficult to answer, because in general we do not know how much \"predictable variation\" there is in the problem. In this paper, we consider approaches motivated by machine learning algorithms as a means of constructing a benchmark for the best attainable level of prediction. We illustrate our methods on the task of prediction of human-generated random sequences. Relative to an atheoretical machine learning algorithm benchmark, we find that existing behavioral models explain roughly 10 to 12 percent of the predictable variation in this problem. This fraction is robust across several variations on the problem. We also consider a version of this approach for analyzing field data from domains in which human perception and generation of randomness has been used as a conceptual framework; these include sequential decision-making and repeated zero-sum games. In these domains, our framework for testing the completeness of theories suggest that existing theoretical models may be more complete in their predictions for some domains than for others, suggesting that our methods can offer a comparative perspective across settings. 
Overall, our results indicate that (i) there is a significant amount of structure in this problem that existing models have yet to capture and (ii) there are rich domains in which machine learning may provide a viable approach to testing completeness.","PeriodicalId":287551,"journal":{"name":"Proceedings of the 2017 ACM Conference on Economics and Computation","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122142117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}