The design of content recommendation systems underpins many online platforms: social media feeds, online news aggregators, and audio/video hosting websites all choose how best to organize an enormous amount of content for users to consume. Many projects (both practical and academic) have designed algorithms to match users to content they will enjoy under the assumption that users' preferences and opinions do not change with the content they see. However, a growing body of evidence suggests that individuals' preferences are directly shaped by what content they see---radicalization, rabbit holes, polarization, and boredom are all examples of preferences being affected by content. Polarization in particular can occur even in ecosystems with "mass media," where no personalization takes place, as recently explored in a natural model of preference dynamics by [14] and [13]. If all users' preferences are drawn towards content they already like, or are repelled from content they already dislike, uniform consumption of media leads to a population of heterogeneous preferences converging towards only two poles. In this work, we explore whether some phenomenon akin to polarization occurs when users receive personalized content recommendations. We use a similar model of preference dynamics, where an individual's preferences move towards content they consume and enjoy, and away from content they consume and dislike. We show that standard user reward maximization is an almost trivial goal in such an environment (a large class of simple algorithms will achieve only constant regret). A more interesting objective, then, is to understand under what conditions a recommendation algorithm can ensure stationarity of users' preferences. We show how to design a content recommendation policy which can achieve approximate stationarity, under mild conditions on the set of available content, when a user's preferences are known, and how one can learn enough about a user's preferences to implement such a strategy even when user preferences are initially unknown.
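As a rough illustration of this style of preference dynamics (a sketch in the spirit of the model, not the paper's exact formulation), the simulation below moves unit-norm preference vectors toward content with positive affinity and away from content with negative affinity. Under uniform "mass media" exposure, heterogeneous users collapse onto two opposite poles. The step size, dimension, and number of rounds are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def update_preference(u, x, eta=0.1):
    """Move the unit preference vector u toward content x when liked
    (positive inner product) and away from x when disliked, then
    renormalize. eta is an assumed step size."""
    u = u + eta * np.sign(u @ x) * x
    return u / np.linalg.norm(u)

# Heterogeneous users who all consume the same "mass media" item.
users = [rng.standard_normal(2) for _ in range(5)]
users = [u / np.linalg.norm(u) for u in users]
media = np.array([1.0, 0.0])

for _ in range(200):
    users = [update_preference(u, media) for u in users]
# Preferences polarize: every user ends up near +media or -media.
```

Running this, each user's final preference vector sits close to one of the two poles ±media, mirroring the two-pole convergence described above.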
— Sarah Dean and Jamie Morgenstern, "Preference Dynamics Under Personalized Recommendations," Proceedings of the 23rd ACM Conference on Economics and Computation, 2022-05-25. DOI: 10.1145/3490486.3538346
This paper introduces a unified framework for stable matching, which nests the traditional definition of stable matching in finite markets and the continuum definition of stable matching from Azevedo and Leshno (2016) as special cases. Within this framework, I identify a novel continuum model, which makes individual-level probabilistic predictions. This new model always has a unique stable outcome, which can be found using an analog of the Deferred Acceptance algorithm. The crucial difference between this model and that of Azevedo and Leshno (2016) is that they assume that the amount of student interest at each school is deterministic, whereas my proposed alternative assumes that it follows a Poisson distribution. As a result, this new model accurately predicts the simulated distribution of cutoffs, even for markets with only ten schools and twenty students. This model generates new insights about the number and quality of matches. When schools are homogeneous, it provides upper and lower bounds on students' average rank, which match results from Ashlagi, Kanoria and Leshno (2017) but apply to more general settings. This model also provides clean analytical expressions for the number of matches in a platform pricing setting considered by Marx and Schummer (2021).
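The paper's continuum analog of Deferred Acceptance is not spelled out in the abstract; as background, here is a minimal finite-market, student-proposing Deferred Acceptance sketch. All student and school names, preferences, and capacities are hypothetical.

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing Deferred Acceptance in a finite market.
    student_prefs[s] = list of schools in s's preference order
    school_prefs[c]  = dict mapping student -> rank (lower is better)
    capacities[c]    = number of seats at school c
    Returns a dict school -> list of tentatively admitted students."""
    next_choice = {s: 0 for s in student_prefs}
    held = {c: [] for c in school_prefs}
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue  # student s has exhausted their list; stays unmatched
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        # keep only the best `capacities[c]` tentative admits
        held[c].sort(key=lambda t: school_prefs[c][t])
        if len(held[c]) > capacities[c]:
            free.append(held[c].pop())
    return held

match = deferred_acceptance(
    {"ann": ["X", "Y"], "bob": ["X", "Y"], "cal": ["X", "Y"]},
    {"X": {"ann": 0, "bob": 1, "cal": 2}, "Y": {"ann": 0, "bob": 1, "cal": 2}},
    {"X": 1, "Y": 1},
)
```

With one seat per school, the school-side favorite "ann" is matched to X, "bob" to Y, and "cal" goes unmatched. The continuum model replaces these deterministic seat counts with Poisson-distributed student interest.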
— N. Arnosti, "A Continuum Model of Stable Matching with Finite Capacities," Proceedings of the 23rd ACM Conference on Economics and Computation, 2022-05-25. DOI: 10.1145/3490486.3538230
We consider the problem of aggregating individual preferences over alternatives into a social ranking. A key feature of the problems that we consider---and the one that allows us to obtain positive results, in contrast to negative results such as Arrow's Impossibility Theorem---is that the alternatives to be ranked are outcomes of a competitive process. Examples include rankings of colleges or academic journals. The foundation of our ranking method is that alternatives that an agent desires---those that they have been rejected by---should be ranked higher than the one they receive. We provide a mechanism to produce a social ranking given any preference profile and outcome assignment, and characterize this ranking as the unique one that satisfies certain desirable axioms. A full version of this paper can be found at: https://arxiv.org/abs/2205.11684.
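The core ordering idea---every alternative that rejected an agent should be ranked above the one that agent received---can be sketched as a topological sort of those pairwise constraints. This is only an illustration under the assumption that the induced constraints are acyclic; the paper's actual mechanism is characterized axiomatically and may differ. The input names are hypothetical.

```python
from graphlib import TopologicalSorter

def desirability_ranking(received, rejected_by):
    """Rank alternatives so that, for every agent, each alternative
    that rejected the agent is ranked above the one the agent
    received. Sketch only: assumes the constraints are acyclic."""
    ts = TopologicalSorter()
    for agent, got in received.items():
        ts.add(got)  # make sure every received alternative is a node
    for agent, got in received.items():
        for better in rejected_by.get(agent, ()):
            ts.add(got, better)  # `better` must precede `got`
    return list(ts.static_order())

# Hypothetical profile: ann was rejected by A and got B;
# bob was rejected by A and B and got C.
ranking = desirability_ranking(
    {"ann": "B", "bob": "C"},
    {"ann": ["A"], "bob": ["A", "B"]},
)
```

Here the constraints A > B, A > C, and B > C pin down the ranking A, B, C.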
— T. Morrill and Peter Troyan, "Desirable Rankings: A New Method for Ranking Outcomes of a Competitive Process," Proceedings of the 23rd ACM Conference on Economics and Computation, 2022-05-24. DOI: 10.1145/3490486.3538272
To better understand discrimination and the effects of affirmative action in selection problems (e.g., college admission or hiring), a recent line of research proposed a model based on differential variance. This model assumes that the decision-maker has a noisy estimate of each candidate's quality and puts forward the difference in noise variances between demographic groups as a key factor explaining discrimination. The literature on differential variance, however, does not consider the strategic behavior of candidates, who can react to the selection procedure to improve their outcome, as is well known to happen in many domains. In this paper, we study how this strategic aspect affects fairness in selection problems. We model selection problems with strategic candidates as a contest game: a population of rational candidates compete by choosing an effort level to increase their quality. They incur a cost of effort but obtain a (random) quality whose expectation equals the chosen effort. A Bayesian decision-maker observes a noisy estimate of the quality of each candidate (with differential variance) and selects the fraction α of best candidates based on their posterior expected quality; each selected candidate receives a reward S. We characterize the (unique) equilibrium of this game in the different parameter regimes, both when the decision-maker is unconstrained and when they are constrained to respect the fairness notion of demographic parity. Our results reveal important impacts of strategic behavior on the discrimination observed at equilibrium and allow us to understand the effect of imposing demographic parity in this context. In particular, we find that, in many cases, the results contrast with the non-strategic setting. We also find that, when the cost of effort depends on the demographic group (which is reasonable in many cases), it entirely governs the observed discrimination (i.e., the noise becomes a second-order effect with no impact on discrimination). Finally, we find that imposing demographic parity can sometimes increase the quality of the selection at equilibrium, which surprisingly contrasts with the optimality of the Bayesian decision-maker in the non-strategic case. Our results give a new perspective on fairness in selection problems, relevant in many domains where strategic behavior is a reality.
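A small Monte Carlo sketch of the non-strategic baseline (all parameters are illustrative assumptions, not the paper's) shows how differential variance alone skews selection: with a common Gaussian quality prior, the Bayesian decision-maker shrinks the noisier group's estimates harder toward the prior mean, so that group clears the top-α cutoff less often and demographic parity fails.

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 100_000, 0.1
# Assumed parameters: same quality prior N(0, 1) for both groups,
# but group B's quality estimate is noisier than group A's.
sigma = {"A": 0.5, "B": 2.0}

group = rng.choice(["A", "B"], size=n)
quality = rng.standard_normal(n)
noise_sd = np.where(group == "A", sigma["A"], sigma["B"])
estimate = quality + noise_sd * rng.standard_normal(n)

# Posterior mean of quality given the noisy estimate (Gaussian model):
# shrink the estimate toward the prior mean, more for the noisier group.
posterior = estimate / (1.0 + noise_sd**2)

threshold = np.quantile(posterior, 1 - alpha)
selected = posterior >= threshold
rate = {g: float(selected[group == g].mean()) for g in ("A", "B")}
# The noisier group is selected at a lower rate: no demographic parity.
```

The strategic results above show that once candidates choose effort in response to this procedure, the picture can change qualitatively.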
— V. Emelianov, Nicolas Gast, and P. Loiseau, "Fairness in Selection Problems with Strategic Candidates," Proceedings of the 23rd ACM Conference on Economics and Computation, 2022-05-24. DOI: 10.1145/3490486.3538287
B. Chaudhury, J. Garg, Patricia C. McGlaughlin, R. Mehta
We study the computational complexity of finding a competitive equilibrium (CE) with chores when agents have linear preferences. CE is one of the most preferred mechanisms for allocating a set of items among agents. CE with equal incomes (CEEI), Fisher, and Arrow-Debreu (exchange) are the fundamental economic models for studying allocation problems, where CEEI is a special case of Fisher and Fisher is a special case of exchange. When the items are goods (giving utility), the CE set is convex even in the exchange model, facilitating several combinatorial polynomial-time algorithms (starting with the seminal work of Devanur, Papadimitriou, Saberi and Vazirani [DPSV08]) for all of these models. In sharp contrast, when the items are chores (giving disutility), the CE set is known to be non-convex and disconnected even in the CEEI model. Further, no combinatorial algorithms or hardness results are known for these models. In this paper, we give two main results for CE with chores. To the best of our knowledge, these results show the first separation between the CEEI and exchange models when agents have linear preferences, assuming PPAD ≠ P. Furthermore, this is also the first separation between the two economic models when the CE set is non-convex in both cases. Finally, we show that our new insight implies a straightforward proof of the existence of an allocation that is both envy-free up to one chore (EF1) and Pareto optimal (PO) when agents have factored bivalued preferences.
— B. Chaudhury, J. Garg, Patricia C. McGlaughlin, and R. Mehta, "Competitive Equilibrium with Chores: Combinatorial Algorithm and Hardness," Proceedings of the 23rd ACM Conference on Economics and Computation, 2022-05-23. DOI: 10.1145/3490486.3538255
Certain important classes of strategic-form games, including zero-sum and identical-interest games, have the fictitious-play property (FPP), i.e., beliefs formed in fictitious-play dynamics always converge to a Nash equilibrium (NE) in the repeated play of these games. Such convergence results are seen as a (behavioral) justification for game-theoretic equilibrium analysis. Markov games (MGs), also known as stochastic games, generalize the repeated play of strategic-form games to dynamic multi-state settings with Markovian state transitions. In particular, MGs are standard models for multi-agent reinforcement learning---a resurgent research area in learning and games---and their game-theoretic equilibrium analyses have also been conducted extensively. However, whether certain classes of MGs have the FPP or not (i.e., whether there is a behavioral justification for equilibrium analysis or not) remains largely elusive. In this paper, we study a new variant of fictitious-play dynamics for MGs and show its convergence to an NE in n-player identical-interest MGs in which a single player controls the state transitions. Such games are of interest in communications, control, and economics applications.
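For intuition about the FPP in the strategic-form (single-state) case, classical fictitious play in matching pennies---a textbook zero-sum example, not anything from this paper---already illustrates the property: each player best-responds to the opponent's empirical action frequencies, and those frequencies converge to the unique mixed NE (1/2, 1/2).

```python
import numpy as np

# Row player's payoff matrix for matching pennies (zero-sum):
# +1 on a match, -1 otherwise; the column player gets the negation.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])

counts = [np.ones(2), np.ones(2)]  # action counts = empirical beliefs

for _ in range(20_000):
    beliefs = [c / c.sum() for c in counts]
    # Each player best-responds to the opponent's empirical mixture.
    row = int(np.argmax(A @ beliefs[1]))
    col = int(np.argmax(-A.T @ beliefs[0]))
    counts[0][row] += 1
    counts[1][col] += 1

empirical = [c / c.sum() for c in counts]
# Both empirical distributions approach the unique NE (1/2, 1/2).
```

The paper's contribution is the much harder multi-state analog: a fictitious-play variant whose beliefs converge in identical-interest MGs with a single controller.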
— M. O. Sayin, K. Zhang, and A. Ozdaglar, "Fictitious Play in Markov Games with Single Controller," Proceedings of the 23rd ACM Conference on Economics and Computation, 2022-05-23. DOI: 10.1145/3490486.3538289
Deniz Kattwinkel, Axel Niemeyer, Justus Preusser, Alexander Winter
A principal must decide between two options. Which one she prefers depends on the private information of two agents. One agent always prefers the first option; the other always prefers the second. Transfers are infeasible. One application of this setting is the efficient division of a fixed budget between two competing departments. We first characterize all implementable mechanisms under arbitrary correlation. Second, we study when there exists a mechanism that yields the principal a higher payoff than she could receive by choosing the ex-ante optimal decision without consulting the agents. In the budget example, such a profitable mechanism exists if and only if the information of one department is also relevant for the expected returns of the other department. We generalize this insight to derive necessary and sufficient conditions for the existence of a profitable mechanism in the n-agent allocation problem with independent types.
— Deniz Kattwinkel, Axel Niemeyer, Justus Preusser, and Alexander Winter, "Mechanisms without Transfers for Fully Biased Agents," Proceedings of the 23rd ACM Conference on Economics and Computation, 2022-05-22. DOI: 10.1145/3490486.3538317
We consider a model with k identical tickets. The set of agents (N) is partitioned into a set of groups, and agents have dichotomous preferences: an agent is successful if and only if members of her group receive enough tickets for everyone in the group. We treat the group structure as private information, unknown to the designer. Because there are only k tickets, there can be at most k successful agents. We define the efficiency of a lottery allocation to be the expected number of successful agents, divided by k. If this is at least β, then the allocation is β-efficient. A lottery allocation is fair if each agent has the same success probability, and β-fair if for any pair of agents, the ratio of their success probabilities is at least β. Given these definitions, we seek lottery allocations that are both approximately efficient and approximately fair. Although this may be unattainable if groups are large, in many cases group sizes are much smaller than the total number of tickets. We define a family of instances characterized by two parameters, κ and α. The parameter κ bounds the ratio of group size to total number of tickets, while α bounds the supply-demand ratio. For any κ and α, we provide worst-case performance guarantees in terms of efficiency and fairness. We first consider a scenario where applicants can identify each member of their group. Here, the mechanism typically used is the Group Lottery, which orders groups uniformly at random and processes them sequentially. We show that this mechanism incentivizes agents to truthfully report their groups. Moreover, we prove that the Group Lottery is (1-κ)-efficient and (1-2κ)-fair. It is not perfectly efficient, as tickets might be wasted if the size of the group being processed exceeds the number of remaining tickets. It is not perfectly fair, since once only a few tickets remain, a large group can no longer be successful, but a small group can. Furthermore, we show that these guarantees are tight.

Could there be a mechanism with stronger performance guarantees than the Group Lottery? We answer this question by establishing the limits of what can be achieved. Specifically, there always exists an allocation (π) that is (1-κ)-efficient and fair, but for any ε > 0, there are examples where any allocation that is (1-κ+ε)-efficient is not even ε-fair. To show the existence of the random allocation (π), we use a generalization of the Birkhoff-von Neumann theorem proved by [1]. By awarding groups according to the allocation (π), we can obtain a mechanism that attains the best possible performance guarantees. Therefore, the 2κ loss in fairness in the Group Lottery can be thought of as the "cost" of using a simple procedure that orders groups uniformly, rather than employing a Birkhoff-von Neumann decomposition to generate the allocation (π). In many applications, developing an interface that allows applicants to list their group members may be too cumbersome. This motivates the study of a secon
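The Group Lottery described above is simple enough to sketch directly. In this minimal version (hypothetical group names and sizes; the seeded shuffle stands in for the uniformly random order), a group processed when too few tickets remain receives a partial, wasted award and is unsuccessful, which is exactly the source of the efficiency loss noted above.

```python
import random

def group_lottery(groups, k, seed=None):
    """Group Lottery: order groups uniformly at random, process them
    sequentially, and award each group its full demand while tickets
    last. `groups` maps group id -> group size; returns the list of
    successful groups."""
    rng = random.Random(seed)
    order = list(groups)
    rng.shuffle(order)
    remaining, successful = k, []
    for g in order:
        if remaining == 0:
            break
        awarded = min(groups[g], remaining)
        remaining -= awarded
        if awarded == groups[g]:
            successful.append(g)
        # A partial award wastes tickets: the group is unsuccessful,
        # giving up to (group size - 1) wasted tickets per run.
    return successful

# Hypothetical instance: 10 tickets, total demand 14.
demo = {"g1": 4, "g2": 3, "g3": 5, "g4": 2}
winners = group_lottery(demo, k=10, seed=7)
```

Since the largest group (5) fits within k = 10, the first group drawn is always successful, and the successful groups' sizes never exceed the ticket supply.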
{"title":"Lotteries for Shared Experiences","authors":"N. Arnosti, Carlos Bonet","doi":"10.1145/3490486.3538312","DOIUrl":"https://doi.org/10.1145/3490486.3538312","url":null,"abstract":"We consider a model with k identical tickets. The set of agents (N) is partitioned into a set of groups, and agents have dichotomous preferences: an agent is successful if and only if members of her group receive enough tickets for everyone in the group. We treat the group structure as private information, unknown to the designer. Because there are only k tickets, there can be at most k successful agents. We define the efficiency of a lottery allocation to be the expected number of successful agents, divided by k. If this is at least β, then the allocation is β-efficient. A lottery allocation is fair if each agent has the same success probability, and β-fair if for any pair of agents, the ratio of their success probabilities is at least β. Given these definitions, we seek lottery allocations that are both approximately efficient and approximately fair. Although this may be unattainable if groups are large, in many cases group sizes are much smaller than the total number of tickets. We define a family of instances characterized by two parameters, κ and α. The parameter κ bounds the ratio of group size to total number of tickets, while α bounds the supply-demand ratio. For any κ and α, we provide worst-case performance guarantees in terms of efficiency and fairness. We first consider a scenario where applicants can identify each member of their group. Here, the mechanism typically used is the Group Lottery, which orders groups uniformly at random and processes them sequentially. We show that this mechanism incentivizes agents to truthfully report their groups. Moreover, we prove that the Group Lottery is (1 - κ)-efficient and (1-2κ)-fair. It is not perfectly efficient, as tickets might be wasted if the size of the group being processed exceeds the number of remaining tickets. 
It is not perfectly fair, since once only a few tickets remain, a large group can no longer be successful, but a small group can. Furthermore, we show that these guarantees are tight. Could there be a mechanism with stronger performance guarantees than the Group Lottery? We answer this question by establishing the limits of what can be achieved. Specifically, there always exists an allocation (π) that is (1-κ)-efficient and fair, but for any ε > 0, there are examples where any allocation that is (1- κ + ε)-efficient is not even ε-fair. To show the existence of the random allocation (π), we use a generalization of the Birkhoff-von Neumann theorem proved by [1]. By awarding groups according to the allocation (π), we can obtain a mechanism that attains the best possible performance guarantees. Therefore, the 2 κ loss in fairness in the Group Lottery can be thought of as the \"cost\" of using a simple procedure that orders groups uniformly, rather than employing a Birkhoff-von Neumann decomposition to generate the allocation (π). In many applications, developing an interface that allows applicants to list their group members may be too cumbersome. This motivates the study of a secon","PeriodicalId":209859,"journal":{"name":"Proceedings of the 23rd ACM Conference on Economics and Computation","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126259526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
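The Group Lottery itself is simple enough to sketch. Below is a minimal Python sketch, assuming groups are given as a list of sizes and that processing stops at the first group that does not fit (the abstract does not pin down whether processing stops or skips at that point, so treat that detail as an assumption):

```python
import random

def group_lottery(groups, k, rng=random):
    """Group Lottery sketch: order groups uniformly at random and process
    them sequentially, awarding each group its full demand while enough
    tickets remain. `groups` is a list of group sizes. Returns the list of
    successful group indices and the number of tickets used."""
    order = list(range(len(groups)))
    rng.shuffle(order)
    remaining = k
    successful = []
    for i in order:
        if groups[i] <= remaining:
            remaining -= groups[i]
            successful.append(i)
        else:
            # The group being processed exceeds the remaining tickets:
            # those tickets are wasted, which is why the mechanism is only
            # (1 - kappa)-efficient rather than perfectly efficient.
            break
    return successful, k - remaining

def efficiency(groups, k, trials=2000, rng=random):
    """Monte Carlo estimate of efficiency: expected number of successful
    agents divided by k."""
    total = 0
    for _ in range(trials):
        winners, _ = group_lottery(groups, k, rng)
        total += sum(groups[i] for i in winners)
    return total / (trials * k)
```

With all groups of size 1 the mechanism is perfectly efficient; wasted tickets appear only when a large group is processed near the end, matching the (1 − κ) guarantee.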
Individual Fairness in Prophet Inequalities
Makis Arsenis, Robert D. Kleinberg
Pub Date: 2022-05-20 | DOI: 10.48550/arXiv.2205.10302
Proceedings of the 23rd ACM Conference on Economics and Computation

Prophet inequalities are performance guarantees for online algorithms (a.k.a. stopping rules) solving the following "hiring problem": a decision maker sequentially inspects candidates whose values are independent random numbers and must hire at most one candidate, selecting it before inspecting the values of future candidates in the sequence. A classic result in optimal stopping theory asserts that there exist stopping rules guaranteeing that the decision maker will hire a candidate whose expected value is at least half as good as the expected value of the candidate hired by a "prophet," i.e., one who has simultaneous access to the realizations of all candidates' values. Such stopping rules may have provably good performance but might treat individual candidates unfairly in a number of different ways. In this work we identify two types of individual fairness that might be desirable in optimal stopping problems: identity-independent fairness (IIF) and time-independent fairness (TIF), which we define precisely in the context of the hiring problem. We give polynomial-time algorithms for finding the optimal IIF/TIF stopping rules for a given instance with discrete support, and we recover a prophet inequality with factor 1/2 when the decision maker's stopping rule is required to satisfy both fairness properties while the prophet is unconstrained. We also explore worst-case ratios between optimal selection rules in the presence vs. absence of individual fairness constraints, in both the online and offline settings. We prove an impossibility result showing that there is no prophet inequality with a non-zero factor for either IIF or TIF stopping rules when we further constrain the decision maker to hire with probability 1. Finally, we consider a setting in which the decision maker does not know the distributions of candidates' values but has access to a bounded number of independent samples from each distribution. We provide constant-competitive algorithms that satisfy both TIF and IIF, using one sample from each distribution in the offline setting and two samples from each distribution in the online setting. The full version of the paper: https://arxiv.org/abs/2205.10302v1
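For context, the factor-1/2 guarantee referenced above is classically achieved by a single-threshold stopping rule (Samuel-Cahn's median rule): hire the first candidate whose value meets the median of the distribution of the maximum. The Python sketch below illustrates that classic rule only, not the paper's IIF/TIF algorithms; the `samplers` interface and the Monte Carlo median estimate are assumptions made purely for illustration:

```python
import random

def median_of_max(samplers, trials=4001, rng=None):
    """Monte Carlo estimate of the median of max_i X_i, where each entry
    of `samplers` is a callable drawing one candidate value from that
    candidate's distribution. (Illustrative helper; an exact computation
    would use the discrete distributions directly.)"""
    rng = rng or random.Random(0)
    maxima = sorted(max(s(rng) for s in samplers) for _ in range(trials))
    return maxima[trials // 2]

def threshold_rule(values, tau):
    """Single-threshold stopping rule: hire the first candidate whose
    realized value is at least tau; hire no one otherwise (payoff 0).
    With tau set to the median of the maximum, the expected payoff is at
    least half the prophet's expected maximum (Samuel-Cahn, 1984)."""
    for v in values:
        if v >= tau:
            return v
    return 0.0
```

For example, with five i.i.d. Uniform[0,1] candidates one would pass `samplers = [lambda r: r.random() for _ in range(5)]` to `median_of_max` and feed the resulting threshold to `threshold_rule` on each realized sequence.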
Dynamic Pricing Provides Robust Equilibria in Stochastic Ride-Sharing Networks
J. M. Cashore, P. Frazier, É. Tardos
Pub Date: 2022-05-19 | DOI: 10.1145/3490486.3538277
Proceedings of the 23rd ACM Conference on Economics and Computation

Ridesharing markets are complex: drivers are strategic, rider demand and driver availability are stochastic, and city-scale phenomena like weather induce large-scale correlation across space and time. Past work, however, has focused on only a subset of these challenges. We propose a model of ridesharing networks with strategic drivers, spatiotemporal dynamics, and stochasticity. Supporting both computational tractability and greater modeling flexibility than classical fluid limits, we use a two-level stochastic model that allows correlated shocks caused by weather or large public events. Using this model, we propose a novel pricing mechanism: stochastic spatiotemporal pricing (SSP). We show that the SSP mechanism is asymptotically incentive-compatible and that all (approximate) equilibria of the resulting game are asymptotically welfare-maximizing when the market is large enough. The SSP mechanism iteratively recomputes prices based on realized demand and supply, and in this sense prices dynamically. We show that this is critical: while a static variant of the SSP mechanism (whose prices vary with the market-level stochastic scenario but not with individual rider and driver decisions) has a sequence of asymptotically welfare-optimal approximate equilibria, we demonstrate that it also has other equilibria producing extremely low social welfare. Thus, we argue that dynamic pricing is important for ensuring robustness in stochastic ride-sharing networks.
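The "iteratively recomputes prices based on realized demand and supply" step can be illustrated generically. The sketch below is a tatonnement-style stand-in under assumed inputs, not the paper's SSP mechanism, whose prices come from the two-level stochastic network model:

```python
def update_price(price, realized_demand, realized_supply, eta=0.1):
    """Illustrative dynamic-pricing step: nudge the price up when realized
    demand exceeds realized supply, and down otherwise. The relative
    imbalance is normalized so a single shock cannot move the price by
    more than a factor of (1 + eta). Generic sketch only, not SSP."""
    denom = max(realized_demand, realized_supply, 1)
    imbalance = (realized_demand - realized_supply) / denom
    return max(0.0, price * (1.0 + eta * imbalance))
```

Repeating such an update per region and time period responds to realized conditions; the paper's point is that this responsiveness, done correctly, is what rules out the low-welfare equilibria of static pricing.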