We consider a fundamental pricing model in which a fixed number of units of a reusable resource are used to serve customers. Customers arrive to the system according to a stochastic process and upon arrival decide whether or not to purchase the service, depending on their willingness-to-pay and the current price. The service time during which the resource is used by the customer is stochastic and the firm may incur a service cost. This model represents various markets for reusable resources such as cloud computing, shared vehicles, rotable parts, and hotel rooms. In the present paper, we analyze this pricing problem when the firm attempts to maximize a weighted combination of three central metrics: profit, market share, and service level. Under Poisson arrivals, exponential service times, and standard assumptions on the willingness-to-pay distribution, we establish a series of results that characterize the performance of static pricing in such environments. In particular, while an optimal policy is fully dynamic in such a context, we prove that a static pricing policy simultaneously guarantees 78.9% of the profit, market share, and service level from the optimal policy. Notably, this result holds for any service rate and number of units the firm operates. In the special case where there are two units and the induced demand is linear, we also prove that the static policy guarantees 95.5% of the profit from the optimal policy. Our numerical findings on a large testbed of instances suggest that the latter result is quite indicative of the profit obtained by the static pricing policy across all parameters.
Omar Besbes, Adam N. Elmachtoub, and Yunjie Sun. "Static Pricing: Universal Guarantees for Reusable Resources." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.1145/3328526.3329585
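Under the model's Poisson arrivals and exponential service times, the system under a static price is an M/M/c loss system, so the long-run profit of any static price can be evaluated in closed form via the Erlang-B blocking formula. A minimal sketch (the linear demand curve and all parameter values below are illustrative assumptions, not the paper's calibration):

```python
def erlang_b(c, a):
    """Blocking probability of an M/M/c/c loss system with offered load a,
    computed with the standard stable recursion."""
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

def profit_rate(price, c, base_rate=10.0, mu=1.0, cost=0.0):
    """Long-run profit per unit time under a static price.

    Demand thins linearly with price (a modeling assumption for this
    sketch): customers purchase at rate lam = base_rate * (1 - price),
    and a fraction erlang_b(c, lam/mu) of them find all units busy.
    """
    lam = base_rate * max(0.0, 1.0 - price)
    blocked = erlang_b(c, lam / mu)
    return (price - cost) * lam * (1.0 - blocked)

# Grid-search the best static price for a 5-unit system.
best = max(profit_rate(p / 100, c=5) for p in range(101))
```

Comparing `best` against a dynamic-programming solution of the corresponding continuous-time MDP is how one would probe the 78.9% guarantee numerically.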
The Keynesian Beauty Contest is a classical game in which strategic agents seek both to accurately guess the true state of the world and to guess the average action of all agents. We study an augmentation of this game where agents are concerned about revealing their private information and additionally suffer a loss based on how well an observer can infer their private signals. We solve for an equilibrium of this augmented game and quantify the loss of social welfare as a result of agents acting to obscure their private information, which we call the 'price of privacy'. We analyze two versions of this price: one from the perspective of the agents measuring their diminished ability to coordinate due to acting to obscure their information and another from the perspective of an aggregator whose statistical estimate of the true state of the world is of lower precision due to the agents adding random noise to their actions. We show that these quantities are high when agents care very strongly about protecting their personal information and low when the quality of the signals the agents receive is poor.
Hadi Elzayn and Zachary Schutzman. "Price of Privacy in the Keynesian Beauty Contest." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.1145/3328526.3329607
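The aggregator-side quantity can be made concrete in a stylized Gaussian sketch. Assume (our illustrative model, not the paper's exact equilibrium) that each agent observes s_i = theta + eps_i and plays a_i = s_i + eta_i, where eta_i is independent obfuscation noise; the aggregator estimates theta by the mean action:

```python
def aggregator_variance(n, signal_noise_var, privacy_noise_var):
    """Variance of the aggregator's estimate (the mean of n actions) when
    each action is the agent's signal plus independent privacy noise:
    var(mean(a)) = (var(eps) + var(eta)) / n."""
    return (signal_noise_var + privacy_noise_var) / n

def price_of_privacy(n, signal_noise_var, privacy_noise_var):
    """A simple aggregator-side 'price of privacy' (our stylized
    definition): extra estimator variance caused by obfuscation noise,
    relative to fully revealing actions."""
    return (aggregator_variance(n, signal_noise_var, privacy_noise_var)
            / aggregator_variance(n, signal_noise_var, 0.0))
```

In this sketch the price grows linearly in the obfuscation variance, matching the qualitative finding that the loss is large when agents obscure aggressively.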
Decision makers in health, public policy, technology, and social science are increasingly interested in going beyond 'one-size-fits-all' policies to personalized ones. Thus, they are faced with the problem of estimating heterogeneous causal effects. Unfortunately, estimating heterogeneous effects from randomized data requires large amounts of statistical power, and while observational data is often available in much larger quantities, the presence of unobserved confounders can make estimates derived from it highly suspect. We show that under some assumptions estimated heterogeneous treatment effects from observational data can preserve the rank ordering of the true heterogeneous causal effects. Such an approach is useful when observational data is large, the set of features is high-dimensional, and our priors about feature importance are weak. We probe the effectiveness of our approach in simulations and show a real-world example in a large-scale recommendations problem.
Akos Lada, A. Peysakhovich, Diego Aparicio, and Michael Bailey. "Observational Data for Heterogeneous Treatment Effects with Application to Recommender Systems." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.1145/3328526.3329558
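The rank-preservation claim can be illustrated with a deliberately simple simulation in which unobserved confounding shifts every segment's observational estimate by the same additive bias (a crude stand-in for the paper's actual assumptions): the estimates are wrong in level, yet the ordering of segments survives.

```python
import random

random.seed(0)

# True heterogeneous effects for 8 customer segments.
true_effects = [0.5 * g for g in range(8)]

# Observational estimates: true effect + a common confounding bias
# (the key simplifying assumption of this sketch) + estimation noise.
bias = 2.0
observed = [t + bias + random.gauss(0, 0.01) for t in true_effects]

def ranks(xs):
    """Rank position of each element (0 = smallest)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

rank_preserved = ranks(true_effects) == ranks(observed)
```

For targeting decisions only the ranking matters, which is why biased-in-level observational estimates can still be useful.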
We describe our experience with designing and running a matching market for the Israeli "Mechinot" gap-year programs. The main conceptual challenge in the design of this market was the rich set of diversity considerations, which necessitated the development of an appropriate preference-specification language along with corresponding choice-function semantics, which we also theoretically analyze to a certain extent. This market was run for the first time in January 2018 and matched 1,607 candidates (out of a total of 2,580 candidates) to 35 different programs, and has been adopted by the Joint Council of the "Mechinot" gap-year programs for the foreseeable future.
Yannai A. Gonczarowski, Lior Kovalio, N. Nisan, and Assaf Romm. "Matching for the Israeli 'Mechinot' Gap-Year Programs: Handling Rich Diversity Requirements." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.1145/3328526.3329620
As algorithmic prediction systems have become widespread, fears that these systems may inadvertently discriminate against members of underrepresented populations have grown. With the goal of understanding fundamental principles that underpin the growing number of approaches to mitigating algorithmic discrimination, we investigate the role of information in fair prediction. A common strategy for decision-making uses a predictor to assign individuals a risk score; then, individuals are selected or rejected on the basis of this score. In this work, we study a formal framework for measuring the information content of predictors. Central to the framework is the notion of a refinement; intuitively, a refinement of a predictor z increases the overall informativeness of the predictions without losing the information already contained in z. We show that increasing information content through refinements improves the downstream selection rules across a wide range of fairness measures (e.g. true positive rates, false positive rates, selection rates). In turn, refinements provide a simple but effective tool for reducing disparity in treatment and impact without sacrificing the utility of the predictions. Our results suggest that in many applications, the perceived "cost of fairness" results from an information disparity across populations, and thus, may be avoided with improved information.
Sumegha Garg, Michael P. Kim, and Omer Reingold. "Tracking and Improving Information in the Service of Fairness." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.1145/3328526.3329624
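A toy instance shows why refinements help downstream selection: a coarse predictor that assigns each group its mean score cannot distinguish strong from weak candidates inside a group, while a refined predictor can. (The four-person example and the perfectly refined predictor below are our illustrative choices, not the paper's construction.)

```python
from itertools import combinations

# True risk of a positive outcome for four individuals in two groups.
risks = {"a1": 0.9, "a2": 0.1, "b1": 0.6, "b2": 0.4}
# Coarse predictor: everyone gets their group's mean score (0.5 in both).
group_mean = {"a1": 0.5, "a2": 0.5, "b1": 0.5, "b2": 0.5}

def best_selections(scores, k=2):
    """All size-k selections that are optimal according to `scores`."""
    pools = list(combinations(scores, k))
    top = max(sum(scores[i] for i in sel) for sel in pools)
    return [sel for sel in pools if sum(scores[i] for i in sel) == top]

# Worst-case (over tie-breaking) expected true positives of each policy.
refined_tp = min(sum(risks[i] for i in sel) for sel in best_selections(risks))
coarse_tp = min(sum(risks[i] for i in sel) for sel in best_selections(group_mean))
```

Under the coarse predictor every selection looks equally good, so an unlucky tie-break picks the two lowest-risk people; the refinement removes that ambiguity without any "cost of fairness".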
In this paper we study a model of information consumption where consumers sequentially interact with a platform that offers a menu of signals (posts) about an underlying state of the world (fact). At each time, incapable of consuming all posts, consumers screen the posts and only select (and consume) one from the offered menu. We show that in the presence of uncertainty about the accuracy of these posts, and as the number of posts increases, adverse effects such as slow learning and polarization arise. Specifically, we establish that, in this setting, bias emerges as a consequence of the consumer's screening process. Namely, consumers, in their quest to choose the post that reduces their uncertainty about the state of the world, choose to consume the post that is closest to their own beliefs. We study the evolution of beliefs and show that this screening bias slows down learning, with the speed of learning decreasing in the menu size. Further, we show that society becomes polarized during the prolonged learning process even in situations where its belief distribution was not a priori polarized.
Gad Allon, K. Drakopoulos, and V. Manshadi. "Information Inundation on Platforms and Implications." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.2139/ssrn.3385627
Public discourse on pay transparency has not focused on equilibrium effects: how greater transparency impacts hiring and bargaining. To study these effects, we combine a dynamic wage-bargaining model with data from online markets for low-skill, temporary jobs that differ in their level of transparency. Wages are more equal, but lower, under transparency. Transparency increases hiring and employer profits, with the latter rising 27% in an online field experiment. A key intuition is that high transparency commits employers to negotiating aggressively, because a highly paid worker's salary affects negotiations with other workers. We discuss implications for the gender wage gap and employers' endogenous transparency choices.
Zoë B. Cullen and Bobak Pakzad-Hurson. "Equilibrium Effects of Pay Transparency in a Simple Labor Market: Extended Abstract." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.1145/3328526.3329645
We study combinatorial auctions with interdependent valuations, where each agent $i$ has a private signal $s_i$ that captures her private information and the valuation function of every agent depends on the entire signal profile $s = (s_1, \ldots, s_n)$. The literature in economics shows that the interdependent model gives rise to strong impossibility results and identifies assumptions under which optimal solutions can be attained. The computer science literature provides approximation results for simple single-parameter settings (mostly single-item auctions or matroid feasibility constraints). Both bodies of literature focus largely on valuations satisfying a technical condition termed single crossing (or variants thereof). We consider the class of submodular over signals (SOS) valuations (without imposing any single crossing-type assumption) and provide the first welfare approximation guarantees for multidimensional combinatorial auctions achieved by universally ex post incentive-compatible, individually rational mechanisms. Our main results are (i) a 4-approximation for any single-parameter downward-closed setting with single-dimensional signals and SOS valuations; (ii) a 4-approximation for any combinatorial auction with multidimensional signals and separable-SOS valuations; and (iii) $(k+3)$- and $(2\log k + 4)$-approximations for any combinatorial auction with single-dimensional signals, with $k$-sized signal space, for SOS and strong-SOS valuations, respectively. All of our results extend to a parameterized version of SOS, $d$-approximate SOS, while losing a factor that depends on $d$.
Funding: This work was supported by the Israel Science Foundation [Grant 317/17], the National Science Foundation [Grant CCF-1813135], the Air Force Office of Scientific Research [Grant FA9550-20-1-0212], and the H2020 European Research Council [Grant 866132].
Alon Eden, M. Feldman, A. Fiat, Kira Goldner, and Anna R. Karlin. "Combinatorial Auctions with Interdependent Valuations: SOS to the Rescue." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.1145/3328526.3329759
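One common lattice formalization of "submodular over signals" is the condition v(s ∨ t) + v(s ∧ t) ≤ v(s) + v(t) over signal profiles, with coordinatewise max and min. The checker below enumerates this condition on a small discrete signal space; whether it matches the paper's exact SOS definition should be verified against the paper, so treat it as an illustrative sketch:

```python
from itertools import product
from math import sqrt

def is_sos(v, signal_space, n):
    """Check v(s ∨ t) + v(s ∧ t) <= v(s) + v(t) for all pairs of
    n-agent signal profiles over a discrete signal space."""
    profiles = list(product(signal_space, repeat=n))
    for s, t in product(profiles, repeat=2):
        join = tuple(max(a, b) for a, b in zip(s, t))
        meet = tuple(min(a, b) for a, b in zip(s, t))
        if v(join) + v(meet) > v(s) + v(t) + 1e-9:
            return False
    return True

concave_of_sum = lambda s: sqrt(sum(s))  # concave in total signal: satisfies the condition
product_val = lambda s: s[0] * s[1]      # strictly supermodular: violates it
```

Such enumeration only works for tiny signal spaces, but it makes the diminishing-returns-in-signals intuition behind SOS tangible.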
We consider a setting where agents in a social network take binary actions, which exhibit local strategic complementarities. In particular, the payoff of each agent depends on the number of her neighbors who take action 1, as well as an underlying state of the world. The agents are a priori uninformed about the state, which belongs to an interval of the real line. An information designer (sender) can commit to a public signaling mechanism, which once the state is realized reveals a public signal to all the agents. Agents update their posterior about the state using the realization of the public signal, and possibly change their actions. The objective of the information designer is to maximize the expected activity level, i.e., the expected total number of agents who take action 1. How should the information designer choose her public signaling mechanism to achieve this objective? This is the first paper to study the design of public signaling mechanisms in social networks, and its main contribution is to provide an answer to this question.
Ozan Candogan. "Persuasion in Networks: Public Signals and k-Cores." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.2139/ssrn.3346144
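The title's k-core connection can be made concrete: when each agent is willing to take action 1 only if at least k of her neighbors do, the largest set of agents that can sustain adoption is exactly the graph's k-core. A minimal peeling implementation of that standard notion (our illustrative code, not the paper's algorithm):

```python
def k_core(adj, k):
    """Return the k-core of an undirected graph given as {node: set(neighbors)}.

    Repeatedly peel nodes with fewer than k surviving neighbors; what
    remains is the unique maximal subgraph of minimum degree >= k.
    """
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if len(adj[v] & alive) < k:
                alive.discard(v)
                changed = True
    return alive

# A triangle with a pendant node: the 2-core is the triangle.
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
core = k_core(graph, 2)
```

Note how peeling cascades: once node 4 is removed, node 3's surviving degree is rechecked, mirroring how adoption can unravel outside a core.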
We consider an atomic congestion game with stochastic demand in which each player participates in the game with probability $p$, and incurs no cost with probability $1-p$. We assume that $p$ is common knowledge among all players and that players are independent. For congestion games with affine costs, we provide an analytic expression for the price of anarchy as a function of $p$, which is monotonically increasing and converges to the well-known bound of $5/2$ as $p \to 1$. On the other extreme, for $p \leq 1/4$ the bound is constant and equal to $4/3$ independently of the game structure and the number of players. We show that these bounds are tight and are attained on routing games with purely linear costs. Additionally, we also obtain tight bounds for the price of stability for all values of $p$.
R. Cominetti, M. Scarsini, M. Schröder, and N. Stier-Moses. "Price of Anarchy in Stochastic Atomic Congestion Games with Affine Costs." In Proceedings of the 2019 ACM Conference on Economics and Computation (EC '19), 2019. https://doi.org/10.1145/3328526.3329579
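The role of $p$ can be probed by brute force on a toy instance: two players, two affine-cost resources, each player committing to a resource before participation is realized. (The instance is our own choice for illustration, not the paper's tight construction; in it the equilibrium is efficient for small $p$ and the worst-case ratio grows toward $4/3$ as $p \to 1$.)

```python
from itertools import product

def poa(resources, p):
    """Worst pure Bayes-Nash social cost over optimal social cost for a
    two-player game. resources: list of (a, b), cost a*x + b on load x;
    each player is independently present with probability p."""
    def player_cost(r, same):
        # Expected cost of a present player on r; `same` = other player on r too.
        a, b = resources[r]
        return a * (1 + p * same) + b

    def social_cost(r1, r2):
        if r1 == r2:
            a, b = resources[r1]
            # Load N ~ Binomial(2, p): E[N] = 2p, E[N^2] = 2p(1-p) + 4p^2.
            en, en2 = 2 * p, 2 * p * (1 - p) + 4 * p * p
            return a * en2 + b * en
        return sum(p * (resources[r][0] + resources[r][1]) for r in (r1, r2))

    def is_nash(r1, r2):
        return all(
            player_cost(mine, mine == other) <= player_cost(dev, dev == other)
            for mine, other in ((r1, r2), (r2, r1))
            for dev in range(len(resources))
        )

    profiles = list(product(range(len(resources)), repeat=2))
    opt = min(social_cost(r1, r2) for r1, r2 in profiles)
    worst_nash = max(social_cost(r1, r2) for r1, r2 in profiles if is_nash(r1, r2))
    return worst_nash / opt

# Pigou-style instance: a linear link c_1(x) = x versus a constant link c_2(x) = 2.
game = [(1.0, 0.0), (0.0, 2.0)]
```

At small $p$ congestion is unlikely, so piling onto the linear link is both the equilibrium and the optimum; the gap only opens as participation becomes likely, consistent with the paper's monotonicity result.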