Stability in Matching Markets with Complex Constraints
Thành Nguyen, Hai Nguyen, A. Teytelboym. DOI: 10.1145/3328526.3329639

We consider a new model of many-to-one matching markets in which agents with multi-unit demand aim to maximize a cardinal linear objective subject to multidimensional knapsack constraints. The choice functions of agents with multi-unit demand are therefore not substitutable. As a result, pairwise stable matchings may not exist and, even when they do, may be highly inefficient. We provide an algorithm that finds a group-stable matching that approximately satisfies all the multidimensional knapsack constraints. The degree of the constraint violation is proportional to the sparsity of the constraint matrix. The algorithm therefore provides practical error bounds for applications in several contexts, such as refugee resettlement, matching of children to daycare centers, and meeting diversity requirements in colleges. A novel ingredient in our algorithm is a combination of matching with contracts and Scarf's Lemma.
Computing Core-Stable Outcomes in Combinatorial Exchanges with Financially Constrained Bidders
M. Bichler, S. Waldherr. DOI: 10.1145/3328526.3329641

The computation of market equilibria is a fundamental and practically relevant research question. Advances in computational optimization now make it possible to organize large combinatorial markets in the field. While we know the computational complexity and the types of price functions necessary in combinatorial exchanges with quasi-linear preferences, prior literature did not consider financially constrained buyers. We aim for allocations and competitive equilibrium prices that respect budget constraints. Such constraints are an important concern for the design of real-world markets, but we show that with them the allocation and pricing problem becomes $\Sigma_2^p$-hard. Problems in this complexity class are rare, but ignoring budget constraints can lead to significant efficiency losses and instability. We introduce mixed integer bilevel linear programs (MIBLPs) to compute core prices, and effective column and constraint generation algorithms to solve them. While full core stability quickly becomes intractable, we show that small but realistic problem sizes can be solved if the designer limits attention to deviations by small coalitions. This n-coalition stability is a practical approach that tames the computational complexity of the general problem while providing a reasonable level of stability.
Sortition is an alternative approach to democracy, in which representatives are not elected but randomly selected from the population. Most electoral democracies fail to accurately represent even a handful of protected groups. By contrast, sortition guarantees that every subset of the population will in expectation fill their fair share of the available positions. This fairness property remains satisfied when the sample is stratified based on known features. Moreover, stratification can greatly reduce the variance in the number of positions filled by any unknown group, as long as this group correlates with the strata. Our main result is that stratification cannot increase this variance by more than a negligible factor, even in the presence of indivisibilities and rounding. When the unknown group is unevenly spread across strata, we give a guarantee on the reduction in variance with respect to uniform sampling. We also contextualize stratification and uniform sampling in the space of fair sampling algorithms. Finally, we apply our insights to an empirical case study.
{"title":"No Stratification Without Representation","authors":"Gerdus Benade, Paul Gölz, A. Procaccia","doi":"10.1145/3328526.3329578","DOIUrl":"https://doi.org/10.1145/3328526.3329578","url":null,"abstract":"Sortition is an alternative approach to democracy, in which representatives are not elected but randomly selected from the population. Most electoral democracies fail to accurately represent even a handful of protected groups. By contrast, sortition guarantees that every subset of the population will in expectation fill their fair share of the available positions. This fairness property remains satisfied when the sample is stratified based on known features. Moreover, stratification can greatly reduce the variance in the number of positions filled by any unknown group, as long as this group correlates with the strata. Our main result is that stratification cannot increase this variance by more than a negligible factor, even in the presence of indivisibilities and rounding. When the unknown group is unevenly spread across strata, we give a guarantee on the reduction in variance with respect to uniform sampling. We also contextualize stratification and uniform sampling in the space of fair sampling algorithms. Finally, we apply our insights to an empirical case study.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"266 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114041220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Complexity of Black-Box Mechanism Design with Priors
Evangelia Gergatsouli, Brendan Lucier, Christos Tzamos. DOI: 10.1145/3328526.3329648

We study black-box reductions from mechanism design to algorithm design for welfare maximization in settings of incomplete information. Given oracle access to an algorithm for an underlying optimization problem, the goal is to simulate an incentive compatible mechanism. The mechanism will be evaluated on its expected welfare, relative to the algorithm provided, and its complexity is measured by the time (and queries) needed to simulate the mechanism on any input. While it is known that black-box reductions are not possible in many prior-free settings, settings with priors appear more promising: there are known reductions for Bayesian incentive compatible (BIC) mechanism design for general classes of welfare maximization problems. This dichotomy raises the question: which mechanism design problems admit black-box reductions, and which do not? Our main result is that black-box mechanism design is impossible under two of the simplest settings not captured by known positive results. First, for the problem of allocating n goods to a single buyer whose valuation is additive and independent across the goods, subject to a downward-closed constraint on feasible allocations, we show that there is no polytime (in n) BIC black-box reduction for expected welfare maximization. Second, for the setting of multiple single-parameter agents, where polytime BIC reductions are known, we show that no polytime reductions exist when the incentive requirement is tightened to Max-In-Distributional-Range. In each case, we show that achieving a sub-polynomial approximation to the expected welfare requires exponentially many queries, even when the set of feasible allocations is known to be downward-closed.
Managing Market Mechanism Transitions: A Randomized Trial of Decentralized Pricing Versus Platform Control
Apostolos Filippas, Srikanth Jagabathula, A. Sundararajan. DOI: 10.1145/3328526.3329654

We report on a randomized trial conducted during a market design transition on a sharing economy platform, where providers who formerly set rental prices for their assets were randomly assigned to groups with varying levels of pricing control. Even when faced with the prospect of significantly higher revenues, providers retaliate against the centralization of pricing by exiting the platform, reducing asset availability and cancelling transactions. Allowing providers to retain partial control lowers retaliation substantially even though providers do not frequently utilize this additional flexibility. We discuss information asymmetry, divergent incentives, and psychological contract violation as alternative explanations for our results.
The Value of Price Discrimination in Large Random Networks
Jiali Huang, Ankur Mani, Zizhuo Wang. DOI: 10.2139/ssrn.3368458

We study the value of price discrimination in large random networks. Recent trends in industry suggest that firms are increasingly using information about social networks to offer personalized prices to individuals based upon their positions in the network. In the presence of positive network externalities, firms aim to increase their profits by offering discounts to influential individuals who can stimulate consumption by other individuals at a higher price. However, the lack of transparency in discriminative pricing can reduce consumer satisfaction and create mistrust. Recent research has focused on the computation of optimal prices in deterministic networks under positive externalities. We would like to answer the question: how valuable is such discriminative pricing? We find, surprisingly, that the value of such pricing policies (the increase in profits due to price discrimination) in very large random networks is often insignificant. We provide the exact rates at which this value grows with the size of the random network for different ranges of network densities.
Prophet Inequalities for I.I.D. Random Variables from an Unknown Distribution
J. Correa, Paul Dütting, Felix A. Fischer, Kevin Schewior. DOI: 10.1145/3328526.3329627

A central object in optimal stopping theory is the single-choice prophet inequality for independent, identically distributed random variables: given a sequence of random variables X_1, ..., X_n drawn independently from a distribution F, the goal is to choose a stopping time τ so as to maximize α such that for all distributions F we have E[X_τ] ≥ α · E[max_t X_t]. What makes this problem challenging is that the decision whether τ = t may only depend on the values of the random variables X_1, ..., X_t and on the distribution F. For a long time the best known bound for the problem had been α ≥ 1 − 1/e ≈ 0.632, but quite recently a tight bound of α ≈ 0.745 was obtained. The case where F is unknown, such that the decision whether τ = t may depend only on the values of the random variables X_1, ..., X_t, is equally well motivated but has received much less attention. A straightforward guarantee for this case of α ≥ 1/e ≈ 0.368 can be derived from the solution to the secretary problem, where an arbitrary set of values arrives in random order and the goal is to maximize the probability of selecting the largest value. We show that this bound is in fact tight. We then investigate the case where the stopping time may additionally depend on a limited number of samples from F, and show that even with o(n) samples α ≤ 1/e. On the other hand, n samples allow for a significant improvement, while O(n^2) samples are equivalent to knowledge of the distribution: specifically, with n samples α ≥ 1 − 1/e ≈ 0.632 and α ≤ ln(2) ≈ 0.693, and with O(n^2) samples α ≥ 0.745 − ε for any ε > 0.
School Choice in Chile
J. Correa, R. Epstein, Juan F. Escobar, Ignacio Rios, Bastián Bahamondes, Carlos Bonet, Natalie Epstein, Nicolas Aramayo, Martin Castillo, Andrés Cristi, Boris Epstein. DOI: 10.1145/3328526.3329580

Centralized school admission mechanisms are an attractive way of improving social welfare and fairness in large educational systems. In this paper we report on the design and implementation of the newly established school choice mechanism in Chile, where over 274,000 students applied to more than 6,400 schools. The Chilean system presents unprecedented design challenges that make it unique. On the one hand, it is a simultaneous nationwide system, making it one of the largest school admission problems worldwide. On the other hand, the system runs at all school levels, from Pre-K to 12th grade, raising at least two issues of utmost importance: the system needs to guarantee students applying for a school change their current seat, and it has to favor the assignment of siblings to the same school. As in other systems around the world, we develop a model based on the celebrated Deferred Acceptance algorithm. The algorithm deals not only with the aforementioned issues but also with further practical features such as soft bounds and overlapping types. In this context we analyze new stability definitions, present the results of the implementation, and conduct simulations showing the benefits of the system's innovations.
On the Price of Anarchy for flows over time
J. Correa, Andrés Cristi, Tim Oosterwijk. DOI: 10.1145/3328526.3329593

Dynamic network flows, or network flows over time, constitute an important model for real-world situations where steady states are unusual, such as urban traffic and the Internet. These applications immediately raise the issue of analyzing dynamic network flows from a game-theoretic perspective. In this paper we study dynamic equilibria in the deterministic fluid queuing model in single-source single-sink networks, arguably the most basic model for flows over time. In the last decade we have witnessed significant developments in the theoretical understanding of the model. However, several fundamental questions remain open. One of the most prominent concerns the Price of Anarchy, measured as the worst-case ratio between the minimum time required to route a given amount of flow from the source to the sink and the time a dynamic equilibrium takes to perform the same task. Our main result states that if we could reduce the inflow of the network in a dynamic equilibrium, then the Price of Anarchy is exactly $e/(e-1) \approx 1.582$. This significantly extends a result by Bhaskar, Fleischer, and Anshelevich (SODA 2011). Furthermore, our methods allow us to determine that the Price of Anarchy in parallel-link networks is exactly $4/3$. Finally, we argue that if a certain very natural monotonicity conjecture holds, the Price of Anarchy in the general case is exactly $e/(e-1)$.
Allocation for Social Good: Auditing Mechanisms for Utility Maximization
Taylor Lundy, Alexander Wei, Hu Fu, S. Kominers, Kevin Leyton-Brown. DOI: 10.1145/3328526.3329623

We consider the problem of a nonprofit organization ("center") that must divide resources among subsidiaries ("agents"), based on agents' reported demand forecasts, with the aim of maximizing social good (agents' valuations for the allocation minus any payments that are imposed on them). We investigate the impact of a common feature of the nonprofit setting: the center's ability to audit agents who receive allocations, comparing their actual consumption with their reported forecasts. We show that auditing increases the power of mechanisms for utility maximization, both in unit-demand settings and beyond: in unit-demand settings, we consider both constraining ourselves to an allocation function studied in past work and allowing the allocation function to vary; beyond unit demand, we adopt the VCG allocation but modify the payment rule. Our ultimate goal is to show how to leverage auditing mechanisms to maximize utility in repeated allocation problems where payments are not possible; we show how any static auditing mechanism can be transformed to operate in such a setting, using the threat of reduced future allocations in place of monetary payments.