An efficient algorithm for finding the ground state of 1D gapped local Hamiltonians
Zeph Landau, U. Vazirani, Thomas Vidick. DOI: 10.1145/2554797.2554825

Computing ground states of local Hamiltonians is a fundamental problem in condensed matter physics. The problem is known to be QMA-complete, even for one-dimensional Hamiltonians [1]. This means that we do not even expect there to be a sub-exponential-size description of the ground state that allows efficient computation of local observables such as the energy. In sharp contrast, the heuristic density matrix renormalization group (DMRG) algorithm invented two decades ago [5] has been remarkably successful in practice on one-dimensional problems. The situation is reminiscent of the unexplained success of the simplex algorithm before the advent of ellipsoid and interior-point methods. Is there a principled explanation for this, in the form of a large class of one-dimensional Hamiltonians whose ground states can be provably efficiently approximated? Here we give such an algorithm for gapped one-dimensional Hamiltonians: our algorithm outputs an (inverse-polynomial) approximation to the ground state, expressed as a matrix product state (MPS) of polynomial bond dimension. The running time of the algorithm is polynomial in the number of qudits n and in the inverse approximation quality 1/δ, for fixed local dimension d and gap Δ > 0. A key ingredient of our algorithm is a new construction of an operator called an approximate ground state projector (AGSP), a concept first introduced in [2] to derive an improved area law for gapped one-dimensional systems [3]. For this purpose the AGSP has to be efficiently constructible; the particular AGSP we construct relies on matrix-valued Chernoff bounds [4]. Other ingredients of the algorithm include the use of convex programming, recently discovered structural features of gapped 1D quantum systems [2], and new techniques for manipulating and bounding the complexity of matrix product states.
Non-commutative arithmetic circuits with division
P. Hrubes, A. Wigderson. DOI: 10.1145/2554797.2554805

We initiate the study of the complexity of arithmetic circuits with division gates over non-commuting variables. Such circuits and formulas compute non-commutative rational functions, which, despite their name, can no longer be expressed as ratios of polynomials. We prove some lower and upper bounds, completeness and simulation results, as follows. If X is an n × n matrix whose entries are n^2 distinct mutually non-commuting variables, we show that: (i) X^{-1} can be computed by a circuit of polynomial size; (ii) every formula computing some entry of X^{-1} must have size at least 2^{Ω(n)}. We also show that matrix inverse is complete in the following sense: (i) assume that a non-commutative rational function f can be computed by a formula of size s; then there exists an invertible 2s × 2s matrix A whose entries are variables or field elements such that f is an entry of A^{-1}; (ii) if f is a non-commutative polynomial computed by a formula without inverse gates, then A can be taken to be an upper triangular matrix with field elements on the diagonal. We show how divisions can be eliminated from non-commutative circuits and formulas that compute polynomials, and we address the non-commutative version of the "rational function identity testing" problem. As it happens, the complexity of both of these procedures depends on a single open problem in invariant theory.
The computational hardness of pricing compound options
M. Braverman, Kanika Pasricha. DOI: 10.1145/2554797.2554809

It is generally assumed that one can make a financial asset out of any underlying event or combination thereof, and then sell a security based on it. We show that while this is theoretically true from the financial-engineering perspective, compound securities might be intractable to price. Even with no information asymmetries or adversarial sellers, it might be computationally intractable to put a value on such securities, and the associated computational complexity might afford an advantage to the party with more computing power. We prove that the problem of pricing an option on a single security with unbounded compounding is PSPACE-hard, even when the behavior of the underlying security is computationally tractable. We also show that in the oracle model, even when compounding is limited to at most k layers, the complexity of pricing securities grows exponentially in k.
{"title":"Session details: Session 2: 10:30--10:40","authors":"N. Linial","doi":"10.1145/3255054","DOIUrl":"https://doi.org/10.1145/3255054","url":null,"abstract":"","PeriodicalId":382856,"journal":{"name":"Proceedings of the 5th conference on Innovations in theoretical computer science","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128788362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lattice-based FHE as secure as PKE
Zvika Brakerski, V. Vaikuntanathan. DOI: 10.1145/2554797.2554799

We show that (leveled) fully homomorphic encryption (FHE) can be based on the hardness of O(n^{1.5+ε})-approximation for lattice problems (such as GapSVP) under quantum reductions, for any ε > 0 (or O(n^{2+ε})-approximation under classical reductions). This matches the best known hardness for "regular" (non-homomorphic) lattice-based public-key encryption up to the ε factor. A number of previous methods had hit a roadblock at quasipolynomial approximation. (As usual, a circular-security assumption can be used to achieve a non-leveled FHE scheme.) Our approach consists of three main ideas: noise-bounded sequential evaluation of high-fan-in operations; circuit sequentialization using Barrington's Theorem; and finally, successive dimension-modulus reduction.
Parameterized testability
K. Iwama, Yuichi Yoshida. DOI: 10.1145/2554797.2554843

This paper studies property testing for NP optimization problems with parameter k under the general graph model, augmented with the ability to sample random edges. It is shown that a variety of such problems, including k-Vertex Cover, k-Feedback Vertex Set, k-Multicut, k-Path-Freeness and k-Dominating Set, are constant-time testable if k is constant. It should be noted that the first four problems are fixed-parameter tractable (FPT), and it turns out that algorithmic techniques from their FPT algorithms (branch-and-bound search, color coding, etc.) are also useful for our testers. k-Dominating Set is W[2]-hard, but we can still test the property in constant time, since the definition of ε-farness makes the problem trivial for non-sparse graphs, which are the source of hardness for the original optimization problem. We also consider k-Odd Cycle Transversal, which is another well-known FPT problem, but we only give a sublinear-time tester when k is a constant.
Iterated group products and leakage resilience against NC^1
Eric Miles. DOI: 10.1145/2554797.2554822

We show that if NC^1 ≠ L, then for every element α of the alternating group A_t, circuits of depth O(log t) cannot distinguish between a uniform vector over (A_t)^t with product equal to α and one with product equal to the identity. Combined with a recent construction by the author and Viola in the setting of leakage-resilient cryptography [STOC '13], this gives a compiler that produces circuits withstanding leakage from NC^1 (assuming NC^1 ≠ L). For context, leakage from NC^1 breaks nearly all previous constructions, and security against leakage from P is impossible. We build on work by Cook and McKenzie [J. Algorithms '87] establishing the relationship between L (logarithmic space) and the symmetric group S_t. Our techniques include a novel algorithmic use of commutators to manipulate the cycle structure of permutations in A_t.
High dimensional expanders and property testing
T. Kaufman, A. Lubotzky. DOI: 10.1145/2554797.2554842

We show that the high dimensional expansion property for simplicial complexes, as defined by Gromov, Linial and Meshulam, is a form of testability. Namely, a simplicial complex is a high dimensional expander if and only if a suitable property is testable. Using this connection, we derive several testability results.
Welfare maximization and truthfulness in mechanism design with ordinal preferences
Deeparnab Chakrabarty, Chaitanya Swamy. DOI: 10.1145/2554797.2554810

In this paper, we study mechanism design problems in the ordinal setting, wherein the preferences of agents are described by orderings over outcomes rather than by specific numerical values attached to them. This setting is relevant when agents can compare outcomes but are unable to evaluate precise utilities for them; such a situation arises in diverse contexts, including voting and matching markets. Our paper addresses two issues that arise in ordinal mechanism design. First, to design social-welfare-maximizing mechanisms, one needs to be able to quantitatively measure the welfare of an outcome, which is not clear in the ordinal setting. Second, since the impossibility results of Gibbard and Satterthwaite [14, 25] force one to move to randomized mechanisms, one needs a more nuanced notion of truthfulness. We propose rank approximation as a metric for measuring the quality of an outcome, which allows us to evaluate mechanisms based on worst-case performance, and lex-truthfulness as a notion of truthfulness for randomized ordinal mechanisms. Lex-truthfulness is stronger than notions studied in the literature, yet flexible enough to admit a rich class of mechanisms circumventing classical impossibility results. We demonstrate the usefulness of these notions by devising lex-truthful mechanisms achieving good rank-approximation factors, both in the general ordinal setting and in structured settings such as (one-sided) matching markets and their generalizations, matroid and scheduling markets.
Optimal provision-after-wait in healthcare
M. Braverman, Jing Chen, Sampath Kannan. DOI: 10.1145/2554797.2554846

We investigate computational and mechanism-design aspects of optimal scarce-resource allocation, where the primary rationing mechanism is waiting time. Specifically, we consider the problem of allocating medical treatments to a population of patients. Each patient demands exactly one unit of treatment and can choose to be treated in one of k hospitals, H_1, ..., H_k. Different hospitals have different costs per treatment, which are fully paid by a third party, the "payer", and do not accrue to the patients. The payer has a fixed budget B and can only cover a limited number of treatments in the more expensive hospitals. Access to over-demanded hospitals is rationed through waiting times: each hospital H_i will have waiting time w_i. In equilibrium, each patient chooses his most preferred hospital given his intrinsic preferences and the waiting times. The payer thus computes the waiting times and the number of treatments authorized for each hospital, so that in equilibrium the budget constraint is satisfied and social welfare is maximized. We show that even if the patients' preferences are known to the payer, the task of optimizing social welfare in equilibrium subject to the budget constraint is NP-hard. We also show that, with a constant number of hospitals, if the budget constraint can be relaxed from B to (1+ε)B for an arbitrarily small constant ε > 0, then the original optimum under budget B can be approximated very efficiently. Next, we study the endogenous emergence of waiting times from the dynamics between hospitals and patients, and show that there is no need for the payer to explicitly enforce the optimal equilibrium waiting times. When patients arrive uniformly over time and have generic types, all the payer needs to do is enforce the total amount of money he is willing to pay to each hospital. The waiting times then adjust according to demand, and the dynamics always converge to the desired waiting times in finite time. We then go beyond equilibrium solutions and investigate the optimization problem over a much larger class of mechanisms containing the equilibrium ones as special cases. In the setting with two hospitals, we show that under a natural assumption on the patients' preference profiles, optimal welfare is in fact attained by the randomized assignment mechanism, which allocates patients to hospitals at random subject to the budget constraint but avoids waiting times. Finally, we discuss potential policy implications of our results, as well as follow-up directions and open problems.