Seeding with Costly Network Information
Dean Eckles, Hossein Esfandiari, Elchanan Mossel, M. Amin Rahimian
DOI: 10.2139/ssrn.3386417
Seeding the most influential individuals based on the contact structure can substantially enhance the extent of a spread over a social network. Most of the influence maximization literature assumes knowledge of the entire network graph. In practice, however, obtaining full knowledge of the network structure is very costly. We propose polynomial-time algorithms that provide almost tight approximation guarantees using a bounded number of queries to the graph structure. We also provide impossibility results that lower-bound the query complexity and show the tightness of our guarantees.
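As a rough illustration of seeding under a query budget (this is a generic friendship-paradox heuristic, not the algorithm proposed in the paper, and the function name, graph representation, and parameters are invented for this sketch), the snippet below spends its queries asking random nodes to name one random neighbor and seeds the most frequently named nodes, so only a small portion of the graph is ever examined.

```python
import random

def one_hop_seeding(neighbors, k, query_budget, seed=0):
    """Query-limited seeding heuristic: each query asks a uniformly random node
    to reveal one uniformly random neighbor; the k most frequently named nodes
    are returned as seeds. Random neighbors are biased toward high-degree
    nodes, so influential seeds tend to surface from few queries."""
    rng = random.Random(seed)
    nodes = list(neighbors)
    counts = {}
    for _ in range(query_budget):
        u = rng.choice(nodes)
        if neighbors[u]:
            v = rng.choice(sorted(neighbors[u]))
            counts[v] = counts.get(v, 0) + 1
    ranked = sorted(counts, key=counts.get, reverse=True)
    fill = [u for u in nodes if u not in counts]   # pad if too few names collected
    rng.shuffle(fill)
    return (ranked + fill)[:k]

# Example: a star-heavy graph; the hub tends to be named most often.
graph = {0: {1, 2, 3, 4, 5}, 1: {0}, 2: {0}, 3: {0}, 4: {0}, 5: {0}, 6: {7}, 7: {6}}
print(one_hop_seeding(graph, k=2, query_budget=10))
```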
{"title":"Seeding with Costly Network Information","authors":"Dean Eckles, Hossein Esfandiari, Elchanan Mossel, M. Amin Rahimian","doi":"10.2139/ssrn.3386417","DOIUrl":"https://doi.org/10.2139/ssrn.3386417","url":null,"abstract":"Seeding the most influential individuals based on the contact structure can substantially enhance the extent of a spread over the social network. Most of the influence maximization literature assumes the knowledge of the entire network graph. However, in practice, obtaining full knowledge of the network structure is very costly. We propose polynomial-time algorithms that provide almost tight approximation guarantees using a bounded number of queries to the graph structure. We also provide impossibility results to lower bound the query complexity and show tightness of our guarantees.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133040697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning in Structured MDPs with Convex Cost Functions: Improved Regret Bounds for Inventory Management
Shipra Agrawal, Randy Jia
DOI: 10.1145/3328526.3329565
We consider a stochastic inventory control problem under censored demands, lost sales, and positive lead times. This is a fundamental problem in inventory management, with a significant literature establishing near-optimality of a simple class of policies called "base-stock policies" for the underlying Markov Decision Process (MDP), as well as convexity of the long-run average cost under those policies. We consider the relatively less studied problem of designing a learning algorithm for this problem when the underlying demand distribution is unknown. The goal is to bound the regret of the algorithm when compared to the best base-stock policy. We utilize the convexity properties and a newly derived bound on the bias of base-stock policies to establish a connection to stochastic convex bandit optimization. Our main contribution is a learning algorithm with a regret bound of Õ(L√T + D) for the inventory control problem. Here L is the fixed and known lead time, and D is an unknown parameter of the demand distribution, described roughly as the number of time steps needed to generate enough demand to deplete one unit of inventory. Notably, even though the state space of the underlying MDP is continuous and L-dimensional, our regret bounds depend linearly on L. Our results significantly improve the previously best known regret bounds for this problem, where the dependence on L was exponential and many further assumptions on the demand distribution were required. The techniques presented here may be of independent interest for other settings that involve large structured MDPs with convex cost functions.
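To make the environment concrete, here is a minimal simulator of a base-stock (order-up-to-S) policy with lost sales, censored demand, and a positive lead time; it illustrates the setting only, not the paper's learning algorithm, and the holding-cost and lost-sales-penalty parameters are assumptions of this sketch. Scanning S against the returned average cost is one way to see the convexity the abstract refers to.

```python
import random

def average_cost_base_stock(S, lead_time, demand_sampler, horizon,
                            holding_cost=1.0, lost_sale_penalty=4.0, seed=0):
    """Long-run average cost of the order-up-to-S policy with lost sales.
    State: on-hand stock plus a pipeline of the last lead_time orders
    (lead_time >= 1, matching the positive-lead-time setting).
    Each period the oldest order arrives, sales = min(demand, on-hand) are
    observed (excess demand is lost and never observed, i.e. censored), and a
    new order restores the inventory position to S."""
    rng = random.Random(seed)
    on_hand = S
    pipeline = [0] * lead_time
    total_cost = 0.0
    for _ in range(horizon):
        on_hand += pipeline.pop(0)            # order placed lead_time periods ago arrives
        demand = demand_sampler(rng)
        sales = min(demand, on_hand)
        total_cost += lost_sale_penalty * (demand - sales)
        on_hand -= sales
        total_cost += holding_cost * on_hand
        order = max(S - (on_hand + sum(pipeline)), 0)
        pipeline.append(order)
    return total_cost / horizon

# Example with uniform integer demand on {0, ..., 4}.
demand = lambda rng: rng.randint(0, 4)
print(average_cost_base_stock(S=8, lead_time=2, demand_sampler=demand, horizon=50_000))
```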
{"title":"Learning in Structured MDPs with Convex Cost Functions: Improved Regret Bounds for Inventory Management","authors":"Shipra Agrawal, Randy Jia","doi":"10.1145/3328526.3329565","DOIUrl":"https://doi.org/10.1145/3328526.3329565","url":null,"abstract":"We consider a stochastic inventory control problem under censored demands, lost sales, and positive lead times. This is a fundamental problem in inventory management, with significant literature establishing near-optimality of a simple class of policies called \"base-stock policies\" for the underlying Markov Decision Process (MDP), as well as convexity of long run average-cost under those policies. We consider the relatively less studied problem of designing a learning algorithm for this problem when the underlying demand distribution is unknown. The goal is to bound regret of the algorithm when compared to the best base-stock policy. We utilize the convexity properties and a newly derived bound on bias of base-stock policies to establish a connection to stochastic convex bandit optimization. Our main contribution is a learning algorithm with a regret bound of ~O (L√T+D) for the inventory control problem. Here L is the fixed and known lead time, and D is an unknown parameter of the demand distribution described roughly as the number of time steps needed to generate enough demand for depleting one unit of inventory. Notably, even though the state space of the underlying MDP is continuous and L-dimensional, our regret bounds depend linearly on L. Our results significantly improve the previously best known regret bounds for this problem where the dependence on L was exponential and many further assumptions on demand distribution were required. The techniques presented here may be of independent interest for other settings that involve large structured MDPs but with convex cost functions.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"85 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134476500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Vickrey Auction with a Single Duplicate Bidder Approximates the Optimal Revenue
Hu Fu, Christopher Liaw, Sikander Randhawa
DOI: 10.1145/3328526.3329597
Bulow and Klemperer's well-known result states that, in a single-item auction where the n bidders' values are independently and identically drawn from a regular distribution, the Vickrey auction with one additional bidder (a duplicate) extracts at least as much revenue as the optimal auction without the duplicate. Hartline and Roughgarden, in their influential 2009 paper, removed the requirement that the distributions be identical, at the cost of allowing the Vickrey auction to recruit n duplicates, one from each distribution, and relaxing its revenue advantage to a 2-approximation. In this work we restore Bulow and Klemperer's number of duplicates in Hartline and Roughgarden's more general setting. We show that recruiting a single duplicate from one of the distributions suffices for the Vickrey auction to 10-approximate the optimal revenue. We also show that in a k-unit auction, recruiting k duplicates suffices for the VCG auction to O(1)-approximate the optimal revenue. We also tighten the analysis of Hartline and Roughgarden's Vickrey auction with n duplicates: for two distributions, the Vickrey auction with two duplicates obtains at least 3/4 of the optimal revenue. This is tight, as it matches a lower bound of Hartline and Roughgarden. En route, we obtain a transparent analysis of their 2-approximation via a natural connection to Ronen's lookahead auction.
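The heterogeneous-distribution guarantees of the paper are not reproduced here; as background, the following Monte Carlo sketch checks the classical Bulow-Klemperer statement for i.i.d. Exponential(1) values, a regular distribution for which the optimal auction is simply a second-price auction with monopoly reserve 1. The distribution choice and sample sizes are assumptions of the sketch.

```python
import random

def bulow_klemperer_demo(n=3, trials=200_000, seed=1):
    """Compare (i) the optimal auction on n >= 2 i.i.d. Exponential(1) bidders,
    i.e. second price with the monopoly reserve r* = 1, against (ii) the plain
    Vickrey auction with one additional bidder. Bulow-Klemperer says the
    second quantity is at least the first."""
    rng = random.Random(seed)
    reserve = 1.0
    opt_total = vick_total = 0.0
    for _ in range(trials):
        vals = sorted(rng.expovariate(1.0) for _ in range(n))
        if vals[-1] >= reserve:                      # item sells under the reserve
            opt_total += max(vals[-2], reserve)
        extra = rng.expovariate(1.0)                 # the duplicate bidder
        vick_total += sorted(vals + [extra])[-2]     # second-highest of n + 1
    return opt_total / trials, vick_total / trials

opt, vickrey_plus_one = bulow_klemperer_demo()
print(opt, vickrey_plus_one)   # the second estimate should not be smaller
```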
{"title":"The Vickrey Auction with a Single Duplicate Bidder Approximates the Optimal Revenue","authors":"Hu Fu, Christopher Liaw, Sikander Randhawa","doi":"10.1145/3328526.3329597","DOIUrl":"https://doi.org/10.1145/3328526.3329597","url":null,"abstract":"Bulow and Klemperer's well-known result states that, in a single-item auction where the n bidders' values are independently and identically drawn from a regular distribution, the Vickrey auction with one additional bidder (a duplicate) extracts at least as much revenue as the optimal auction without the duplicate. Hartline and Roughgarden, in their influential 2009 paper, removed the requirement that the distributions be identical, at the cost of allowing the Vickrey auction to recruit n duplicates, one from each distribution, and relaxing its revenue advantage to a 2-approximation. In this work we restore Bulow and Klemperer's number of duplicates in Hartline and Roughgarden's more general setting. We show that recruiting a duplicate from one of the distributions suffices for the Vickrey auction to $10$-approximate the optimal revenue. We also show that in a k-unit auction, recruiting k duplicates suffices for the VCG auction to $O(1)$-approximate the optimal revenue. We also tighten the analysis for Hartline and Roughgarden's Vickrey auction with n duplicates. We show that, for two distributions, the Vickrey auction with two duplicates obtains at least $3/4$ of the optimal revenue. This is tight by meeting a lower bound by Hartline and Roughgarden. En route, we obtain a transparent analysis of their $2$-approximation, by a natural connection to Ronen's lookahead auction.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"94 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131771074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Implications of Pricing on Social Learning
Itai Arieli, Moran Koren, Rann Smorodinsky
DOI: 10.1145/3328526.3329554
We study the implications of endogenous pricing for learning and welfare in the classic herding model. When prices are determined exogenously, it is known that learning occurs if and only if signals are unbounded. By contrast, we show that learning can occur when signals are bounded, as long as non-conformism among consumers is scarce. More formally, learning happens if and only if signals exhibit the vanishing likelihood property introduced below. We discuss the implications of our results for potential market failure in the context of Schumpeterian growth with uncertainty over the value of innovations.
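For context on the exogenous-price benchmark mentioned above, the sketch below simulates the textbook herding dynamic with bounded binary signals and a price fixed at 1/2: with positive probability the public belief locks into a cascade on the wrong state, which is the failure of learning that the paper revisits under endogenous pricing. The signal precision q and the convention that indifferent agents follow their own signal are assumptions of this sketch.

```python
import math, random

def cascade_on_wrong_state(q=0.7, n_agents=200, trials=5_000, seed=0):
    """Fraction of runs in which the public belief ends up favoring the wrong
    state when signals are bounded and the price is fixed at 1/2.
    Beliefs are tracked as log-likelihood ratios (LLRs); an indifferent agent
    follows its own signal, and rational observers know this."""
    rng = random.Random(seed)
    step = math.log(q / (1 - q))                    # LLR weight of one signal
    wrong = 0
    for _ in range(trials):
        theta = rng.randint(0, 1)                   # true state
        public = 0.0                                # LLR implied by past actions
        for _ in range(n_agents):
            signal = theta if rng.random() < q else 1 - theta
            private = step if signal == 1 else -step
            total = public + private
            action = 1 if total > 0 else (0 if total < 0 else signal)
            # The action is informative only while it can still depend on the
            # signal; afterwards a cascade has started and learning stops.
            if abs(public) <= step + 1e-9:
                public += step if action == 1 else -step
        if (public > 0) != (theta == 1):
            wrong += 1
    return wrong / trials

print(cascade_on_wrong_state())   # strictly positive: learning fails
```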
{"title":"The Implications of Pricing on Social Learning","authors":"Itai Arieli, Moran Koren, Rann Smorodinsky","doi":"10.1145/3328526.3329554","DOIUrl":"https://doi.org/10.1145/3328526.3329554","url":null,"abstract":"We study the implications of endogenous pricing for learning and welfare in the classic herding model. When prices are determined exogenously, it is known that learning occurs if and only if signals are unbounded. By contrast, we show that learning can occur when signals are bounded as long as non-conformism among consumers is scarce. More formally, learning happens if and only if signals exhibit the vanishing likelihood property introduced bellow. We discuss the implications of our results for potential market failure in the context of Schumpeterian growth with uncertainty over the value of innovations.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125163679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pandora's Problem with Nonobligatory Inspection
Hedyeh Beyhaghi, Robert D. Kleinberg
DOI: 10.1145/3328526.3329626
Martin Weitzman's "Pandora's problem" furnishes the mathematical basis for optimal search theory in economics. Nearly 40 years later, Laura Doval introduced a version of the problem in which the searcher is not obligated to pay the cost of inspecting an alternative's value before selecting it. Unlike the original Pandora's problem, the version with nonobligatory inspection cannot be solved optimally by any simple ranking-based policy, and it is unknown whether there exists any polynomial-time algorithm to compute the optimal policy. This motivates the study of approximately optimal policies that are simple and computationally efficient. In this work we provide the first non-trivial approximation guarantees for this problem. We introduce a family of "committing policies" such that it is computationally easy to find and implement the optimal committing policy. We prove that the optimal committing policy is guaranteed to approximate the fully optimal policy within a factor of 1 − 1/e ≈ 0.63, and for the special case of two boxes we improve this factor to 4/5 and show that this approximation is tight for the class of committing policies.
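For reference, the sketch below computes Weitzman's reservation value of a box from an empirical value distribution; ranking boxes by this index is the heart of the classical solution under obligatory inspection, the baseline that is no longer optimal once inspection is nonobligatory. The committing policies of the paper are not implemented here, and the empirical-distribution input and bisection tolerance are assumptions of the sketch.

```python
def reservation_value(values, cost, tol=1e-9):
    """Weitzman reservation value sigma of a box: the sigma that solves
        cost = E[max(V - sigma, 0)],
    where V is uniform over the given list of sampled values."""
    def expected_surplus(sigma):
        return sum(max(v - sigma, 0.0) for v in values) / len(values)
    lo = min(values) - cost          # expected_surplus(lo) >= cost here
    hi = max(values)                 # expected_surplus(hi) == 0 <= cost
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if expected_surplus(mid) >= cost:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Under obligatory inspection, Weitzman's rule opens boxes in decreasing order
# of their reservation values and stops once the best value seen exceeds every
# unopened box's reservation value.
print(reservation_value(values=[0.0, 1.0, 3.0], cost=0.5))
```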
{"title":"Pandora's Problem with Nonobligatory Inspection","authors":"Hedyeh Beyhaghi, Robert D. Kleinberg","doi":"10.1145/3328526.3329626","DOIUrl":"https://doi.org/10.1145/3328526.3329626","url":null,"abstract":"Martin Weitzman's \"Pandora's problem\" furnishes the mathematical basis for optimal search theory in economics. Nearly 40 years later, Laura Doval introduced a version of the problem in which the searcher is not obligated to pay the cost of inspecting an alternative's value before selecting it. Unlike the original Pandora's problem, the version with nonobligatory inspection cannot be solved optimally by any simple ranking-based policy, and it is unknown whether there exists any polynomial-time algorithm to compute the optimal policy. This motivates the study of approximately optimal policies that are simple and computationally efficient. In this work we provide the first non-trivial approximation guarantees for this problem. We introduce a family of \"committing policies\" such that it is computationally easy to find and implement the optimal committing policy. We prove that the optimal committing policy is guaranteed to approximate the fully optimal policy within a 1-1/e = 0.63... factor, and for the special case of two boxes we improve this factor to 4/5 and show that this approximation is tight for the class of committing policies.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131330796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved Metric Distortion for Deterministic Social Choice Rules
Kamesh Munagala, Kangning Wang
DOI: 10.1145/3328526.3329550
In this paper, we study the metric distortion of deterministic social choice rules that choose a winning candidate from a set of candidates based on voter preferences. Voters and candidates are located in an underlying metric space, and a voter's cost equals her distance to the winning candidate. Ordinal social choice rules only have access to the ordinal preferences of the voters, which are assumed to be consistent with the metric distances. Our goal is to design an ordinal social choice rule with minimum distortion, which is the worst-case ratio, over all consistent metrics, between the social cost of the rule and that of the optimal omniscient rule with knowledge of the underlying metric space. The distortion of the best deterministic social choice rule was known to be between 3 and 5. It had been conjectured that any rule that only looks at the weighted tournament graph on the candidates cannot have distortion better than 5. We disprove this conjecture by presenting a weighted tournament rule with distortion 4.236. We design this rule by generalizing the classic notion of uncovered sets, and further show that this class of rules cannot have distortion better than 4.236. We then propose a new voting rule via an alternative generalization of uncovered sets. We show that if a candidate satisfying the criterion of this voting rule exists, then choosing such a candidate yields a distortion bound of 3, matching the lower bound. We present a combinatorial conjecture that implies distortion of 3, and verify it for small numbers of candidates and voters by computer experiments. Using our framework, we also show that selecting any candidate guarantees distortion of at most 3 when the weighted tournament graph is cyclically symmetric.
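The toy computation below is only meant to make the cost model concrete: it evaluates the social-cost ratio of a purely ordinal rule (plurality, for simplicity, rather than the tournament-based rules studied in the paper) on a single one-dimensional instance. Distortion is the worst case of this ratio over all consistent metrics, so one instance only gives a lower bound; the positions used are invented for this sketch.

```python
def social_cost(voters, candidate):
    """Sum of voter costs, where a voter's cost is its distance to the winner."""
    return sum(abs(v - candidate) for v in voters)

def plurality_winner(voters, candidates):
    """Winner by first-place votes only; the rule never sees the distances,
    it only sees the rankings they induce."""
    first = {c: 0 for c in candidates}
    for v in voters:
        first[min(candidates, key=lambda c: abs(v - c))] += 1
    return max(candidates, key=lambda c: first[c])

# A one-dimensional toy instance where the ordinal winner is far from optimal.
candidates = [0.0, 1.0]
voters = [0.49] * 51 + [1.0] * 49
w = plurality_winner(voters, candidates)                      # picks 0.0
opt = min(candidates, key=lambda c: social_cost(voters, c))   # picks 1.0
print(social_cost(voters, w) / social_cost(voters, opt))      # about 2.84
```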
{"title":"Improved Metric Distortion for Deterministic Social Choice Rules","authors":"Kamesh Munagala, Kangning Wang","doi":"10.1145/3328526.3329550","DOIUrl":"https://doi.org/10.1145/3328526.3329550","url":null,"abstract":"In this paper, we study the metric distortion of deterministic social choice rules that choose a winning candidate from a set of candidates based on voter preferences. Voters and candidates are located in an underlying metric space. A voter has cost equal to her distance to the winning candidate. Ordinal social choice rules only have access to the ordinal preferences of the voters that are assumed to be consistent with the metric distances. Our goal is to design an ordinal social choice rule with minimum distortion, which is the worst-case ratio, over all consistent metrics, between the social cost of the rule and that of the optimal omniscient rule with knowledge of the underlying metric space. The distortion of the best deterministic social choice rule was known to be between 3 and 5. It had been conjectured that any rule that only looks at the weighted tournament graph on the candidates cannot have distortion better than 5. In our paper, we disprove it by presenting a weighted tournament rule with distortion of 4.236. We design this rule by generalizing the classic notion of uncovered sets, and further show that this class of rules cannot have distortion better than 4.236. We then propose a new voting rule, via an alternative generalization of uncovered sets. We show that if a candidate satisfying the criterion of this voting rule exists, then choosing such a candidate yields a distortion bound of 3, matching the lower bound. We present a combinatorial conjecture that implies distortion of $3$, and verify it for small numbers of candidates and voters by computer experiments. Using our framework, we also show that selecting any candidate guarantees distortion of at most 3 when the weighted tournament graph is cyclically symmetric.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125632227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regression Equilibrium
Omer Ben-Porat, Moshe Tennenholtz
DOI: 10.1145/3328526.3329560
Prediction is a well-studied machine learning task, and prediction algorithms are core ingredients in online products and services. Despite their centrality in the competition between online companies that offer prediction-based products, the strategic use of prediction algorithms remains unexplored. The goal of this paper is to examine such strategic use. We introduce a novel game-theoretic setting based on the PAC learning framework, where each player (that is, a prediction algorithm competing with others) seeks to maximize the number of points for which it produces an accurate prediction and the others do not. We show that algorithms aiming at generalization may wittingly mispredict some points in order to perform better than others in expectation. We analyze the empirical game, i.e., the game induced on a given sample, prove that it always possesses a pure Nash equilibrium, and show that every better-response learning process converges. Moreover, our learning-theoretic analysis suggests that players can, with high probability, learn an approximate pure Nash equilibrium for the whole population using a small number of samples.
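A minimal sketch of the payoff function of the induced empirical game described above: a player scores a sample point when it alone predicts that point accurately. The accuracy tolerance and the toy constant predictors in the usage lines are assumptions of this sketch, not the paper's construction.

```python
def empirical_payoffs(xs, ys, predictors, tol=1e-9):
    """Payoff of each player (prediction function) on the sample (xs, ys):
    the count of points it predicts within tol while every other player
    misses. Points predicted by several players (or by none) score for
    nobody."""
    payoffs = [0] * len(predictors)
    for x, y in zip(xs, ys):
        correct = [abs(f(x) - y) <= tol for f in predictors]
        if sum(correct) == 1:
            payoffs[correct.index(True)] += 1
    return payoffs

# Two toy constant predictors on a three-point sample.
xs, ys = [0.0, 1.0, 2.0], [0.0, 0.0, 2.0]
print(empirical_payoffs(xs, ys, [lambda x: 0.0, lambda x: 2.0]))  # [2, 1]
```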
{"title":"Regression Equilibrium","authors":"Omer Ben-Porat, Moshe Tennenholtz","doi":"10.1145/3328526.3329560","DOIUrl":"https://doi.org/10.1145/3328526.3329560","url":null,"abstract":"Prediction is a well-studied machine learning task, and prediction algorithms are core ingredients in online products and services. Despite their centrality in the competition between online companies who offer prediction-based products, the strategic use of prediction algorithms remains unexplored. The goal of this paper is to examine strategic use of prediction algorithms. We introduce a novel game-theoretic setting that is based on the PAC learning framework, where each player (aka a prediction algorithm aimed at competition) seeks to maximize the sum of points for which it produces an accurate prediction and the others do not. We show that algorithms aiming at generalization may wittingly mispredict some points to perform better than others on expectation. We analyze the empirical game, i.e., the game induced on a given sample, prove that it always possesses a pure Nash equilibrium, and show that every better-response learning process converges. Moreover, our learning-theoretic analysis suggests that players can, with high probability, learn an approximate pure Nash equilibrium for the whole population using a small number of samples.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130792557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LP-based Approximation for Personalized Reserve Prices
M. Derakhshan, Negin Golrezaei, R. Leme
DOI: 10.1145/3328526.3329594
We study the problem of computing personalized reserve prices in eager second-price auctions without any assumption on the valuation distributions. Here, the input is a dataset that contains the submitted bids of n buyers in a set of auctions, and the goal is to return personalized reserve prices r that maximize the revenue earned on these auctions by running eager second-price auctions with reserve r. We present a novel LP formulation of this problem and a rounding procedure that achieves a (1 + 2(√2 − 1)e^(√2 − 2))^(−1) ≈ 0.684 approximation. This improves over the 1/2-approximation algorithm due to Roughgarden and Wang. We show that our analysis is tight for this rounding procedure. We also bound the integrality gap of the LP, which bounds the performance of any algorithm based on this LP.
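For concreteness, here is a minimal evaluator of the objective being optimized (not the paper's LP or rounding procedure): given a dataset of bid profiles and a vector of personalized reserves, it computes the revenue of running eager second-price auctions, in which bidders below their own reserve are removed first and the winner pays the larger of the runner-up's bid and its own reserve. The data layout is an assumption of this sketch.

```python
def eager_spa_revenue(bid_profiles, reserves):
    """Revenue of eager second-price auctions with personalized reserves.

    bid_profiles: list of dicts mapping bidder id -> bid (one dict per auction).
    reserves: dict mapping bidder id -> personal reserve price r_i.
    Only bidders who clear their own reserve participate; the highest eligible
    bid wins and pays max(second-highest eligible bid, winner's own reserve)."""
    total = 0.0
    for bids in bid_profiles:
        eligible = [(b, i) for i, b in bids.items() if b >= reserves.get(i, 0.0)]
        if not eligible:
            continue
        eligible.sort(reverse=True)
        top_bid, winner = eligible[0]
        runner_up = eligible[1][0] if len(eligible) > 1 else 0.0
        total += max(runner_up, reserves.get(winner, 0.0))
    return total

bids = [{"a": 5.0, "b": 4.0}, {"a": 1.0, "b": 4.0}]
print(eager_spa_revenue(bids, {"a": 2.0, "b": 3.5}))  # 4.0 + 3.5 = 7.5
```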
{"title":"LP-based Approximation for Personalized Reserve Prices","authors":"M. Derakhshan, Negin Golrezaei, R. Leme","doi":"10.1145/3328526.3329594","DOIUrl":"https://doi.org/10.1145/3328526.3329594","url":null,"abstract":"We study the problem of computing personalized reserve prices in eager second price auctions without having any assumption on valuation distributions. Here, the input is a dataset that contains the submitted bids of n buyers in a set of auctions and the goal is to return personalized reserve prices r that maximize the revenue earned on these auctions by running eager second price auctions with reserve r. We present a novel LP formulation to this problem and a rounding procedure which achieves a (1+2(√2-1)e√2-2)-1≅0.684-approximation. This improves over the 1/2-approximation Algorithm due to Roughgarden and Wang. We show that our analysis is tight for this rounding procedure. We also bound the integrality gap of the LP, which bounds the performance of any algorithm based on this LP.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131692235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Non-Bayesian Social Learning
Itai Arieli, Y. Babichenko, Segev Shlomov
DOI: 10.2139/ssrn.3381563
We study non-Bayesian social learning on large networks with a binary state space. Agents located in a network receive conditionally i.i.d. signals about the state. We refer to the initial distribution of signals as the information structure. In each step, all agents aggregate their belief with the beliefs of their neighbors according to some non-Bayesian rule; we refer to this aggregation rule as the learning dynamic. We say that a dynamic leads to learning if the beliefs of all agents converge to the correct state with a probability that approaches one along an increasing sequence of large networks. We say that a class of information structures p is learnable if there exists a learning dynamic that leads to learning for all information structures in p; that is, a single learning dynamic robustly leads to learning for all possible information structures in the class. We provide a necessary and sufficient characterization of learnable classes of information structures. Whenever learning is possible in a class p, it is also possible via a virtually additive learning dynamic, in which players map beliefs to virtual values and in each period simply sum up all neighbors' virtual values to deduce their new belief. In addition, we relax the common prior assumption and provide a sufficient condition for learning in the absence of a common prior.
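A sketch of one virtually additive dynamic of the kind described above: private beliefs are mapped to log-likelihood ratios, and in every round each agent's new virtual value is the sum of the current values in its closed neighborhood. The particular belief-to-value map, the inclusion of the agent's own value in the sum, and the small example graph are assumptions of this sketch; the paper's result is about the existence of such dynamics for learnable classes, not this specific choice.

```python
import math

def virtually_additive_dynamics(adjacency, signals, q, rounds=5):
    """adjacency: dict node -> set of neighbors; signals: dict node -> 0/1,
    each equal to the true state with probability q > 1/2.
    Virtual value = log-likelihood ratio of the private signal; every round
    each agent replaces its value with the sum over its closed neighborhood.
    Returns the final beliefs P(state = 1)."""
    step = math.log(q / (1 - q))
    value = {i: (step if signals[i] == 1 else -step) for i in adjacency}
    for _ in range(rounds):
        value = {i: value[i] + sum(value[j] for j in adjacency[i]) for i in adjacency}
    def to_belief(v):
        return 1.0 / (1.0 + math.exp(-v)) if v > -700 else 0.0  # avoid overflow
    return {i: to_belief(v) for i, v in value.items()}

# Tiny example: a 4-cycle where three of the four signals point to state 1.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(virtually_additive_dynamics(ring, {0: 1, 1: 1, 2: 0, 3: 1}, q=0.6))
```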
{"title":"Robust Non-Bayesian Social Learning","authors":"Itai Arieli, Y. Babichenko, Segev Shlomov","doi":"10.2139/ssrn.3381563","DOIUrl":"https://doi.org/10.2139/ssrn.3381563","url":null,"abstract":"We study non-Bayesian social learning in large networks and binary state space. Agents who are located in a network receive conditionally i.i.d. signals over the state. We refer to the initial distribution of signals as the information structure. In each step, all agents aggregate their belief with the beliefs of their neighbors according to some non-Bayesian rule. We refer to the aggregation rule as the learning dynamic. We say that a dynamic leads to learning if the beliefs of all agents converge to the correct state with a probability that approaches one in an increasing sequence of large networks. We say that a class of information structures p is learnable if there exists a learning dynamic that leads to learning for all information structures in p. Namely, there exists a single learning dynamic that robustly leads to learning for all possible information structures. We provide a necessary and sufficient characterization of learnable classes of information structures. Whenever learning is possible in a class p it is also possible via a virtually additive learning dynamic, where players map beliefs to virtual values and in each period they simply sum up all neighbors' virtual values to deduce their new belief. In addition, we relax the common prior assumption and provide a sufficient condition for learning in the absence of a common prior.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122684319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Budget-Feasible Mechanism Design for Non-Monotone Submodular Objectives: Offline and Online
Georgios Amanatidis, P. Kleer, G. Schäfer
DOI: 10.1145/3328526.3329622
The framework of budget-feasible mechanism design studies procurement auctions where the auctioneer (buyer) aims to maximize his valuation function subject to a hard budget constraint. We study the problem of designing truthful mechanisms that have good approximation guarantees and never pay the participating agents (sellers) more than the budget. We focus on the case of general (non-monotone) submodular valuation functions and derive the first truthful, budget-feasible, O(1)-approximation mechanisms that run in polynomial time in the value query model, for both offline and online auctions. Since the introduction of the problem by Singer (2010), obtaining efficient mechanisms for objectives that go beyond the class of monotone submodular functions has been elusive. Prior to our work, the only O(1)-approximation mechanism known for non-monotone submodular objectives required an exponential number of value queries. At the heart of our approach lies a novel greedy algorithm for non-monotone submodular maximization under a knapsack constraint. Our algorithm builds two candidate solutions simultaneously (to achieve a good approximation), yet ensures that agents cannot jump from one solution to the other (to implicitly enforce truthfulness). Ours is the first mechanism for the problem in which, crucially, the agents are not ordered according to their marginal value per cost. This allows us to appropriately adapt these ideas to the online setting as well. To further illustrate the applicability of our approach, we also consider the case where additional feasibility constraints are present, e.g., at most k agents can be selected. We obtain O(p)-approximation mechanisms for both monotone and non-monotone submodular objectives when the feasible solutions are the independent sets of a p-system. With the exception of additive valuation functions, no mechanisms were known for this setting prior to our work. Finally, we provide lower bounds suggesting that, when one cares about non-trivial approximation guarantees in polynomial time, our results are asymptotically best possible.
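The mechanism itself (payments, truthfulness, the two simultaneously grown candidate solutions) is not reproduced here. As a point of contrast, the sketch below is the standard greedy baseline for monotone submodular maximization under a knapsack constraint, which orders items by marginal value per cost, precisely the ordering the abstract says the new mechanism must avoid in the non-monotone case. The coverage function in the usage lines is an assumption of the sketch.

```python
def greedy_knapsack_submodular(items, costs, value, budget):
    """Classic baseline for monotone submodular maximization under a knapsack:
    repeatedly add the affordable item with the best marginal value per cost,
    then return the better of that solution and the single best affordable
    item. value(S) is a set function given as a Python callable; costs are
    strictly positive."""
    chosen, spent = set(), 0.0
    remaining = set(items)
    while True:
        best, best_ratio = None, 0.0
        for i in remaining:
            if spent + costs[i] > budget:
                continue
            gain = value(chosen | {i}) - value(chosen)
            if gain / costs[i] > best_ratio:
                best, best_ratio = i, gain / costs[i]
        if best is None:
            break
        chosen.add(best)
        spent += costs[best]
        remaining.discard(best)
    singles = [i for i in items if costs[i] <= budget]
    best_single = max(singles, key=lambda i: value({i}), default=None)
    if best_single is not None and value({best_single}) > value(chosen):
        return {best_single}
    return chosen

# Usage with a (monotone) coverage function: value(S) = size of the union.
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
cover = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_knapsack_submodular(items=[1, 2, 3], costs={1: 2, 2: 2, 3: 1},
                                 value=cover, budget=3))
```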
{"title":"Budget-Feasible Mechanism Design for Non-Monotone Submodular Objectives: Offline and Online","authors":"Georgios Amanatidis, P. Kleer, G. Schäfer","doi":"10.1145/3328526.3329622","DOIUrl":"https://doi.org/10.1145/3328526.3329622","url":null,"abstract":"The framework of budget-feasible mechanism design studies procurement auctions where the auctioneer (buyer) aims to maximize his valuation function subject to a hard budget constraint. We study the problem of designing truthful mechanisms that have good approximation guarantees and never pay the participating agents (sellers) more than the budget. We focus on the case of general (non-monotone) submodular valuation functions and derive the first truthful, budget-feasible and $O(1)$-approximation mechanisms that run in polynomial time in the value query model, for both offline and online auctions. Since the introduction of the problem by Singer citepSinger10, obtaining efficient mechanisms for objectives that go beyond the class of monotone submodular functions has been elusive. Prior to our work, the only $O(1)$-approximation mechanism known for non-monotone submodular objectives required an exponential number of value queries. At the heart of our approach lies a novel greedy algorithm for non-monotone submodular maximization under a knapsack constraint. Our algorithm builds two candidate solutions simultaneously (to achieve a good approximation), yet ensures that agents cannot jump from one solution to the other (to implicitly enforce truthfulness). Ours is the first mechanism for the problem where---crucially---the agents are not ordered according to their marginal value per cost. This allows us to appropriately adapt these ideas to the online setting as well. To further illustrate the applicability of our approach, we also consider the case where additional feasibility constraints are present, e.g., at most k agents can be selected. We obtain O(p)-approximation mechanisms for both monotone and non-monotone submodular objectives, when the feasible solutions are independent sets of a p-system. With the exception of additive valuation functions, no mechanisms were known for this setting prior to our work. Finally, we provide lower bounds suggesting that, when one cares about non-trivial approximation guarantees in polynomial time, our results are asymptotically best possible.","PeriodicalId":416173,"journal":{"name":"Proceedings of the 2019 ACM Conference on Economics and Computation","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127510240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}