A General Framework for Learning-Augmented Online Allocation

I. Cohen, Debmalya Panigrahi
DOI: 10.48550/arXiv.2305.18861
Journal: International Colloquium on Automata, Languages and Programming
Publication date: 2023-05-30
Citations: 1

Abstract

Online allocation is a broad class of problems where items arriving online have to be allocated to agents who have a fixed utility/cost for each assigned item, so as to maximize/minimize some objective. This framework captures a broad range of fundamental problems such as the Santa Claus problem (maximizing minimum utility), Nash welfare maximization (maximizing the geometric mean of utilities), makespan minimization (minimizing maximum cost), minimization of $\ell_p$-norms, and so on. We focus on divisible items (i.e., fractional allocations) in this paper. Even for divisible items, these problems are characterized by strong super-constant lower bounds in the classical worst-case online model. In this paper, we study online allocation in the {\em learning-augmented} setting, i.e., where the algorithm has access to some additional (machine-learned) information about the problem instance. We introduce a {\em general} algorithmic framework for learning-augmented online allocation that produces nearly optimal solutions for this broad range of maximization and minimization objectives using only a single learned parameter for every agent. As corollaries of our general framework, we improve prior results of Lattanzi et al. (SODA 2020) and Li and Xian (ICML 2021) for learning-augmented makespan minimization, and obtain the first learning-augmented nearly-optimal algorithms for the other objectives, such as Santa Claus, Nash welfare, and $\ell_p$-minimization. We also give tight bounds on the resilience of our algorithms to errors in the learned parameters, and study the learnability of these parameters.
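To make the "single learned parameter per agent" idea concrete, here is a minimal illustrative sketch for the fractional makespan objective, in the spirit of the proportional-weight allocation of Lattanzi et al. (SODA 2020). This is not the paper's algorithm: the function names and the exact proportional rule are assumptions for illustration only.

```python
def allocate(weights, jobs):
    """Fractionally assign each arriving divisible job across machines.

    weights: learned per-machine parameters w_i > 0 (one per agent).
    jobs: list of processing-time vectors p, where p[i] is the time the
          job would take if run entirely on machine i.

    Each job is split so machine i receives a fraction proportional to
    w_i / p[i]; its load contribution p[i] * x_i is then proportional
    to w_i, so well-chosen weights balance the machine loads online.
    """
    m = len(weights)
    loads = [0.0] * m
    for p in jobs:
        z = sum(w / t for w, t in zip(weights, p))  # normalizer
        for i in range(m):
            x = (weights[i] / p[i]) / z   # fraction of this job to machine i
            loads[i] += p[i] * x          # load added = weights[i] / z
    return loads

# Two identical machines with equal learned weights: loads split evenly.
print(allocate([1.0, 1.0], [[2.0, 2.0], [4.0, 4.0]]))  # [3.0, 3.0]
```

Note that the load each machine absorbs per job, `weights[i] / z`, is independent of which machine the item is "cheap" on; this is what lets a single scalar per agent steer the whole allocation, and mis-specified weights degrade it gracefully rather than catastrophically.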