Online learning under one sided \(\sigma \)-smooth function

IF 0.9 · Region 4 (Mathematics) · JCR Q4 (Computer Science, Interdisciplinary Applications) · Journal of Combinatorial Optimization · Pub Date: 2024-05-18 · DOI: 10.1007/s10878-024-01174-2
Hongxiang Zhang, Dachuan Xu, Ling Gai, Zhenning Zhang
{"title":"Online learning under one sided $$\\sigma $$ -smooth function","authors":"Hongxiang Zhang, Dachuan Xu, Ling Gai, Zhenning Zhang","doi":"10.1007/s10878-024-01174-2","DOIUrl":null,"url":null,"abstract":"<p>The online optimization model was first introduced in the research of machine learning problems (Zinkevich, Proceedings of ICML, 928–936, 2003). It is a powerful framework that combines the principles of optimization with the challenges of online decision-making. The present research mainly consider the case that the reveal objective functions are convex or submodular. In this paper, we focus on the online maximization problem under a special objective function <span>\\(\\varPhi (x):[0,1]^n\\rightarrow \\mathbb {R}_{+}\\)</span> which satisfies the inequality <span>\\(\\frac{1}{2}\\langle u^{T}\\nabla ^{2}\\varPhi (x),u\\rangle \\le \\sigma \\cdot \\frac{\\Vert u\\Vert _{1}}{\\Vert x\\Vert _{1}}\\langle u,\\nabla \\varPhi (x)\\rangle \\)</span> for any <span>\\(x,u\\in [0,1]^n, x\\ne 0\\)</span>. This objective function is named as one sided <span>\\(\\sigma \\)</span>-smooth (OSS) function. We achieve two conclusions here. Firstly, under the assumption that the gradient function of OSS function is L-smooth, we propose an <span>\\((1-\\exp ((\\theta -1)(\\theta /(1+\\theta ))^{2\\sigma }))\\)</span>- approximation algorithm with <span>\\(O(\\sqrt{T})\\)</span> regret upper bound, where <i>T</i> is the number of rounds in the online algorithm and <span>\\(\\theta , \\sigma \\in \\mathbb {R}_{+}\\)</span> are parameters. Secondly, if the gradient function of OSS function has no L-smoothness, we provide an <span>\\(\\left( 1+((\\theta +1)/\\theta )^{4\\sigma }\\right) ^{-1}\\)</span>-approximation projected gradient algorithm, and prove that the regret upper bound of the algorithm is <span>\\(O(\\sqrt{T})\\)</span>. We think that this research can provide different ideas for online non-convex and non-submodular learning.</p>","PeriodicalId":50231,"journal":{"name":"Journal of Combinatorial Optimization","volume":"48 1","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Combinatorial Optimization","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1007/s10878-024-01174-2","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

The online optimization model was first introduced in the research of machine learning problems (Zinkevich, Proceedings of ICML, 928–936, 2003). It is a powerful framework that combines the principles of optimization with the challenges of online decision-making. Existing research mainly considers the case in which the revealed objective functions are convex or submodular. In this paper, we focus on the online maximization problem under a special objective function \(\varPhi (x):[0,1]^n\rightarrow \mathbb {R}_{+}\) which satisfies the inequality \(\frac{1}{2}\langle u^{T}\nabla ^{2}\varPhi (x),u\rangle \le \sigma \cdot \frac{\Vert u\Vert _{1}}{\Vert x\Vert _{1}}\langle u,\nabla \varPhi (x)\rangle \) for any \(x,u\in [0,1]^n, x\ne 0\). Such an objective function is called a one sided \(\sigma \)-smooth (OSS) function. We obtain two results. First, under the assumption that the gradient of the OSS function is L-smooth, we propose a \((1-\exp ((\theta -1)(\theta /(1+\theta ))^{2\sigma }))\)-approximation algorithm with an \(O(\sqrt{T})\) regret upper bound, where \(T\) is the number of rounds of the online algorithm and \(\theta , \sigma \in \mathbb {R}_{+}\) are parameters. Second, if the gradient of the OSS function is not L-smooth, we provide a \(\left( 1+((\theta +1)/\theta )^{4\sigma }\right) ^{-1}\)-approximation projected gradient algorithm and prove that its regret upper bound is \(O(\sqrt{T})\). We believe that this research can provide new ideas for online non-convex and non-submodular learning.
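To make the OSS condition concrete, here is a small illustration of our own (it does not appear in the abstract): on \([0,1]^n\), the function \(\varPhi (x)=\Vert x\Vert _{1}^{p}\) with \(p\ge 1\) is one sided \(\sigma \)-smooth with \(\sigma =(p-1)/2\). Writing \(s=\Vert x\Vert _{1}\) and noting that \(x,u\in [0,1]^n\) are componentwise nonnegative, so \(\langle \mathbf {1},u\rangle =\Vert u\Vert _{1}\), we have
\[
\nabla \varPhi (x)=p\,s^{p-1}\mathbf {1},\qquad \nabla ^{2}\varPhi (x)=p(p-1)\,s^{p-2}\,\mathbf {1}\mathbf {1}^{T},
\]
so
\[
\frac{1}{2}\langle u^{T}\nabla ^{2}\varPhi (x),u\rangle =\frac{1}{2}\,p(p-1)\,s^{p-2}\,\Vert u\Vert _{1}^{2}\le \sigma \,p\,s^{p-2}\,\Vert u\Vert _{1}^{2}=\sigma \cdot \frac{\Vert u\Vert _{1}}{\Vert x\Vert _{1}}\langle u,\nabla \varPhi (x)\rangle
\]
whenever \(\sigma \ge (p-1)/2\).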

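The abstract does not spell out the algorithms themselves, so the following is only a minimal sketch of generic online projected gradient ascent over \([0,1]^n\), the family to which the proposed projected gradient algorithm belongs. The step size, function names, and toy linear objectives below are our own assumptions; the paper's actual update rule, the parameters \(\theta , \sigma \), and the approximation and regret guarantees should be taken from the full text.

```python
import numpy as np

def project_box(x):
    """Euclidean projection onto the feasible region [0, 1]^n."""
    return np.clip(x, 0.0, 1.0)

def online_projected_gradient_ascent(grad_oracles, n, eta=None):
    """Generic online projected gradient ascent sketch (not the paper's algorithm).

    grad_oracles[t](x) returns the gradient of the round-t objective at x,
    revealed only after the point x_t has been played.
    """
    T = len(grad_oracles)
    if eta is None:
        eta = 1.0 / np.sqrt(T)        # hypothetical 1/sqrt(T) step size, the usual
                                      # choice behind O(sqrt(T)) regret bounds
    x = np.full(n, 0.5)               # arbitrary interior starting point
    plays = []
    for t in range(T):
        plays.append(x.copy())        # commit to x_t before the objective is revealed
        g = grad_oracles[t](x)        # gradient of the revealed objective at x_t
        x = project_box(x + eta * g)  # ascent step (maximization), then projection
    return plays

# Toy usage: linear objectives Phi_t(x) = <c_t, x>, whose gradient is the constant c_t.
rng = np.random.default_rng(0)
oracles = [(lambda x, c=rng.random(4): c) for _ in range(100)]
decisions = online_projected_gradient_ascent(oracles, n=4)
```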

Source Journal
Journal of Combinatorial Optimization (Mathematics - Computer Science: Interdisciplinary Applications)
CiteScore: 2.00
Self-citation rate: 10.00%
Articles published: 83
Review time: 6 months
About the journal: The objective of Journal of Combinatorial Optimization is to advance and promote the theory and applications of combinatorial optimization, which is an area of research at the intersection of applied mathematics, computer science, and operations research and which overlaps with many other areas such as computational complexity, computational biology, VLSI design, communication networks, and management science. It includes complexity analysis and algorithm design for combinatorial optimization problems, numerical experiments and problem discovery with applications in science and engineering. The Journal of Combinatorial Optimization publishes refereed papers dealing with all theoretical, computational and applied aspects of combinatorial optimization. It also publishes reviews of appropriate books and special issues of journals.
Latest articles from this journal:
Enhanced deterministic approximation algorithm for non-monotone submodular maximization under knapsack constraint with linear query complexity
A novel arctic fox survival strategy inspired optimization algorithm
Dynamic time window based full-view coverage maximization in CSNs
Different due-window assignment scheduling with deterioration effects
An upper bound for neighbor-connectivity of graphs