An online semi-definite programming with a generalised log-determinant regularizer and its applications

Yaxiong Liu, Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto
{"title":"An online semi-definite programming with a generalised log-determinant regularizer and its applications","authors":"Yaxiong Liu, Ken-ichiro Moridomi, Kohei Hatano, Eiji Takimoto","doi":"10.3390/math10071055","DOIUrl":null,"url":null,"abstract":"We consider a variant of the online semi-definite programming problem (OSDP). Specifically, in our problem, the setting of the decision space is a set of positive semi-definite matrices constrained by two norms in parallel: the L∞ norm to the diagonal entries and the Γ-trace norm, which is a generalized trace norm with a positive definite matrix Γ. Our setting recovers the original one when Γ is an identity matrix. To solve this problem, we design a follow-the-regularized-leader algorithm with a Γ-dependent regularizer, which also generalizes the log-determinant function. Next, we focus on online binary matrix completion (OBMC) with side information and online similarity prediction with side information. By reducing to the OSDP framework and applying our proposed algorithm, we remove the logarithmic factors in the previous mistake bound of the above two problems. In particular, for OBMC, our bound is optimal. Furthermore, our result implies a better offline generalization bound for the algorithm, which is similar to those of SVMs with the best kernel, if the side information is involved in advance.","PeriodicalId":119756,"journal":{"name":"Asian Conference on Machine Learning","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Asian Conference on Machine Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3390/math10071055","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

We consider a variant of the online semi-definite programming problem (OSDP). Specifically, in our problem the decision space is a set of positive semi-definite matrices constrained by two norms in parallel: the L∞ norm on the diagonal entries and the Γ-trace norm, a generalized trace norm defined with a positive definite matrix Γ. Our setting recovers the original one when Γ is the identity matrix. To solve this problem, we design a follow-the-regularized-leader algorithm with a Γ-dependent regularizer that generalizes the log-determinant function. We then turn to online binary matrix completion (OBMC) with side information and online similarity prediction with side information. By reducing both problems to the OSDP framework and applying our proposed algorithm, we remove the logarithmic factors from their previously known mistake bounds. In particular, for OBMC our bound is optimal. Furthermore, our result implies a better offline generalization bound for the algorithm, comparable to that of SVMs with the best kernel, provided that the side information is given in advance.
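
To make the setting concrete, the following is a minimal LaTeX sketch of the decision space and a follow-the-regularized-leader (FTRL) update of the kind described above. The bounds β and τ, the learning rate η, the smoothing constant ε, and the loss matrices L_t are illustrative placeholders, and the displayed regularizer is only one plausible Γ-dependent generalization of the log-determinant function; the exact constraints, constants, and regularizer are those specified in the paper, not here.

% Decision space: positive semi-definite N x N matrices whose diagonal entries
% are bounded in L-infinity norm and whose Gamma-trace norm is bounded
% (beta, tau are hypothetical bounds; Gamma is positive definite).
\[
  \mathcal{K} \;=\; \Bigl\{\, X \in \mathbb{S}^{N}_{+} \;:\;
      \max_{i}\, X_{ii} \le \beta, \quad
      \operatorname{Tr}\!\bigl(\Gamma^{1/2} X \,\Gamma^{1/2}\bigr) \le \tau \,\Bigr\}.
\]

% FTRL update with a Gamma-dependent log-determinant-style regularizer
% (L_s is the loss matrix revealed in round s); for Gamma = I this reduces
% to the usual log-determinant regularizer.
\[
  X_{t+1} \;=\; \operatorname*{arg\,min}_{X \in \mathcal{K}}
      \;\sum_{s=1}^{t} \langle L_{s}, X \rangle
      \;+\; \frac{1}{\eta}\, R(X),
  \qquad
  R(X) \;=\; -\log\det\bigl(X + \epsilon\,\Gamma\bigr).
\]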