Improved analysis of supervised learning in the RKHS with random features: Beyond least squares.

IF 6.0 · CAS Tier 1 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Neural Networks · Pub Date: 2025-01-08 · DOI: 10.1016/j.neunet.2024.107091
Jiamin Liu, Lei Wang, Heng Lian
{"title":"具有随机特征的RKHS中监督学习的改进分析:超越最小二乘。","authors":"Jiamin Liu, Lei Wang, Heng Lian","doi":"10.1016/j.neunet.2024.107091","DOIUrl":null,"url":null,"abstract":"<p><p>We consider kernel-based supervised learning using random Fourier features, focusing on its statistical error bounds and generalization properties with general loss functions. Beyond the least squares loss, existing results only demonstrate worst-case analysis with rate n<sup>-1/2</sup> and the number of features at least comparable to n, and refined-case analysis where it can achieve almost n<sup>-1</sup> rate when the kernel's eigenvalue decay is exponential and the number of features is again at least comparable to n. For the least squares loss, the results are much richer and the optimal rates can be achieved under the source and capacity assumptions, with the number of features smaller than n. In this paper, for both losses with Lipschitz derivative and Lipschitz losses, we successfully establish faster rates with number of features much smaller than n, which are the same as the rates and number of features for the least squares loss. More specifically, in the attainable case (the true function is in the RKHS), we obtain the rate n<sup>-2ξ2ξ+γ</sup> which is the same as the standard method without using approximation, using o(n) features, where ξ characterizes the smoothness of the true function and γ characterizes the decay rate of the eigenvalues of the integral operator. Thus our results answer an important open question regarding random features.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"184 ","pages":"107091"},"PeriodicalIF":6.0000,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Improved analysis of supervised learning in the RKHS with random features: Beyond least squares.\",\"authors\":\"Jiamin Liu, Lei Wang, Heng Lian\",\"doi\":\"10.1016/j.neunet.2024.107091\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>We consider kernel-based supervised learning using random Fourier features, focusing on its statistical error bounds and generalization properties with general loss functions. Beyond the least squares loss, existing results only demonstrate worst-case analysis with rate n<sup>-1/2</sup> and the number of features at least comparable to n, and refined-case analysis where it can achieve almost n<sup>-1</sup> rate when the kernel's eigenvalue decay is exponential and the number of features is again at least comparable to n. For the least squares loss, the results are much richer and the optimal rates can be achieved under the source and capacity assumptions, with the number of features smaller than n. In this paper, for both losses with Lipschitz derivative and Lipschitz losses, we successfully establish faster rates with number of features much smaller than n, which are the same as the rates and number of features for the least squares loss. More specifically, in the attainable case (the true function is in the RKHS), we obtain the rate n<sup>-2ξ2ξ+γ</sup> which is the same as the standard method without using approximation, using o(n) features, where ξ characterizes the smoothness of the true function and γ characterizes the decay rate of the eigenvalues of the integral operator. 
Thus our results answer an important open question regarding random features.</p>\",\"PeriodicalId\":49763,\"journal\":{\"name\":\"Neural Networks\",\"volume\":\"184 \",\"pages\":\"107091\"},\"PeriodicalIF\":6.0000,\"publicationDate\":\"2025-01-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neural Networks\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1016/j.neunet.2024.107091\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1016/j.neunet.2024.107091","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract


We consider kernel-based supervised learning using random Fourier features, focusing on its statistical error bounds and generalization properties under general loss functions. Beyond the least squares loss, existing results only provide a worst-case analysis with rate n^(-1/2), requiring a number of features at least comparable to n, and a refined analysis achieving an almost n^(-1) rate when the kernel's eigenvalues decay exponentially, again with the number of features at least comparable to n. For the least squares loss, the results are much richer: optimal rates can be achieved under the source and capacity assumptions with fewer than n features. In this paper, for both losses with Lipschitz derivative and Lipschitz losses, we establish faster rates with a number of features much smaller than n, matching the rates and feature counts known for the least squares loss. More specifically, in the attainable case (the true function lies in the RKHS), we obtain the rate n^(-2ξ/(2ξ+γ)) using o(n) features, the same rate as the standard method without approximation, where ξ characterizes the smoothness of the true function and γ characterizes the decay rate of the eigenvalues of the integral operator. Our results thus answer an important open question regarding random features.
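To read the exponent: in this line of work, γ typically parameterizes the capacity of the kernel, with the eigenvalues of the integral operator decaying polynomially at rate roughly i^(-1/γ), while larger ξ corresponds to a smoother, more favorable true function; the exact conventions should be checked against the paper. Note that as γ shrinks toward 0 (very fast eigenvalue decay), the exponent 2ξ/(2ξ+γ) tends to 1, consistent with the almost n^(-1) rate mentioned above for exponential decay.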
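The following is a minimal, generic sketch of the random Fourier features construction the paper analyzes (a Rahimi-Recht-style approximation of the Gaussian kernel combined with ridge regression under squared loss; this is not the authors' code, and the feature count m, bandwidth sigma, and penalty lam are placeholder choices):

import numpy as np

def random_fourier_features(X, m, sigma, rng):
    # Draw frequencies so that z(x) . z(y) approximates the Gaussian kernel
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, m))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
n, d, m = 1000, 5, 100  # m is much smaller than n, in the spirit of the o(n) feature regime
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Ridge regression in the m-dimensional random-feature space:
# solve (Z^T Z + n * lam * I) w = Z^T y, an m x m system instead of n x n.
Z = random_fourier_features(X, m, sigma=1.0, rng=rng)
lam = 1e-3
w = np.linalg.solve(Z.T @ Z + n * lam * np.eye(m), Z.T @ y)
y_hat = Z @ w  # fitted values on the training inputs

Because m is much smaller than n, the normal equations form an m x m system, so the computational cost is driven by m rather than n; the paper's contribution is that, even for losses beyond least squares, m = o(n) features can suffice to retain the n^(-2ξ/(2ξ+γ)) learning rate of the exact kernel method.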

Source journal
Neural Networks (Engineering Technology - Computer Science: Artificial Intelligence)
CiteScore: 13.90 · Self-citation rate: 7.70% · Annual articles: 425 · Review period: 67 days
About the journal: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.