Mind the Gap: Measuring Generalization Performance Across Multiple Objectives

Matthias Feurer, Katharina Eggensperger, Eddie Bergman, Florian Pfisterer, B. Bischl, F. Hutter
DOI: 10.48550/arXiv.2212.04183
Published: 2022-12-08
Venue: Advances in Intelligent Data Analysis (International Symposium on Intelligent Data Analysis), pp. 130-142
Citations: 2

Abstract

Modern machine learning models are often constructed taking into account multiple objectives, e.g., minimizing inference time while also maximizing accuracy. Multi-objective hyperparameter optimization (MHPO) algorithms return such candidate models, and the approximation of the Pareto front is used to assess their performance. In practice, we also want to measure generalization when moving from the validation to the test set. However, some of the models might no longer be Pareto-optimal, which makes it unclear how to quantify the performance of the MHPO method when evaluated on the test set. To resolve this, we provide a novel evaluation protocol that allows measuring the generalization performance of MHPO methods and studying its capabilities for comparing two optimization experiments.
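The core difficulty the abstract describes can be illustrated with a small sketch. This is not the evaluation protocol proposed in the paper, only a toy demonstration of the underlying problem: candidate models that form the Pareto front on validation objectives may no longer be mutually non-dominated once the same objectives are measured on the test set. The candidate scores below are made-up numbers for illustration.

```python
import numpy as np

def pareto_mask(points: np.ndarray) -> np.ndarray:
    """Boolean mask of non-dominated rows, assuming every objective is minimized."""
    n = points.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(points, i, axis=0)
        # Point i is dominated if some other point is <= in every objective
        # and strictly < in at least one.
        dominated = np.any(
            np.all(others <= points[i], axis=1) & np.any(others < points[i], axis=1)
        )
        mask[i] = not dominated
    return mask

# Hypothetical candidate models scored as (error, inference time); lower is better.
val = np.array([[0.10, 5.0], [0.12, 3.0], [0.15, 2.0], [0.11, 6.0], [0.20, 1.0]])
test = np.array([[0.13, 5.0], [0.11, 3.0], [0.16, 2.0], [0.12, 6.0], [0.21, 1.0]])

val_front = np.where(pareto_mask(val))[0]   # models an MHPO run would return
test_mask = pareto_mask(test[val_front])    # re-check those models on the test set
print("validation front:", val_front.tolist())
print("still Pareto-optimal on test:", val_front[test_mask].tolist())
```

In this toy run, model 0 is on the validation front but is dominated on the test set, which is exactly why a direct validation-front comparison does not carry over to test-set evaluation.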