Lexicase-based Selection Methods with Down-sampling for Symbolic Regression Problems: Overview and Benchmark

Alina Geiger, Dominik Sobania, Franz Rothlauf
arXiv:2407.21632 · arXiv - CS - Neural and Evolutionary Computing · Published 2024-07-31
Citations: 0

Abstract

In recent years, several new lexicase-based selection variants have emerged due to the success of standard lexicase selection in various application domains. For symbolic regression problems, variants that use an epsilon-threshold or batches of training cases, among others, have led to performance improvements. Lately, especially variants that combine lexicase selection and down-sampling strategies have received a lot of attention. This paper evaluates random as well as informed down-sampling in combination with the relevant lexicase-based selection methods on a wide range of symbolic regression problems. In contrast to most work, we not only compare the methods over a given evaluation budget, but also over a given time, as time is usually limited in practice. We find that for a given evaluation budget, epsilon-lexicase selection in combination with random or informed down-sampling outperforms all other methods. Only for a rather long running time of 24h is the best-performing method tournament selection in combination with informed down-sampling. If the given running time is very short, lexicase variants using batches of training cases perform best.
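The abstract combines two ideas: epsilon-lexicase selection (filtering candidates case by case, keeping those within an epsilon of the best error) and random down-sampling (evaluating only a subset of training cases each selection). A minimal sketch of how the two compose; the function name, signature, and error-matrix layout are illustrative assumptions, not the paper's actual implementation:

```python
import random

def epsilon_lexicase_select(population, errors, epsilon, sample_frac=1.0, rng=random):
    """Pick one parent via epsilon-lexicase selection on a random
    down-sample of the training cases.

    errors[i][t] is the error of individual i on training case t.
    Individuals within `epsilon` of the best error on a case survive
    that filtering step; `sample_frac` controls random down-sampling.
    """
    n_cases = len(errors[0])
    k = max(1, int(sample_frac * n_cases))
    # random.sample both down-samples the cases and yields them in a
    # random order, which is the case ordering lexicase selection needs.
    cases = rng.sample(range(n_cases), k)

    candidates = list(population)
    for case in cases:
        if len(candidates) == 1:
            break
        best = min(errors[i][case] for i in candidates)
        # Keep every candidate whose error is within epsilon of the best.
        candidates = [i for i in candidates if errors[i][case] <= best + epsilon]
    return rng.choice(candidates)
```

With `epsilon = 0` this reduces to standard lexicase selection on the down-sampled cases; a larger epsilon keeps near-best candidates in play, which is what makes the method effective on continuous-valued symbolic regression errors.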