{"title":"cp3-bench: a tool for benchmarking symbolic regression algorithms demonstrated with cosmology","authors":"M.E. Thing and S.M. Koksbang","doi":"10.1088/1475-7516/2025/01/040","DOIUrl":null,"url":null,"abstract":"We introduce cp3-bench, a tool for comparing/benching symbolic regression algorithms, which we make publicly available at https://github.com/CP3-Origins/cp3-bench. In its current format, cp3-bench includes 12 different symbolic regression algorithms which can be automatically installed as part of cp3-bench. The philosophy behind cp3-bench is that is should be as user-friendly as possible, available in a ready-to-use format, and allow for easy additions of new algorithms and datasets. Our hope is that users of symbolic regression algorithms can use cp3-bench to easily install and compare/bench an array of symbolic regression algorithms to better decide which algorithms to use for their specific tasks at hand. To introduce and motivate the use of cp3-bench we present a small benchmark of 12 symbolic regression algorithms applied to 28 datasets representing six different cosmological and astroparticle physics setups. Overall, we find that most of the benched algorithms do rather poorly in the benchmark and suggest possible ways to proceed with developing algorithms that will be better at identifying ground truth expressions for cosmological and astroparticle physics datasets. Our demonstration benchmark specifically studies the significance of dimensionality of the feature space and precision of datasets. We find both to be highly important for symbolic regression tasks to be successful. On the other hand, we find no indication that inter-dependence of features in datasets is particularly important, meaning that it is not in general a hindrance for symbolic regression algorithms if datasets e.g. contain both z and H(z) as features. Lastly, we note that we find no indication that performance of algorithms on standardized datasets are good indicators of performance on particular cosmological and astrophysical datasets. This suggests that it is not necessarily prudent to choose symbolic regression algorithms based on their performance on standardized data. Instead, a more robust approach is to consider a variety of algorithms, chosen based on the particular task at hand that one wishes to apply symbolic regression to.","PeriodicalId":15445,"journal":{"name":"Journal of Cosmology and Astroparticle Physics","volume":"43 1","pages":""},"PeriodicalIF":5.3000,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Cosmology and Astroparticle Physics","FirstCategoryId":"101","ListUrlMain":"https://doi.org/10.1088/1475-7516/2025/01/040","RegionNum":2,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ASTRONOMY & ASTROPHYSICS","Score":null,"Total":0}
Abstract
We introduce cp3-bench, a tool for benchmarking and comparing symbolic regression algorithms, which we make publicly available at https://github.com/CP3-Origins/cp3-bench. In its current format, cp3-bench includes 12 different symbolic regression algorithms, which can be automatically installed as part of cp3-bench. The philosophy behind cp3-bench is that it should be as user-friendly as possible, available in a ready-to-use format, and allow for easy additions of new algorithms and datasets. Our hope is that users of symbolic regression algorithms can use cp3-bench to easily install and benchmark an array of symbolic regression algorithms and thereby better decide which algorithms to use for their specific tasks at hand. To introduce and motivate the use of cp3-bench, we present a small benchmark of 12 symbolic regression algorithms applied to 28 datasets representing six different cosmological and astroparticle physics setups. Overall, we find that most of the benchmarked algorithms perform rather poorly and suggest possible ways to proceed with developing algorithms that will be better at identifying ground truth expressions for cosmological and astroparticle physics datasets. Our demonstration benchmark specifically studies the significance of the dimensionality of the feature space and the precision of datasets. We find both to be highly important for symbolic regression tasks to be successful. On the other hand, we find no indication that inter-dependence of features in datasets is particularly important, meaning that it is not in general a hindrance for symbolic regression algorithms if datasets contain, e.g., both z and H(z) as features. Lastly, we note that we find no indication that the performance of algorithms on standardized datasets is a good indicator of performance on particular cosmological and astrophysical datasets. This suggests that it is not necessarily prudent to choose symbolic regression algorithms based on their performance on standardized data. Instead, a more robust approach is to consider a variety of algorithms, chosen based on the particular task one wishes to apply symbolic regression to.
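To illustrate the kind of task the benchmark evaluates, the sketch below fits an off-the-shelf symbolic regression library to synthetic H(z) data whose ground truth is the flat-ΛCDM Hubble rate H(z) = H0·sqrt(Ωm(1+z)³ + 1 − Ωm). This is a minimal, hypothetical example: gplearn is used here only for illustration and is not claimed to be one of the 12 algorithms wrapped by cp3-bench, and the interface and parameter values below do not reflect cp3-bench's actual API.

```python
# Hypothetical sketch (not cp3-bench's API): try to recover a known
# cosmological expression from synthetic data with a generic symbolic
# regression library. Ground truth: H(z) = H0 * sqrt(Om*(1+z)^3 + 1 - Om).
import numpy as np
from gplearn.genetic import SymbolicRegressor

# Synthetic dataset: redshift z as the single feature, H(z) as the target.
H0, Om = 70.0, 0.3                                  # assumed fiducial parameters
z = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
H = H0 * np.sqrt(Om * (1.0 + z[:, 0]) ** 3 + (1.0 - Om))

# A generic symbolic regressor; settings are purely illustrative.
est = SymbolicRegressor(
    population_size=2000,
    generations=30,
    function_set=("add", "sub", "mul", "div", "sqrt"),
    parsimony_coefficient=0.001,
    random_state=0,
)
est.fit(z, H)

# The fitted program is the candidate symbolic expression for H(z).
print("Recovered expression:", est._program)
```

A benchmarking tool in the spirit of cp3-bench would repeat this kind of fit across many algorithms and datasets (varying feature dimensionality and data precision) and compare the recovered expressions against the known ground truth.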
Journal description:
Journal of Cosmology and Astroparticle Physics (JCAP) encompasses theoretical, observational and experimental areas as well as computation and simulation. The journal covers the latest developments in the theory of all fundamental interactions and their cosmological implications (e.g. M-theory and cosmology, brane cosmology). JCAP's coverage also includes topics such as formation, dynamics and clustering of galaxies, pre-galactic star formation, x-ray astronomy, radio astronomy, gravitational lensing, active galactic nuclei, intergalactic and interstellar matter.