How Diverse Initial Samples Help and Hurt Bayesian Optimizers
Eesh Kamrah, Fatemeh Ghoreishi, Zijian Ding, Joel Chan, M. Fuge
Journal of Mechanical Design · DOI: 10.1115/1.4063006 · Published July 20, 2023
Design researchers have struggled to produce quantitative predictions for exactly why and when diversity might help or hinder design search efforts. This paper addresses that problem by studying one ubiquitously used search strategy -- Bayesian Optimization (BO) -- on a 2D test problem with modifiable difficulty. Specifically, we test how providing diverse versus non-diverse initial samples to BO affects its performance during search and introduce a fast DPP sampling method for computing diverse sets to detect sets of highly diverse or non-diverse initial samples. We initially found, to our surprise, that diversity did not affect BO, neither helping nor hurting its convergence. However, follow-on experiments illuminated a trade-off. Non-diverse initial samples hastened posterior convergence for the underlying model hyper-parameters -- a Model Building advantage. In contrast, diverse initial samples accelerated exploring the function itself -- a Space Exploration advantage. Both advantages help BO, but in different ways, and the initial sample diversity modulates how BO trades those advantages. We show that fixing the BO hyper-parameters removes the Model Building advantage, causing models initialized with diverse samples to consistently outperform those initialized with non-diverse samples. These findings shed light on why, at least for BO-type optimizers, the use of diversity has mixed effects, and they caution against the ubiquitous use of space-filling initializations in BO. To the extent that humans use explore-exploit search strategies similar to BO, our results provide a testable conjecture for why and when diversity may affect human-subject or design team experiments.
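To make the diverse-versus-non-diverse distinction concrete, the following is a minimal illustrative sketch, not the authors' implementation: it draws a diverse initial set using a greedy MAP approximation to a k-DPP under an assumed RBF similarity kernel, and contrasts it with a deliberately clustered, non-diverse initialization. The candidate pool, kernel lengthscale, and all function names are illustrative assumptions; the paper's fast DPP sampler is not reproduced here.

```python
import numpy as np

def rbf_kernel(X, lengthscale=0.2):
    """Similarity (L-ensemble) kernel: nearby points count as 'similar'."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def greedy_dpp_select(X, k, lengthscale=0.2):
    """Greedy MAP approximation to a k-DPP: repeatedly add the candidate that
    most increases the log-determinant of the selected kernel block, which
    favors mutually dissimilar (i.e., diverse) points."""
    L = rbf_kernel(X, lengthscale)
    selected = []
    for _ in range(k):
        best_i, best_logdet = None, -np.inf
        for i in range(len(X)):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best_i, best_logdet = i, logdet
        selected.append(best_i)
    return np.array(selected)

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(500, 2))  # 2D design space in [0, 1]^2

# Diverse initial samples: greedy k-DPP selection over the candidate pool.
diverse_init = candidates[greedy_dpp_select(candidates, k=8)]

# Non-diverse initial samples: points clustered in a small neighborhood.
center = rng.uniform(0.2, 0.8, size=2)
non_diverse_init = center + 0.02 * rng.standard_normal((8, 2))

print("diverse spread:    ", diverse_init.std(axis=0))
print("non-diverse spread:", non_diverse_init.std(axis=0))
```

Either initial set would then seed a standard BO loop (for example, a Gaussian-process surrogate with an expected-improvement acquisition); the paper's comparison concerns how these two initializations trade the Model Building advantage against the Space Exploration advantage.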
Journal Introduction:
The Journal of Mechanical Design (JMD) serves the broad design community as the venue for scholarly, archival research in all aspects of the design activity with emphasis on design synthesis. JMD has traditionally served the ASME Design Engineering Division and its technical committees, but it welcomes contributions from all areas of design with emphasis on synthesis. JMD communicates original contributions, primarily in the form of research articles of considerable depth, but also technical briefs, design innovation papers, book reviews, and editorials.