How Do Designers Choose Among Multiple Noisy Information Sources in Engineering Design Optimization? An Experimental Study

Ashish M. Chaudhari, Ilias Bilionis, Jitesh H. Panchal

Volume 2A: 44th Design Automation Conference
Published: 2018-08-26 · DOI: 10.1115/DETC2018-85460 · Citations: 3
Abstract
Designers make process-level decisions to (i) select designs for performance evaluation, (ii) select an information source, and (iii) decide whether to stop design exploration. These decisions are influenced by problem-related factors, such as the costs and uncertainty of the available information sources, and by budget constraints on design evaluations. The objective of this paper is to analyze individuals' strategies for making process-level decisions when noisy information sources of differing cost and uncertainty are available and the budget is limited. Our approach involves (a) conducting a behavioral experiment with an engineering optimization task to collect data on subjects' decision strategies, (b) eliciting their decision strategies through a survey, and (c) performing a descriptive analysis that compares the elicited strategies with observations from the data. We observe that subjects use specific criteria, such as fixed attribute values, the highest predicted performance, the highest uncertainty in performance, and attribute thresholds, when making the decisions of interest. When subjects have a larger budget, they are less likely to evaluate points with the highest predicted performance and more likely to evaluate points with the highest uncertainty in performance. Further, subjects conduct expensive evaluations even when their decisions have not sufficiently converged to the region of maximum performance in the design space and the improvement available from additional cheap evaluations is still large. The implications of these results for identifying deviations from optimal strategies and for structuring decisions in further model development are discussed.
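The two point-selection criteria the abstract contrasts, evaluating the design with the highest predicted performance (exploitation) versus the design with the highest uncertainty in performance (exploration), can be illustrated with a minimal Gaussian-process surrogate. This is a generic sketch, not the paper's experimental setup: the squared-exponential kernel, the length scale, the noise level, and the toy 1-D data are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.15):
    # Squared-exponential kernel between the rows of A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def gp_posterior(X_train, y_train, X_cand, noise=1e-2):
    # Standard GP regression: posterior mean and variance at candidate points.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_cand)
    Kss_diag = np.ones(len(X_cand))          # k(x, x) = 1 for the RBF kernel
    alpha = np.linalg.solve(K, y_train)
    mean = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = Kss_diag - np.sum(Ks * v, axis=0)
    return mean, np.maximum(var, 0.0)

# Toy 1-D design space with three noisy evaluations already made.
X_train = np.array([[0.1], [0.5], [0.9]])
y_train = np.array([0.2, 0.8, 0.3])
X_cand = np.linspace(0.0, 1.0, 101)[:, None]

mean, var = gp_posterior(X_train, y_train, X_cand)
exploit = float(X_cand[np.argmax(mean), 0])  # "highest predicted performance"
explore = float(X_cand[np.argmax(var), 0])   # "highest uncertainty in performance"
```

Here `exploit` lands near the best observed design (x = 0.5), while `explore` lands in a gap between evaluated designs where the surrogate is least certain. The abstract's observation that larger budgets shift subjects from exploit-type toward explore-type evaluations corresponds to favoring the second criterion when more evaluations remain affordable.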