Title: Calibration Feedback With the Practical Scoring Rule Does Not Improve Calibration of Confidence
Authors: Matthew Martin, David R. Mandel
Journal: Futures & Foresight Science, 7(1)
DOI: 10.1002/ffo2.199 (https://onlinelibrary.wiley.com/doi/10.1002/ffo2.199)
Published: 2024-11-08 (Journal Article)
Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ffo2.199
Citations: 0
Abstract
People are often overconfident in their probabilistic judgments of future events or of the state of their own knowledge. Some training methods have proven effective at reducing this bias, but they usually involve intensive training sessions with experienced facilitators, which is not conducive to a scalable, domain-general training program for improving calibration. In two experiments (N1 = 610, N2 = 871), we examined the effectiveness of a performance-feedback calibration-training paradigm based on the Practical scoring rule, a modification of the logarithmic scoring rule designed to be more intuitive and thereby to facilitate learning. We compared this training regime to a control group and an outcome-feedback group. Participants were tasked with selecting which of two world urban agglomerations had the larger population and with providing their confidence in that choice. The outcome-feedback group received information about the correctness of their choice on a trial-by-trial basis, as well as a summary of their percent correct after each experimental block. The performance-feedback group received this information plus the Practical score on a trial-by-trial basis and a summary of their overall over- or underconfidence at the end of each block. We also examined whether Actively Open-Minded Thinking (AOMT) predicted calibration and its change across blocks. We found no improvement in calibration due to either training regime. Good overall calibration was predicted by AOMT, but change in calibration across blocks was not. The results shed light on the generalizability of other findings showing positive effects of performance training using the Practical scoring rule.
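The feedback quantities described above can be sketched in code. The abstract does not give the formula for the Practical scoring rule itself, so the sketch below uses the standard logarithmic scoring rule as a stand-in for the trial-by-trial score, plus a block-level over/underconfidence summary (mean confidence minus proportion correct) of the kind reported to the performance-feedback group; the function names and example data are illustrative assumptions, not the authors' materials.

```python
import math

def log_score(confidence, correct):
    """Logarithmic scoring rule for a binary choice.

    `confidence` is the probability (0.5-1.0) assigned to the chosen
    option; the score is the log of the probability assigned to the
    true outcome. (Stand-in for the Practical score, whose exact
    formula is not given in the abstract.)
    """
    p = confidence if correct else 1.0 - confidence
    return math.log(p)

def block_overconfidence(trials):
    """Block-level calibration summary: mean confidence minus
    proportion correct. Positive values indicate overconfidence,
    negative values underconfidence."""
    mean_conf = sum(conf for conf, _ in trials) / len(trials)
    prop_correct = sum(1 for _, ok in trials if ok) / len(trials)
    return mean_conf - prop_correct

# Hypothetical block of (confidence, was_correct) trials.
trials = [(0.9, True), (0.8, False), (0.7, True), (0.6, True)]
bias = block_overconfidence(trials)
```

In this example the mean confidence (0.75) matches the proportion correct (0.75), so the block is well calibrated and `bias` is approximately zero; systematically high confidence with the same accuracy would push `bias` positive.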