Although the criteria-based content analysis (CBCA) was not designed to distinguish true from false memories, there are several reasons to expect that it may differentiate between them. As, to the best of our knowledge, previous research has not provided a direct comparison of true and false memories, this study sought to do so. Memory reports of 52 participants were rated with the CBCA by two independent raters. Analyses were based on event reports rated as a memory (participants believed that the event had occurred and additionally reported remembered details about it) or as a belief (participants believed that the event had occurred without remembering details about it). In both samples, the CBCA total score was significantly higher for true than for false reports. Exploratory discriminant analyses yielded accuracy rates of 61.3%–69.6%, and additional analyses suggest that the cognitive (rather than motivational) criteria are the main drivers of the obtained differences. Further replications are needed.
{"title":"Differences Between True and False Memories Using the Criteria-Based Content Analysis","authors":"Merle Madita Wachendörfer, Aileen Oeberst","doi":"10.1002/acp.4246","DOIUrl":"https://doi.org/10.1002/acp.4246","url":null,"abstract":"<p>Although not designed for distinguishing true and false memories, several reasons argue for differences in the criteria-based content analysis (CBCA). As, to the best of our knowledge, previous research did not ensure a comparison between true and false memories, this study sought to do so. Memory reports of 52 participants were rated employing the CBCA by two independent raters. Analyses were based on event reports rated as a <i>memory</i> (where participants believed that the event had occurred and reported additionally remembered details about it) or reports rated as a <i>belief</i> (where participants believed that the event had occurred without remembering details about it). For both samples, the CBCA total score was significantly higher for true than false reports. Exploratory discriminant analyses revealed accuracy rates of 61.3%–69.6% and additional analyses hint towards the cognitive (vs. motivational) criteria as the main drivers of the obtained differences. Further replications are needed.</p>","PeriodicalId":48281,"journal":{"name":"Applied Cognitive Psychology","volume":"38 5","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/acp.4246","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142230929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Experts are expected to make well-calibrated judgments within their field, yet a voluminous literature demonstrates miscalibration in human judgment. Calibration training aimed at improving subsequent calibration performance offers a potential solution. We tested the effect of commercial calibration training on a group of 70 intelligence analysts by comparing the miscalibration and bias of their judgments before and after a training course intended to improve calibration on interval estimation and binary choice tasks. Training significantly reduced miscalibration and bias overall, but this effect was contingent on the task. For interval estimation, analysts were overconfident before training and better calibrated afterwards. For the binary choice task, however, analysts were initially underconfident, and bias increased in this same direction after training. Improvement on the two tasks was also uncorrelated. Taken together, the results indicate that the training shifted analysts' bias toward less confidence rather than improving metacognitive monitoring ability.
{"title":"The effect of calibration training on the calibration of intelligence analysts' judgments","authors":"Megan O. Kelly, David R. Mandel","doi":"10.1002/acp.4236","DOIUrl":"https://doi.org/10.1002/acp.4236","url":null,"abstract":"<p>Experts are expected to make well-calibrated judgments within their field, yet a voluminous literature demonstrates miscalibration in human judgment. Calibration training aimed at improving subsequent calibration performance offers a potential solution. We tested the effect of commercial calibration training on a group of 70 intelligence analysts by comparing the miscalibration and bias of their judgments before and after a commercial training course meant to improve calibration across interval estimation and binary choice tasks. Training significantly improved calibration and bias overall, but this effect was contingent on the task. For interval estimation, analysts were overconfident before training and became better calibrated after training. For the binary choice task, however, analysts were initially underconfident and bias increased in this same direction post-training. Improvement on the two tasks was also uncorrelated. Taken together, results indicate that the training shifted analyst bias toward less confidence rather than having improved metacognitive monitoring ability.</p>","PeriodicalId":48281,"journal":{"name":"Applied Cognitive Psychology","volume":"38 5","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/acp.4236","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142152170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial intelligence can now synthesise face images that people cannot distinguish from real faces. Here, we investigated the wisdom of the (outer) crowd (averaging individuals' responses to the same trial) and inner crowd (averaging the same individual's responses to the same trial after completing the test twice) as routes to increased performance. In Experiment 1, participants viewed synthetic and real faces, and rated whether they thought each face was synthetic or real using a 1–7 scale. Each participant completed the task twice. Inner crowds showed little benefit over individual responses, and we found no associations between performance and personality factors. However, performance increased with increasing outer-crowd size. In Experiment 2, participants judged each face only once, providing a binary ‘synthetic/real’ response, along with a confidence rating and an estimate of the percentage of other participants they thought agreed with their answer. We compared three methods of aggregation for outer crowd decisions, finding that the majority vote provided the best performance for small crowds. However, the ‘surprisingly popular’ solution outperformed the majority vote and the confidence-weighted approach for larger crowds. Taken together, we demonstrate that outer crowds are a robust way to improve synthetic face detection, comparable with previous approaches based on training interventions.
{"title":"Crowds Improve Human Detection of AI-Synthesised Faces","authors":"Robin S. S. Kramer, Charlotte Cartledge","doi":"10.1002/acp.4245","DOIUrl":"https://doi.org/10.1002/acp.4245","url":null,"abstract":"<p>Artificial intelligence can now synthesise face images which people cannot distinguish from real faces. Here, we investigated the wisdom of the (outer) crowd (averaging individuals' responses to the same trial) and inner crowd (averaging the same individual's responses to the same trial after completing the test twice) as routes to increased performance. In Experiment 1, participants viewed synthetic and real faces, and rated whether they thought each face was synthetic or real using a 1–7 scale. Each participant completed the task twice. Inner crowds showed little benefit over individual responses, and we found no associations between performance and personality factors. However, we found increases in performance with increasing sizes of outer crowd. In Experiment 2, participants judged each face only once, providing a binary ‘synthetic/real’ response, along with a confidence rating and an estimate of the percentage of other participants that they thought agreed with their answer. We compared three methods of aggregation for outer crowd decisions, finding that the majority vote provided the best performance for small crowds. However, the ‘surprisingly popular’ solution outperformed the majority vote and the confidence-weighted approach for larger crowds. Taken together, we demonstrate the use of outer crowds as a robust method of improvement during synthetic face detection, comparable with previous approaches based on training interventions.</p>","PeriodicalId":48281,"journal":{"name":"Applied Cognitive Psychology","volume":"38 5","pages":""},"PeriodicalIF":2.1,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/acp.4245","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142152259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}