The Use of Mnemonics to Minimize the Interfering Effects of Teaching New Words in Semantic Sets to Learners of English as a Foreign Language
Mustafa Sarıoğlu, Çiğdem Karatepe
Applied Cognitive Psychology, DOI: 10.1002/acp.4251, published 2024-09-23

Most studies in the literature propose that new words should be presented in unrelated sets because of the interfering effect of learning vocabulary in semantic sets. Semantically related words are therefore recommended to be taught in separate sessions to avoid this negative effect. However, this is impractical for most second language (L2) teachers owing to restrictions imposed by curricula or coursebooks, most of which present new words in semantic fields. The literature offers little guidance on how to tackle this problem. Accordingly, this study comprises three sets of classroom research conducted with 58 young EFL learners to investigate the effects of mnemonics on minimizing the interference caused by semantic clustering of new vocabulary. Over a 15-week course, one intact class was taught the target words through mnemonics, while the control group received similar instruction via the sentence-context method. The results demonstrated that the mnemonically instructed L2 learners outperformed the control group on both immediate and delayed recognition of the target words.
Toward Sustainable Lifelong Learning: Feedforward Effects of Challenge Recollections on Adult Learning Identity
Ziyu Qi, Sibley F. Lyndgaard, Julia E. Melkers, Ruth Kanfer
Applied Cognitive Psychology, DOI: 10.1002/acp.4248, published 2024-09-23

Little research has examined how prior learning experiences influence adult learning attitudes and lifelong learning engagement. We adopted a person-centric approach to examine past work-related learning experiences and assessed the effects of recalled challenges on current learning attitudes, intentions, and behavior in the same domain. Surveying alumni from an online master's degree program, we found that recollected challenges from past learning entail multifaceted challenge foci (e.g., curriculum-related vs. social obstacles). Learners reporting more challenges in the curriculum and social dimensions reported less positive attitudes toward lifelong learning, supporting the notion that negative learning experiences may hinder the development of self-identity as a lifelong learner. Limited support was obtained for predictions about relationships between past challenges and post-graduation learning intentions and behavior. The person-centric approach also permits the analysis of past learning experiences that are not well captured by standard assessments of "successful" adult learning.
A Cross-Cultural and Intra-Cultural Investigation of the Misinformation Effect in Eyewitness Memory Reports
Nkansah Anakwah, Robert Horselenberg, Lorraine Hope, Margaret Amankwah-Poku, Peter J. van Koppen
Applied Cognitive Psychology, DOI: 10.1002/acp.4243, published 2024-09-17

The culture in which individuals are socialised can play a role in shaping their eyewitness memory reports. Drawing on self-construal theory, we examined cultural differences in the misinformation effect. In a mock witness paradigm, participants sampled from collectivistic (Ghana; n = 65) and individualistic (United Kingdom; n = 62) cultures were exposed to misleading post-event information (PEI). Participants provided a free-recall account and then completed a recognition task that included misinformation items. Cultural differences in misinformation endorsement were not observed in free recall. However, participants from the collectivistic culture endorsed more misleading items in the recognition task than those from the individualistic culture. We also found that, in the respective cultures, individual-level cultural orientation was related to the misinformation effect. These findings provide preliminary insights into the role of culture in susceptibility to misleading PEI and further highlight the importance of eliminating leading or suggestive questioning from investigative interviewing practices.
How to Help Students Make Informed Assessments of Cognitive Load: Examining the Role of Training Interventions
Felix Krieglstein, Manuel Schmitz, Lukas Wesenberg, Günter Daniel Rey
Applied Cognitive Psychology, DOI: 10.1002/acp.4247, published 2024-09-16

Measuring cognitive load and its different types is a significant challenge that is closely related to the development of cognitive load theory (CLT). Previous research has shown that students have difficulty assessing cognitive load after learning or problem-solving. Accordingly, they may not reliably differentiate between the different types of cognitive load. Moreover, students may not consider the entire problem-solving process in their overall cognitive load assessment. The purpose of this work was to examine two training interventions designed to assist students in making informed cognitive load assessments. Study 1 (N = 99) included pre-training with a theoretical introduction to CLT to improve differentiation between cognitive load types. Study 2 (N = 80) implemented post-training by instructing students to consider all impressions during problem-solving in the overall load assessment. As both interventions were unsuccessful, further research is needed to assist students in assessing cognitive load in an informed manner.
Differences Between True and False Memories Using the Criteria-Based Content Analysis
Merle Madita Wachendörfer, Aileen Oeberst
Applied Cognitive Psychology, DOI: 10.1002/acp.4246, published 2024-09-13

Although not designed for distinguishing true and false memories, several reasons argue for differences in the criteria-based content analysis (CBCA). As, to the best of our knowledge, previous research did not ensure a comparison between true and false memories, this study sought to do so. Memory reports of 52 participants were rated with the CBCA by two independent raters. Analyses were based on event reports rated as a memory (where participants believed that the event had occurred and reported additionally remembered details about it) or reports rated as a belief (where participants believed that the event had occurred without remembering details about it). For both samples, the CBCA total score was significantly higher for true than for false reports. Exploratory discriminant analyses revealed accuracy rates of 61.3%–69.6%, and additional analyses point towards the cognitive (vs. motivational) criteria as the main drivers of the obtained differences. Further replications are needed.
The Effect of Calibration Training on the Calibration of Intelligence Analysts' Judgments
Megan O. Kelly, David R. Mandel
Applied Cognitive Psychology, DOI: 10.1002/acp.4236, published 2024-09-07

Experts are expected to make well-calibrated judgments within their field, yet a voluminous literature demonstrates miscalibration in human judgment. Calibration training aimed at improving subsequent calibration performance offers a potential solution. We tested the effect of commercial calibration training on a group of 70 intelligence analysts by comparing the miscalibration and bias of their judgments before and after a commercial training course meant to improve calibration across interval estimation and binary choice tasks. Training significantly improved calibration and bias overall, but this effect was contingent on the task. For interval estimation, analysts were overconfident before training and became better calibrated after training. For the binary choice task, however, analysts were initially underconfident, and bias increased in this same direction post-training. Improvement on the two tasks was also uncorrelated. Taken together, the results indicate that the training shifted analyst bias toward less confidence rather than improving metacognitive monitoring ability.
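The over/underconfidence bias contrasted in this abstract is conventionally computed as mean stated confidence minus proportion correct. A minimal sketch of that standard measure, using hypothetical data rather than the authors' actual analysis:

```python
def bias(confidences, correct):
    """Over/underconfidence bias: mean confidence minus accuracy.

    confidences: stated probabilities of being correct (0-1).
    correct: 1 if the corresponding judgment was right, else 0.
    Positive values indicate overconfidence, negative underconfidence.
    """
    assert len(confidences) == len(correct) and confidences
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy

# Hypothetical analyst: four binary judgments at 60% confidence,
# three of four correct -> bias = 0.60 - 0.75 = -0.15 (underconfident).
b = bias([0.6, 0.6, 0.6, 0.6], [1, 1, 1, 0])
```

On this measure, "better calibrated" means a bias closer to zero, which is how a shift toward less confidence can improve one task (initially overconfident intervals) while worsening the other (initially underconfident binary choices).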
Crowds Improve Human Detection of AI-Synthesised Faces
Robin S. S. Kramer, Charlotte Cartledge
Applied Cognitive Psychology, DOI: 10.1002/acp.4245, published 2024-09-05

Artificial intelligence can now synthesise face images which people cannot distinguish from real faces. Here, we investigated the wisdom of the (outer) crowd (averaging individuals' responses to the same trial) and the inner crowd (averaging the same individual's responses to the same trial after completing the test twice) as routes to increased performance. In Experiment 1, participants viewed synthetic and real faces, and rated whether they thought each face was synthetic or real using a 1–7 scale. Each participant completed the task twice. Inner crowds showed little benefit over individual responses, and we found no associations between performance and personality factors. However, we found increases in performance with increasing sizes of outer crowd. In Experiment 2, participants judged each face only once, providing a binary 'synthetic/real' response, along with a confidence rating and an estimate of the percentage of other participants that they thought agreed with their answer. We compared three methods of aggregation for outer crowd decisions, finding that the majority vote provided the best performance for small crowds. However, the 'surprisingly popular' solution outperformed the majority vote and the confidence-weighted approach for larger crowds. Taken together, we demonstrate the use of outer crowds as a robust method of improvement during synthetic face detection, comparable with previous approaches based on training interventions.
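Two of the aggregation rules compared in Experiment 2 can be sketched for the binary real/synthetic case. This is an illustrative implementation of the standard majority-vote and "surprisingly popular" rules (the latter picks the answer whose actual vote share exceeds the share the crowd predicted it would get), not the authors' analysis code; the data are hypothetical:

```python
def majority_vote(answers):
    """Return the most frequent answer in a list of 'real'/'synthetic' votes."""
    return max(set(answers), key=answers.count)

def surprisingly_popular(responses):
    """'Surprisingly popular' rule for a binary question.

    responses: list of (answer, predicted_agreement), where
    predicted_agreement is each respondent's estimate of the share of
    others giving the SAME answer as their own (0-1).
    Picks the answer that is more popular than the crowd predicted.
    """
    answers = [a for a, _ in responses]
    actual_real = answers.count("real") / len(answers)
    # A 'synthetic' respondent predicting agreement p implies a
    # predicted 'real' share of 1 - p.
    predicted_real = sum(p if a == "real" else 1 - p
                         for a, p in responses) / len(responses)
    return "real" if actual_real > predicted_real else "synthetic"

# 'real' is the minority answer (2 of 5), yet it is MORE popular than
# the crowd predicted (0.40 actual vs. 0.38 predicted), so the
# surprisingly popular rule picks 'real' while majority vote does not.
votes = [("real", 0.6), ("real", 0.7),
         ("synthetic", 0.8), ("synthetic", 0.9), ("synthetic", 0.7)]
```

The appeal of the rule is that a knowledgeable minority can be recovered even when most respondents are fooled, which is consistent with it overtaking the majority vote as crowd size grows.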