{"title":"Relative effects of implicit and explicit attitudes on behavior: A meta-analytic review and test of key moderators.","authors":"Daniel J. Phipps, Martin S. Hagger, Kyra Hamilton","doi":"10.1037/bul0000506","DOIUrl":"https://doi.org/10.1037/bul0000506","url":null,"abstract":"","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":"95 1","pages":""},"PeriodicalIF":22.4,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146101613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does controlling for baseline stressful life events clarify or cloud the stress generation effect? A response to Dang and Xiao (2025).","authors":"Katerina Rnic, Angela C. Santee, Hannah R. Snyder, Lisa R. Starr, David J. A. Dozois, Joelle LeMoult","doi":"10.1037/bul0000507","DOIUrl":"https://doi.org/10.1037/bul0000507","url":null,"abstract":"","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":"58 1","pages":""},"PeriodicalIF":22.4,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146101612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A meta-analytic review of cultural variation in affect valuation.","authors":"Jeanne L. Tsai, Daniel S. Chen, Angela M. Yang, Julie Y. A. Cachia, Elizabeth Blevins, Michael Ko, Maya B. Mathur, Oriana R. Aragón, Elisabeth A. Arens, Lucy Z. Bencharit, Stephen H. Chen, Ying-Chun Chen, Yulia Chentsova Dutton, Benjamin Y. Cheung, Louise Chim, Philip I. Chow, Magali Clobert, Arezou M. Costello, Igor de Almeida, Christopher P. Ditzfeld, Stacey N. Doan, Victoria A. Floerke, Brett Q. Ford, Helene H. Fung, Amy L. Gentzler, Eddie Harmon-Jones, Steven J. Heine, Derek M. Isaacowitz, Eiji Ito, Da Jiang, Emiko S. Kashima, Birgit Koopmann-Holm, Brian T. Kraus, Jocelyn Lai, Austyn T. Lee, Lilian Y. Li, Corinna E. Löckenhoff, Gloria Luong, Bradley C. Mannell, Yael Millgram, Shir Mizrahi Lakan, Benjamin Oosterhoff, Janelle Painter, BoKyung Park, Cara A. Palmer, Suzanne C. Parker, William Peruel, Matthew B. Ruby, Cristina E. Salvador, Gregory R. Samanez-Larkin, Molly Sands, Vassilis Saroglou, Marine I. Severin, Yoonji Shim, Benjamin A. Swerdlow, Maya Tamir, Renee J. Thompson, Yukiko Uchida, Chit Yuen Yi, Chen-Wei Yu, Xiaoyu Zhou","doi":"10.1037/bul0000499","DOIUrl":"https://doi.org/10.1037/bul0000499","url":null,"abstract":"","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":"8 1","pages":""},"PeriodicalIF":22.4,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146101614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supplemental Material for Internalized Racism and Personal Self-Esteem Among Ethnoracial Minoritized Groups: A Meta-Analytic Review","authors":"","doi":"10.1037/bul0000508.supp","DOIUrl":"https://doi.org/10.1037/bul0000508.supp","url":null,"abstract":"","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":"4 1","pages":""},"PeriodicalIF":22.4,"publicationDate":"2026-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146095701","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supplemental Material for A Meta-Analytic Review of Cultural Variation in Affect Valuation","authors":"","doi":"10.1037/bul0000499.supp","DOIUrl":"https://doi.org/10.1037/bul0000499.supp","url":null,"abstract":"","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":"8 1","pages":""},"PeriodicalIF":22.4,"publicationDate":"2026-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146095700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Content knowledge and comprehension: A meta-analytic review of correlational and causal associations.","authors":"Young-Suk Grace Kim, Yucheng Cao","doi":"10.1037/bul0000502","DOIUrl":"https://doi.org/10.1037/bul0000502","url":null,"abstract":"We examined (a) the relation between content knowledge and comprehension (both reading and listening comprehension) using correlational data and (b) the impact of content knowledge instruction on content knowledge and comprehension using causal data. Moderation by assessment, person, instruction, and study quality characteristics was systematically examined. For causal data, listening comprehension was excluded from moderation analysis due to insufficient studies. Correlational data from 108 studies, 441 correlation coefficients, and N = 68,301 participants showed that content knowledge was moderately related to comprehension with an identical magnitude for listening comprehension and reading comprehension (r = .41). The relation with reading comprehension was stronger when content knowledge was assessed using norm-referenced tasks (r = .50) than when it was assessed using researcher-developed tasks (r = .39). Causal data from 55 studies, 304 treatment effect sizes, and N = 18,540 participants showed that content knowledge instruction improved content knowledge (g = 1.36) and reading comprehension (g = 0.44), but not listening comprehension (g = 0.13). Effects on reading comprehension differed: researcher-developed tasks (g = 0.51) compared to norm-referenced comprehension assessments (g = 0.21); knowledge activation (g = 0.66) compared to knowledge building (g = 0.19); and studies with an N of 1 design (g = 0.67) compared to those without (g = 0.18). The findings highlight the importance of content knowledge in comprehension while highlighting the need to consider variation in the relation and impact by assessment, instruction, and study quality features. Future directions are discussed. 
(PsycInfo Database Record (c) 2025 APA, all rights reserved).","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":"35 1","pages":"1219-1244"},"PeriodicalIF":22.4,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145753237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention bias for facial expressions of emotion: A meta-analytic review.","authors":"Joshua W. Maxwell, Robert D. Torrence, Eric Ruthruff","doi":"10.1037/bul0000496","DOIUrl":"https://doi.org/10.1037/bul0000496","url":null,"abstract":"Facial expressions of emotion are critical to survival and social interaction. Their importance is underscored by evolutionary adaptations that enable their automatic production and recognition. As a result, emotional faces may receive attentional prioritization, even when completely irrelevant to the task at hand. Although attentional bias is theoretically plausible, empirical findings have been inconsistent: Some studies have reported bias toward emotional faces, whereas many others have not. To clarify this discrepancy, we conducted a meta-analysis of attentional bias for task-irrelevant emotional expressions, including studies using the additional singleton and spatial cuing paradigms (the latter of which includes dot probe paradigms). We found an overall effect between zero and small (Hedges's g = 0.08), based on 160 cases. The only significant moderator was the data set from which the emotional face stimuli were drawn, with the Gur data set (Gur et al., 2002) producing the strongest bias. In a second meta-analysis, we examined studies where the emotional expression was task relevant because both the expression and the target were singletons. Here, the overall attentional bias was small to medium (g = 0.41), based on 25 cases. We conclude that facial expressions of emotion do not bias attention when they are task irrelevant. In the discussion, we highlight some empirical and theoretical challenges to emotion automaticity and offer explanations for why the effect was between zero and small. One potential explanation is that studies often utilize static photographs of actors portraying facial expressions of emotion, which are low in salience and ecological validity because they lack context. 
(PsycInfo Database Record (c) 2025 APA, all rights reserved).","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":"32 1","pages":"1197-1218"},"PeriodicalIF":22.4,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145753236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data extraction by generative artificial intelligence: Assessing determinants of accuracy using human-extracted data from systematic review databases.","authors":"Thorben Jansen, Lucas W. Liebenow, Ute Mertens, Fabian T. C. Schmidt, Julian F. Lohmann, Johanna Fleckenstein, Jennifer Meyer","doi":"10.1037/bul0000501","DOIUrl":"https://doi.org/10.1037/bul0000501","url":null,"abstract":"Psychological science requires reliable measures. Within systematic literature reviews, reliability hinges on high interrater agreement during data extraction. Yet, the extraction process has been time-consuming. Efforts to accelerate the process using technology have shown limited success until generative artificial intelligence (genAI), particularly large language models (LLMs), accurately extracted variables from medical studies. Nonetheless, for psychological researchers, it remains unclear how to utilize genAI for data extraction, given the range of tested variables, the medical context, and the variability in accuracy. We systematically assessed extraction accuracy and error patterns across domains in psychology by comparing genAI-extracted and human-extracted data from 22 systematic review databases published in the Psychological Bulletin. Eight LLMs extracted 312,329 data points from 2,179 studies on 186 variables. LLM extractions achieved unacceptable accuracy on all metrics for 20% of variables. For 46% of variables, accuracy was acceptable for some metrics and unacceptable for others. LLMs reached acceptable but not high accuracy on all metrics in 15%, high but not excellent in 8%, and excellent accuracy in 12% of variables. Accuracy varied most between variables, less between systematic reviews, and least between LLMs. 
Moderator analyses using a hierarchical logistic regression, hierarchical linear model, and meta-analysis revealed that accuracy was higher for variables describing studies' context and for moderator variables than for variables used in effect size calculation. Also, accuracy was higher in systematic reviews with more detailed variable descriptions and positively correlated with model size. We discuss directions for investigating ways to use genAI to accelerate data extraction while ensuring meaningful human control. (PsycInfo Database Record (c) 2025 APA, all rights reserved).","PeriodicalId":20854,"journal":{"name":"Psychological bulletin","volume":"9 1","pages":"1280-1306"},"PeriodicalIF":22.4,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145753239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}