Pub Date: 2025-11-17 | DOI: 10.1016/j.copsyc.2025.102213
Philipp Schmid
Debunking is an effective means to mitigate the impact of health misinformation. However, even after receiving a corrective message, misinformation often persists in influencing individuals' judgements and decision-making. I review evidence on effective components of debunking in health contexts and propose three mechanisms for how expressions of empathy might help reduce the continued influence of health misinformation. Empathetic debunkings might decrease feelings of discomfort and increase believability of debunkings by (1) decreasing perceived threat to underlying attitude roots, (2) decreasing perceived threat to face or (3) increasing perceptions of trustworthiness. Moreover, I review pitfalls of using empathetic communication that should be considered by practitioners and further investigated in research addressing empathy to tackle misinformation.
Title: Debunking health misinformation with empathy (Current Opinion in Psychology, Vol. 67, Article 102213)
Pub Date: 2025-11-15 | DOI: 10.1016/j.copsyc.2025.102211
Zachary Grossman, Tony Hua
Although humans exhibit many prosocial behaviors, when the social benefits of their options are uncertain, surprisingly many avoid learning them before choosing, using ignorance as an excuse to dodge moral obligations and revert to selfish behavior. This kind of willful ignorance is robust in the sense that researchers have documented it using a wide array of methods, across diverse settings, and a time period spanning nearly two decades. At the same time, however, the degree to which it manifests is inconsistent across and within studies. Some of these inconsistencies stem from obvious factors, while the moderators driving others have yet to be identified or are poorly understood. This study synthesizes and organizes these contextual factors, providing recommendations for future research.
Title: Willful ignorance in social decisions: Robust, yet contextually sensitive (Current Opinion in Psychology, Vol. 67, Article 102211)
Pub Date: 2025-11-14 | DOI: 10.1016/j.copsyc.2025.102215
Jan Pfänder , Niels G. Mede , Viktoria Cologna
Public trust in science is vital for tackling global challenges. Recently, global surveys and Many Labs collaborations have begun to broaden the scope of research. However, these studies have also highlighted theoretical and methodological challenges. Here, we review these challenges and argue that beyond expanding geographical coverage, greater conceptual clarity and harmonized measures are essential to improve comparability across studies on trust in science. We conclude by encouraging reflection on the normative assumptions that currently guide research on trust in science.
Title: Global studies on trust in science suggest new theoretical and methodological directions (Current Opinion in Psychology, Vol. 67, Article 102215)
Pub Date: 2025-11-14 | DOI: 10.1016/j.copsyc.2025.102212
Laura Joyner
Mistrust in science can arise from the belief that science or scientists act in ways that undermine our wellbeing or go against our best interests (Jaiswal & Halkitis, 2019). Such actions may also constitute a perceived moral violation. Considering how science and scientists are perceived to uphold or undermine moral norms and values may therefore provide helpful insights for understanding relationships of trust. In this review of the trust literature, I explore some of the ways that individuals or communities may perceive different categories of moral values (i.e., Harm, Purity/Sanctity, Authority, Loyalty, and Fairness) as being upheld or undermined by science or scientists. First, examples of harm are discussed (e.g., physical and spiritual harms), followed by research on trust in science and individual differences (i.e., disgust sensitivity, religiosity, and worldviews and ideologies). Research on social identity and fairness is also examined. Identifying where and why perceived moral violations may arise could help further our understanding of relationships of mistrust in science and support the development of tailored interventions to build and sustain trust. It also provides an opportunity for scientists and researchers to reflect on the moral values that they, and any communities they seek to work with, hold, to ensure that procedures and practices do not inadvertently undermine the trust relationship.
Title: Moral values & trust in science (Current Opinion in Psychology, Vol. 67, Article 102212)
Pub Date: 2025-11-13 | DOI: 10.1016/j.copsyc.2025.102214
Gale M. Sinatra
Democracies depend on citizens to make informed decisions about their health, wellbeing, and environmental sustainability. Science is complex, and thus science-informed decisions and policy require trust in qualified experts. Mistrust of experts can contribute to science doubt, resistance, and denial. This article reviews the psychological issues behind these challenges as well as the role of epistemic trust in science understanding and acceptance. It also offers suggestions for building public trust in science.
Title: The erosion of trust is contributing to science denial (Current Opinion in Psychology, Vol. 67, Article 102214)
Pub Date: 2025-11-13 | DOI: 10.1016/j.copsyc.2025.102216
Matthew J. Hornsey, Aimee E. Smith, Samuel Pearson, Christian Bretter, Jarren L. Nylund
Mistrust of the scientific consensus around issues such as climate change and vaccination is mainstream, compromising our ability to respond to existential global threats. In the wrong hands, generative AI can spread misinformation with unprecedented scale and psychological sophistication. However, large language models (LLMs) have also shown considerable promise for reducing misinformation and conspiracy theories, potentially revolutionizing science communication. This review summarizes the rapidly evolving frontier of empirical research on how conversational AI such as ChatGPT can be used to defuse mistrust of science around hot-button scientific issues. These studies find negligible evidence that LLMs respond to human queries by reproducing conspiracy theories or misinformation about scientific topics. Rather, conversations with LLMs typically reduce participants' levels of science skepticism and misinformation endorsement. We conclude that LLMs (in their current form) have the potential to complement existing science communication strategies, provided their use is accompanied by safeguards that preserve informational integrity and public trust.
Title: Using conversational AI to reduce science skepticism (Current Opinion in Psychology, Vol. 67, Article 102216)
Pub Date: 2025-11-10 | DOI: 10.1016/j.copsyc.2025.102208
Yaniv Shani , Marcel Zeelenberg
Willful ignorance is often framed as a strategy for avoiding moral responsibility in social decision making. We propose a broader view: individuals also avoid or seek information in purely individual contexts as a way to regulate emotions. People may delay confronting useful, yet painful, truths or, paradoxically, pursue distressing but useless information to relieve uncertainty. This duality reflects a strategic balance between the emotional costs of knowing and the psychological discomfort of not knowing. We review recent research illustrating how information avoidance and search serve both self-protection and moral regulation. Ultimately, willful ignorance is reframed as a dynamic emotion-regulation strategy that helps individuals navigate the tension between uncertainty, truth, and emotional endurance in both social and personal domains.
Title: The pain of suspecting and the comforts of knowing the worst (Current Opinion in Psychology, Vol. 67, Article 102208)
Pub Date: 2025-11-10 | DOI: 10.1016/j.copsyc.2025.102209
Linda Thunström, Klaas van ’t Veld, Jason F. Shogren
Informational policies such as labeling requirements and public awareness campaigns generally attract higher public support and face less political resistance than more interventionist policy measures such as taxes or bans. Yet, their behavioral impact is small, partly due to willful ignorance. This review discusses evidence of scalable, low-cost interventions that may reduce willful ignorance and increase information uptake. We group these interventions into two categories: (1) making information harder to ignore, through greater salience, strategic placement, and personalization; and (2) increasing perceived net benefits of becoming informed, by simplifying information, boosting self-efficacy, encouraging contemplation, framing outcomes as gains, bundling with valued content, or offering incentives. Evidence suggests these interventions can be effective at enhancing information uptake, but their impact often varies by context and population. We highlight the potential of using machine learning and AI to optimize the interventions’ effectiveness, through both audience targeting and content tailoring.
Title: Interventions that reduce willful ignorance of policy-relevant information (Current Opinion in Psychology, Vol. 67, Article 102209)
Pub Date: 2025-11-03 | DOI: 10.1016/j.copsyc.2025.102202
Toni G.L.A. van der Meer, Michael Hameleers
Science today operates in an environment increasingly described as a crisis of trust, where confidence in institutions has eroded and consensus over truth is fragmented. While still among the most trusted actors, science faces pressing trust-related challenges: populist rhetoric can frame scientists as part of a detached elite, polarized debates fuel delegitimizing narratives, scientific knowledge is increasingly presented as just another opinion, thereby competing with direct experiences and gut feelings, and news media dynamics can intensify a spiral of negativity in which scandals and threat-oriented framings overshadow science's constructive role. These dynamics undermine science's epistemic authority and risk fueling disengagement from knowledge altogether. We caution against the rise of epistemic indifference, in which individuals lose the motivation to seek, evaluate, or trust knowledge, and highlight the need to safeguard the legitimacy of science in an era of pervasive skepticism.
Title: Science and the crisis of trust (Current Opinion in Psychology, Vol. 67, Article 102202)
Pub Date: 2025-10-31 | DOI: 10.1016/j.copsyc.2025.102206
Marlene Sophie Altenmüller
Revealing “the person behind the science” (i.e., personal self-disclosure) is common advice for science communicators seeking to bridge a stereotypical distance, foster trust, and communicate effectively. A review of the literature, however, paints a disenchanting picture: self-disclosure in science communication is a trade-off. While it has the potential to increase warmth-related perceptions (e.g., closeness, benevolence, liking), it also comes at the cost of decreasing competence-related perceptions (e.g., expertise). Overall, these ambivalent effects result in little downstream impact (e.g., on behavioral intentions, funding, and policy support) and might even carry risks. Altogether, the empirical findings question the value of this popular practical recommendation and highlight the need for theory-driven, evidence-based research in science communication.
Title: Personal disclosure in science communication (Current Opinion in Psychology, Vol. 67, Article 102206)