Perceived medical disinformation and public trust: Commentary on Grimes and Greenhalgh (2024)

Journal of Evaluation in Clinical Practice · Pub Date: 2024-10-18 · DOI: 10.1111/jep.14202
Brian Baigrie PhD, Mathew Mercuri PhD


When people are misled about important topics, whether through an honest mistake, negligence, unconscious bias, or intent (as in the case of disinformation), they can suffer serious harm.1 In contrast to an honest mistake, disinformation is disseminated by those actively attempting to mislead. We have seen disinformation deployed in the context of vaccines, resulting in vaccine refusal, which is a significant threat to public health.2, 3 A recent article in this journal by Grimes and Greenhalgh4 draws welcome attention to the dissemination of antivaccine disinformation on social media by members of a profession that holds a privileged position of trust with the public, a combination that, they argue, amplifies the harm.

Though a large body of research examines the influence that perceived experts who recommend vaccination have on public uptake, the influence of such experts acting in the opposite role, by disseminating disinformation, has received scant attention (Harris 2024: 2).2 Given evidence that vaccine influencers (whether pro- or antivaccine) play a disproportionate role in the public's vaccine uptake, Grimes and Greenhalgh4 reasonably contend that the combination of a respected source of information and social media amplifies the direct harm caused by disinformation when people are misled into decisions about the safety of vaccines (i.e., reducing vaccine uptake, elevating individual risk of morbidity and mortality, and increasing the chance of disease outbreaks).5-7 There is also indirect harm: the erosion of public trust8 and the curbing of critical discussion by physicians and experts who may fear the fallout of weighing in on online discussions where denialism has been orchestrated and, arguably, has emboldened people to push back violently against public health.

When we consider this issue in light of a 2023 JAMA study,9 which identified 52 prominent physicians who disseminated misinformation to large audiences on social media, Grimes and Greenhalgh raise an important question: How should medical regulators respond to disinformation spread on social media by physicians? The authors argue that regulatory bodies have an ethical duty to act against doctors who intentionally mislead the public through, for example, the misinterpretation of data, the fabrication of statistics, or simply making things up; that is, one way to counter the spread of online disinformation is to appeal to governments and relevant regulatory bodies to disincentivize doctors who act as content creators from doing so. There are precedents for such regulation. As Edwards10 notes, legislation is already in place in some jurisdictions "that regulates content, in whatever medium it appears … laws on data protection, copyright, pornography, spam, race hate, malicious communication, misleading advertising, libel, fraud, equity legislation and image rights." Misleading advertising violates the public's trust and undermines the integrity of businesses, and it is not clear that the promotion of unproven COVID-19 treatments, such as antimalarial drugs, should be treated any differently.

Governments in Canada, the UK, and Europe have attempted to address online harm by introducing legislation that imposes on social media platforms a duty to act responsibly, including the adoption of policies to identify and mitigate risk, the filing of a safety plan with a government regulator, and new transparency requirements obliging platforms to publish details on how they address harmful content and to share data with the government, accredited independent researchers, and civil society. Only Europe's Digital Services Act addresses a broad set of social harms, including disinformation and content that undermines democratic elections.

Strategies are clearly needed to deal with disinformation in the public health sector. A great deal of work remains to understand the scale of the problem posed by antivaccine influencers, who in many ways conduct themselves like biomedical experts: their online postings typically advance scientific arguments and share scientific links, and they signal their expertise in their social media profiles and postings. They engage the scientific literature but appear to reject the scientific consensus, even when that consensus (as was often the case during the COVID-19 pandemic) is a moving target.2 Often a substitution of expertise is involved, with content creators trading on their physician or scientist credentials to weigh in on matters where they lack relevant expertise.

Clinicians and scientists should be free to debate issues of clinical care and effectiveness. Indeed, most things we study, including the technologies and medicines used in clinical care, leave room for interpretation or can be better understood with additional investigation. The questions are where reasonable disagreement over interpretation of the data ends and what the appropriate forum for such debate is. It is not debate that is problematic: neither clinicians nor scientists should be persecuted for holding a different view when the data allow for it and the debate is motivated by the spirit of scientific inquiry. What is concerning is the intention to use one's position and status with the public to advance an agenda that is not in the public's best interest. It is in that context that advocating a view falls not within the spirit of debate but within the realm of disinformation. The implication is that if we seek to implement a policy regulating the dissemination of disinformation, it is incumbent on us to understand what disinformation is, including the sorts of things that affect the amount of disinformation and how they affect it.1 One concern about the authors' framing of the problem is that while disinformation is always misleading, it is not always intended to mislead. Most forms of disinformation, such as lies and propaganda, are misleading because the source intends the information to be misleading. Conspiracy theories and fake alarm calls are misleading because the source systematically benefits (e.g., politically, economically) from their being misleading.1 Denialism, as defined by Hoofnagle and Hoofnagle,11 which may be the agenda behind the spread of disinformation, is motivated by a desire to undermine a policy rather than to advance understanding of an issue.

For a regulatory policy to work, one needs a mechanism to differentiate the various types of inaccurate information, notably intentionally misleading information from accidental falsehoods (mistakes). A candidate mechanism fit for the purposes of public health will need to be sensitive to the dynamic nature of science, as evidenced by the evolving understanding of SARS-CoV-2's lethality and infectiousness and the constant updating of information, which could be due to epistemic uncertainty (e.g., the limits of our data and analysis tools), to evolution of the virus, or to both. An effective mechanism will also need to provide a way to assess whether misleading scientific information (whether outdated or incomplete) counts as misinformation or as disinformation. The authors pass over this important question quickly but do cite as a consideration "whether statements appear to have been made in good faith" (2024: 4). Many public health announcements on protective measures such as social distancing, face coverings, and lockdowns, as well as research studies published during the pandemic, could be classified as misinformation but not as disinformation, unless one takes the view that there was an intent to mislead the public. One could take the view that public health messaging concerning face coverings during the pandemic misled the public, but this is not to say that misleading was the intention behind the mixed messaging, nor is it a justification for the WHO's dissemination of inaccurate information to the effect that face masks confer no benefit over and above social distancing. There may be times when misleading the public is a prudential strategy toward a further end (such as ensuring that front-line health care professionals are prioritised in the allocation of scarce medical resources). Whether this lack of transparency is acceptable to the public is another matter altogether, as there is always a risk it may backfire and erode the public's trust in public health, which one could argue happened during the COVID-19 pandemic.

Given the authors' own characterisation of disinformation as misinformation that is spread deliberately (intentionally), how are regulators to determine whether a vaccine influencer intended to mislead the public through their social media postings? How are regulators to determine whether the postings were made in good faith? One line of response is that misinformation counts as disinformation if it is not corrected in a timely manner, in step with an evolving understanding of the most effective mitigation strategies during a public health emergency. The challenges we highlight should not be seen as a reason to accept disinformation. Rather, we need to think deeply about how best to approach the problem so as not to make matters worse for the public and for the advancement of knowledge.

The strategy advanced by Grimes and Greenhalgh4 is to appeal to governments and relevant regulatory bodies to disincentivize purveyors of online disinformation. Online information is largely hosted by global platforms (Google, Facebook), and audience attention is determined by their competing algorithms. The authors pass over a second strategy for regulating disinformation, which involves regulating the platforms that host content that, for example, fuels vaccine hesitancy: encouraging social media companies to modify their platforms so that fewer people are exposed to disinformation. These platforms have the ability to reduce disinformation and to close the accounts of those who disseminate it. President Trump was "deplatformed" by both Facebook and Twitter after he declared the US election "stolen" and appeared to incite against democracy. This action was applauded by many, but others were concerned about whether private platforms, rather than elected governments or courts, should have such immense power to control speech. In the past, both the UK and the EU have encouraged large internet platform companies to self-regulate by signing them up to a Code of Practice on Disinformation, including closing down fake accounts and demonetising the spread of disinformation.12 Another promising strategy is to impose a "duty of care" in relation to harmful content on social media, which could establish mechanisms to verify academic and professional credentials and to identify signals within profiles that mark authorities on health-related topics (Harris 2024, 8).2

The authors' proposal of a policy to disincentivize content creators, rather than to police social media platforms, in turn raises important questions about the scope of regulatory oversight. For example, Lewis12 raises the interesting question of the WHO's 2-year delay in recognising that SARS-CoV-2 is airborne, and not transmitted in line with "decades-old infection-control teachings about how respiratory viruses generally pass from one person to another".12 The WHO tweeted on March 28, 2020, that "#fact: Covid is NOT airborne," capitalising the word NOT to highlight its confidence in the truth of this position. It was not until December 2021 that the WHO first used the word airborne on a WHO webpage, stating that "transmission can occur through long-range airborne transmission." Lewis points out that the WHO's messaging finally echoed what "a chorus of aerosol and public-health experts had been trying to get it to say since the earliest days of the outbreak".12

The mainstream media frequently focus on the role of the anti-vaccination movement in reducing vaccine acceptance, and Grimes and Greenhalgh rightly point out that some medical professionals also bear a share of responsibility for fuelling public hesitancy about vaccines and the ongoing erosion of public trust. A commonly held belief is that anti-vaccination messages lead to reduced acceptance, which leads to reduced coverage, which in turn causes outbreaks. Even so, it is widely recognised that barriers to high vaccination coverage extend beyond negative messaging about vaccination. The authors note approvingly the WHO's listing of vaccine hesitancy as one of the top ten threats to global health in 2019. Notably, the WHO also included fragile health systems and weak primary care in that list. These factors also influence vaccine uptake and need to be factored into judgements about the extent of the harm caused by disinformation.

The authors chastise clinicians and scientists who advanced opinions on social media based on evidence that is "selective, questionable or already refuted." Apart from the legalistic suggestion of "a totality of evidence," however, they offer little guidance on what counts as good evidence during a public health emergency, when there is an urgent need for novel science on the fly.13, 14 When the WHO, for example, discounted field epidemiology reports and laboratory-based aerosol studies (among other sources of evidence for airborne transmission) on the grounds that they were not definitive, did it run afoul of the position that what counts is a totality (preponderance) of evidence? Was the WHO not selecting data to support its deeply entrenched view about the transmission of respiratory viruses, as Greenhalgh herself has been quoted as conceding?12

A final concern is that the policy sketched by Grimes and Greenhalgh is deeply wedded to a top-down, paternalistic approach that overlooks the diverse beliefs, values, and trusted sources of information within communities and population subgroups. A third promising strategy for countering the harm caused by disinformation is to help internet users modify their online habits so as to minimise the chance that they will form false beliefs on the basis of misleading claims. Inoculation theory has been put forward15 as a way to reduce susceptibility to misinformation by informing people about how they might be misinformed, but operationalizing this approach has proved elusive at both a theoretical and a practical level. Grimes and Greenhalgh4 note that certain demographic groups are more susceptible to disinformation than others, and doubtless fragile health systems and weak primary care are drivers of this susceptibility. Helping to mitigate the impact of fragile systems of care on susceptibility to disinformation may take the form of building partnerships with communities that yield insights into measuring, monitoring, and characterising misinformation, empowering the public to look for signs of deception in the information itself.

Grimes and Greenhalgh raise issues that have important implications for the health of society and for our collective ability to implement countermeasures to public health threats. The volume of disinformation and its purported role in spurring hesitancy about, or undermining of, public health measures such as vaccines may imply a lack of systems of accountability. Traditionally, clinicians and scientists have been accountable to their profession or to their communities. Academic journals and the community of scientists are generally considered the check and balance against misinformation. These activities took place out of the public eye, and so the impact of "getting it wrong", whether intentionally or by mistake, was often minimal. However, as these discussions have become more public, the traditional mechanisms of accountability are often insufficient. We welcome Grimes and Greenhalgh's call to reconsider how we ensure accountability. How we do so, and who is given the power to arbitrate reasonable debate and assess motivation, is, however, not a trivial matter. Indeed, admitting we have a problem is the first step.

The authors declare no conflicts of interest.

This is a commentary, and ethics approval is not required at our institutions.
