Identifying and counteracting fraudulent responses in online recruitment for health research: a scoping review
Authors: Josielli Comachio, Adam Poulsen, Adeola Bamgboje-Ayodele, Aidan Tan, Julie Ayre, Rebecca Raeside, Rajshri Roy, Edel O'Hagan
Journal: BMJ Evidence-Based Medicine (Q1, Medicine, General & Internal)
DOI: 10.1136/bmjebm-2024-113170
Published: 22 December 2024
Citations: 0
Abstract
Objectives: This study aimed to describe how health researchers identify and counteract fraudulent responses when recruiting participants online.
Design: Scoping review.
Eligibility criteria: Peer-reviewed studies published in English; studies that report on the online recruitment of participants for health research; and studies that specifically describe methodologies or strategies to detect and address fraudulent responses during the online recruitment of research participants.
Sources of evidence: Nine databases, including Medline, Informit, AMED, CINAHL, Embase, Cochrane CENTRAL, IEEE Xplore, Scopus and Web of Science, were searched from inception to April 2024.
Charting methods: Two authors independently screened and selected each study and performed data extraction, following the Joanna Briggs Institute's methodological guidance for scoping reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guidelines. A predefined framework guided the evaluation of fraud identification and mitigation strategies within the studies included. This framework, adapted from a participatory mapping study that identified indicators of fraudulent survey responses, allowed for systematic assessment and comparison of the effectiveness of various antifraud strategies across studies.
Results: 23 studies were included. 18 studies (78%) reported encountering fraudulent responses. Among the studies reviewed, the proportion of participants excluded for fraudulent or suspicious responses ranged from as low as 3% to as high as 94%. Survey completion time was used in six studies (26%) to identify fraud, with completion times under 5 min flagged as suspicious. 12 studies (52%) focused on non-confirming responses, identifying implausible text patterns through specific questions, consistency checks and open-ended questions. Four studies examined temporal events, such as unusual survey completion times. Seven studies (30%) reported on geographical incongruity, using IP address verification and location screening. Incentives were reported in 17 studies (73%), with higher incentives often increasing fraudulent responses. Mitigation strategies included using in-built survey features such as CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart; 34%), manual verification (21%) and video checks (8%). Most studies recommended multiple detection methods to maintain data integrity.
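As a minimal sketch of how the detection signals described above (fast completion times, duplicate IP addresses and geographical incongruity) might be combined into a single screening pass, consider the following. All field names and the 5 min threshold are illustrative assumptions drawn from the abstract, not the implementation of any included study.

```python
# Hypothetical multi-signal fraud screen for online survey responses.
# Signals mirror those reported in the review: unusually fast completion,
# a duplicate IP address, and a mismatch between reported location and
# the country inferred from the IP address. Thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Response:
    respondent_id: str
    completion_minutes: float
    ip_address: str
    reported_country: str
    ip_country: str  # country inferred from the IP address

def fraud_flags(responses):
    """Return a dict mapping respondent_id to the list of raised flags."""
    # Count how many responses share each IP address.
    ip_counts = {}
    for r in responses:
        ip_counts[r.ip_address] = ip_counts.get(r.ip_address, 0) + 1

    flags = {}
    for r in responses:
        raised = []
        if r.completion_minutes < 5:            # completion under 5 min
            raised.append("fast_completion")
        if ip_counts[r.ip_address] > 1:         # same IP used more than once
            raised.append("duplicate_ip")
        if r.reported_country != r.ip_country:  # geographical incongruity
            raised.append("geo_mismatch")
        flags[r.respondent_id] = raised
    return flags
```

In line with the review's recommendation to combine multiple detection methods, a researcher would typically treat each flag as grounds for manual verification rather than automatic exclusion.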
Conclusion: There is insufficient evaluation of strategies to mitigate fraud in online health research, which hinders the ability to offer evidence-based guidance to researchers on their effectiveness. Researchers should employ a combination of strategies to counteract fraudulent responses when recruiting online to optimise data integrity.
Journal introduction:
BMJ Evidence-Based Medicine (BMJ EBM) publishes original evidence-based research, insights and opinions on what matters for health care. We focus on the tools, methods, and concepts that are basic and central to practising evidence-based medicine and deliver relevant, trustworthy and impactful evidence.
BMJ EBM is a Plan S compliant Transformative Journal and adheres to the highest possible industry standards for editorial policies and publication ethics.