Title: How do you solve a problem like COREQ? A critique of Tong et al.’s (2007) Consolidated Criteria for Reporting Qualitative Research
Authors: Virginia Braun, Victoria Clarke
Journal: Methods in Psychology (Online), Volume 11, Article 100155
Published: 2024-07-25 (Journal Article)
DOI: 10.1016/j.metip.2024.100155
URL: https://www.sciencedirect.com/science/article/pii/S2590260124000213
Citations: 0
Abstract
In this paper, we argue that COREQ – the consolidated criteria for reporting qualitative research (Tong et al., 2007) – is a problem, and a problem in need of a solution. COREQ is not just a problem because – as Buus and Perron (2020) argued – there are important questions about the credibility of the development of the checklist. COREQ is a problem because some in the (qualitative) research community treat it as generic and universally applicable, and maintain that the use of COREQ by authors and evaluators will result in better – more transparent and complete – reporting. But, as we will show, COREQ is far from generic, and its use can contribute to methodologically incongruent reporting. We develop our argument that the use of COREQ should be confined to the reporting and evaluation of what we term ‘small q’ qualitative research, by critically discussing the definition of qualitative research in COREQ, the conflation of reflexivity and bias, and the presumed universality of saturation, certain analytic practices and outputs, and participant validation. However, even demarcating a limited frame of ‘qualitative’ for the application of COREQ doesn't eliminate all the problems. We contend that COREQ needs extensive refinement to ensure it promotes more transparent and complete reporting, especially when used by less experienced researchers and evaluators. In the absence of such revision, we invite journal editors to consider whether the flaws in COREQ render it untrustworthy as a reporting quality tool. Going forward, we suggest research values, rather than consolidation or consensus, offer a sounder foundation for developing assessment tools for reporting quality.