Comment on—The effectiveness of positive psychological interventions for patients with cancer: A systematic review and meta-analysis
Xu Linyu MSN, RN; Xutong Zheng, PhD Candidate, RN; Hao Huang MSN, RN; Aiping Wang MSN
Journal of Clinical Nursing, 34(4): 1103–1105. Published 2024-08-23. DOI: 10.1111/jocn.17410
Abstract
We have carefully read the recent paper ‘The effectiveness of positive psychological interventions for patients with cancer: A systematic review and meta-analysis’ by Tian et al. (Tian et al., 2024) in the Journal of Clinical Nursing and have some concerns regarding the ‘3.7 Data Synthesis’ section.
The authors stated: 'Heterogeneity was determined using the I² value statistic test. If I² ≤ 50%, heterogeneity between studies was considered low and a fixed-effects model was used; and if I² > 50%, heterogeneity between studies was high and a random-effects model was used.'
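For readers less familiar with the statistic quoted above, I² is derived from Cochran's Q. The following minimal Python sketch, using hypothetical effect sizes and variances rather than data from the review, illustrates the calculation:

```python
import numpy as np

def i_squared(effects, variances):
    """Cochran's Q and the I^2 statistic for k study-level effect sizes."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    theta_fixed = np.sum(w * y) / np.sum(w)        # fixed-effect pooled estimate
    q = np.sum(w * (y - theta_fixed) ** 2)         # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical standardized mean differences and their variances
q, i2 = i_squared([0.30, 0.55, 0.20, 0.70], [0.02, 0.03, 0.025, 0.04])
```

Note that I² measures the proportion of observed variability attributable to between-study heterogeneity rather than sampling error; it is not, by itself, a basis for choosing the model.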
In reality, given the variability inherent in complex interventions, only a random-effects model should be used for such data (Borenstein et al., 2010; Dettori et al., 2022). The strategy of starting with a fixed-effect model and then moving to a random-effects model if the test for heterogeneity is significant relies on flawed logic and should be strongly discouraged. The model should be selected solely on the basis of which model fits the distribution of effect sizes and thus accounts for the relevant source(s) of error. When studies are gathered from the published literature, the random-effects model is generally the more plausible match. The fixed-effect model assumes that all studies in the meta-analysis share a common (true) effect size; under it, the summary effect estimates the effect common to all studies in the analysis. Under the random-effects model, the summary effect estimates the mean of a distribution of true effects.
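The distinction can be made concrete. Under the fixed-effect model each study is weighted by the inverse of its within-study variance, while the DerSimonian–Laird random-effects estimator adds an estimated between-study variance τ² to every weight, which widens the confidence interval. A minimal sketch with hypothetical data (not the review's dataset):

```python
import numpy as np

def pooled_estimates(effects, variances):
    """Fixed-effect and DerSimonian-Laird random-effects pooled estimates,
    each returned with its standard error."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                    # fixed-effect weights
    theta_f = np.sum(w * y) / np.sum(w)
    se_f = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (y - theta_f) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_r = 1.0 / (v + tau2)                         # random-effects weights
    theta_r = np.sum(w_r * y) / np.sum(w_r)
    se_r = np.sqrt(1.0 / np.sum(w_r))
    return (theta_f, se_f), (theta_r, se_r)

fixed, random_eff = pooled_estimates([0.30, 0.55, 0.20, 0.70],
                                     [0.02, 0.03, 0.025, 0.04])
```

Because τ² ≥ 0, the random-effects standard error is never smaller than the fixed-effect one; this is precisely why the choice of model affects the precision attributed to the summary effect.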
Additionally, the Cochrane Handbook for Systematic Reviews of Interventions Version 6.4, 2023 (Higgins et al., 2023) indicates that the decision between fixed- and random-effects meta-analyses has been the subject of much debate, and no universal recommendation is provided. The handbook outlines six key considerations for making this choice, with the third point stating:
‘Under any interpretation, a fixed-effect meta-analysis ignores heterogeneity. If the method is used, it is therefore important to supplement it with a statistical investigation of the extent of heterogeneity.’
In section 4.5.2 of the 'Hope' chapter (Figures), the author employed a fixed-effect model. Using Review Manager 5.4.1 and the dataset provided by the author, we repeated the analysis with a random-effects model. The results show a slight difference in the test for overall effect, as depicted in Figures 1 and 2, which could lead to an imprecise estimate of the effect size and in turn weaken our confidence in the evidence.
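The 'test for overall effect' reported beneath forest plots is a two-sided z-test on the pooled estimate. Since the random-effects standard error is typically larger, the z statistic shrinks and the p-value grows when switching models, which explains why the two analyses disagree slightly. A brief sketch with hypothetical values:

```python
import math

def overall_effect_test(theta, se):
    """Two-sided z-test for a pooled effect, as shown under forest plots."""
    z = theta / se
    p = math.erfc(abs(z) / math.sqrt(2.0))  # equals 2 * (1 - Phi(|z|))
    return z, p

# Hypothetical pooled effect and standard error, not values from the review
z, p = overall_effect_test(0.40, 0.12)
```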
Therefore, we suggest that using a random-effects model is a more reasonable approach. We welcome the author to engage in potential follow-up discussions with us.
Journal Introduction:
The Journal of Clinical Nursing (JCN) is an international, peer reviewed, scientific journal that seeks to promote the development and exchange of knowledge that is directly relevant to all spheres of nursing practice. The primary aim is to promote a high standard of clinically related scholarship which advances and supports the practice and discipline of nursing. The Journal also aims to promote the international exchange of ideas and experience that draws from the different cultures in which practice takes place. Further, JCN seeks to enrich insight into clinical need and the implications for nursing intervention and models of service delivery. Emphasis is placed on promoting critical debate on the art and science of nursing practice.
JCN is essential reading for anyone involved in nursing practice, whether clinicians, researchers, educators, managers, policy makers, or students. The development of clinical practice and the changing patterns of inter-professional working are also central to JCN's scope of interest. Contributions are welcomed from other health professionals on issues that have a direct impact on nursing practice.
We publish high quality papers from across the methodological spectrum that make an important and novel contribution to the field of clinical nursing (regardless of where care is provided), and which demonstrate clinical application and international relevance.