{"title":"Antidepressants vs . placebo: not merely a quantitative difference in response","authors":"K. Fountoulakis, H. Möller","doi":"10.1017/S1461145711000964","DOIUrl":null,"url":null,"abstract":"Horder et al. (in press) criticize the results of Kirsch et al. (2008) solely on the basis of meta-analytical methodology. In the letter by Matthews (2011) commenting on our paper (Fountoulakis & Moller, 2010) this emphasis on methodological issues is again central. Matthews comments on the method used (pooling drug and placebo arms separately) and on the use of effect size instead of raw HAMD scores.\n\nWe argue that the problem does not lie in methodological issues. We performed the re-analysis by using simple averaging, weighting by sample size, weighted by the inverse variance and also with precision weighted analysis. The differences in the results of these different approaches were not significant at all although some similarity was shown between the results of precision weighted analysis and those reported by Kirsch et al. Horder et al. are right when suggesting that the 1.80 difference can be derived with the precision weighted analysis method but the effect size with this method is not 0.32 as Kirsch et al. report, but 0.28. So no matter which method is used, it seems that Kirsch et al. had flawed calculations. To say that a method is ‘unusual’ or ‘idiosyncratic’ does not necessarily imply it is inappropriate or false. It is assumed that all trial results belong to a single distribution; that is why a meta-analysis is possible even with a random-effects model. Practically the way one groups and pools arms and trials plays little if any role. That is why ultimately, the correction of the effect size is minimal and all methods give an effect size of 0.28–0.35 and a difference in raw HAMD change from baseline between placebo and active drug of 1.78–2.93 (Table 1). 
Even the results after simple averaging do not deviate, probably because the number of RCTs is high enough …","PeriodicalId":394244,"journal":{"name":"The International Journal of Neuropsychopharmacology","volume":"164 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The International Journal of Neuropsychopharmacology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/S1461145711000964","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 6
Abstract
Horder et al. (in press) criticize the results of Kirsch et al. (2008) solely on the basis of meta-analytical methodology. In the letter by Matthews (2011) commenting on our paper (Fountoulakis & Möller, 2010), this emphasis on methodological issues is again central. Matthews comments on the method used (pooling drug and placebo arms separately) and on the use of effect sizes instead of raw HAMD scores.
We argue that the problem does not lie in methodological issues. We performed the re-analysis using simple averaging, weighting by sample size, inverse-variance weighting, and precision-weighted analysis. The differences between the results of these approaches were not significant at all, although the results of the precision-weighted analysis showed some similarity to those reported by Kirsch et al. Horder et al. are right to suggest that the 1.80 difference can be derived with the precision-weighted method, but the effect size obtained with this method is not 0.32, as Kirsch et al. report, but 0.28. So no matter which method is used, it appears that the calculations of Kirsch et al. were flawed. To say that a method is 'unusual' or 'idiosyncratic' does not necessarily imply that it is inappropriate or false. It is assumed that all trial results belong to a single distribution; that is why a meta-analysis is possible even with a random-effects model. In practice, the way one groups and pools arms and trials plays little if any role. That is why, ultimately, the correction to the effect size is minimal: all methods give an effect size of 0.28–0.35 and a difference in raw HAMD change from baseline between placebo and active drug of 1.78–2.93 (Table 1). Even the results after simple averaging do not deviate, probably because the number of RCTs is high enough …
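The pooling approaches named above (simple averaging, sample-size weighting, and inverse-variance weighting) can be sketched as follows. The trial-level numbers here are invented purely for illustration; they are not the Kirsch et al. data, and the functions are a minimal textbook formulation of fixed-effect pooling, not the authors' actual analysis code.

```python
import math

def inverse_variance_pooled(effects, variances):
    """Fixed-effect (inverse-variance) pooled effect size.

    effects: per-trial standardized mean differences (drug vs. placebo)
    variances: per-trial sampling variances of those effect sizes
    Returns the pooled effect and its standard error.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

def sample_size_weighted(effects, ns):
    """Alternative pooling: weight each trial by its total sample size."""
    return sum(n * e for n, e in zip(ns, effects)) / sum(ns)

# Hypothetical trial-level data (illustrative only):
effects = [0.25, 0.35, 0.30, 0.20, 0.40]      # per-trial effect sizes
variances = [0.010, 0.020, 0.015, 0.008, 0.025]  # per-trial variances
ns = [400, 200, 270, 500, 160]                 # per-trial sample sizes

d_iv, se_iv = inverse_variance_pooled(effects, variances)
d_n = sample_size_weighted(effects, ns)
d_avg = sum(effects) / len(effects)  # simple (unweighted) averaging
```

With a reasonably large set of trials drawn from one underlying distribution, all three estimators land close together, which is the point made in the text: the choice of weighting scheme shifts the pooled effect size only marginally.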