{"title":"Adapting power calculations to include a superiority margin: what are the implications?","authors":"Samuel Bishara","doi":"10.11613/BM.2024.010101","DOIUrl":null,"url":null,"abstract":"<p><p>This paper examines the application of super-superiority margins in study power calculations. Unlike traditional power calculations, which primarily aim to reject the null hypothesis by any margin, a super-superiority margin establishes a clinically significant threshold. Despite potential benefits, this approach, akin to a non-inferiority calculation but in an opposing direction, is rarely used. Implementing a super-superiority margin separates the notion of the likely difference between two groups (the effect size) from the minimum clinically significant difference, without which inconsistent positions could be held. However, these are often used interchangeably. In an audit of 30 recent randomized controlled trial power calculations, four studies utilized the minimal acceptable difference, and nine utilized the expected difference. In the other studies, this was unclarified. In the <i>post hoc</i> scenario, this approach can shed light on the value of undertaking further studies, which is not apparent from the standard power calculation. The acceptance and rejection of the alternate hypothesis for super-superiority, non-inferiority, equivalence, and standard superiority studies have been compared. When a fixed minimal acceptable difference is applied, a study result will be in one of seven logical positions with regards to the simultaneous application of these hypotheses. The trend for increased trial size and the mirror approach of non-inferiority studies implies that newer interventions may be becoming less effective. Powering for superiority could counter this and ensure that a pre-trial evaluation of clinical significance has taken place, which is necessary to confirm that interventions are beneficial.</p>","PeriodicalId":94370,"journal":{"name":"Biochemia medica","volume":"34 1","pages":"010101"},"PeriodicalIF":0.0000,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10864028/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biochemia medica","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.11613/BM.2024.010101","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
This paper examines the application of super-superiority margins in study power calculations. Unlike traditional power calculations, which aim only to reject the null hypothesis of no difference by any margin, a super-superiority margin establishes a clinically significant threshold that the treatment effect must exceed. Despite its potential benefits, this approach, akin to a non-inferiority calculation but in the opposing direction, is rarely used. Implementing a super-superiority margin separates the notion of the likely difference between two groups (the effect size) from the minimum clinically significant difference; without this separation, inconsistent positions can be held, yet the two quantities are often used interchangeably. In an audit of 30 recent randomized controlled trial power calculations, four studies used the minimal acceptable difference and nine used the expected difference; in the remaining studies, this was not clarified. In the post hoc scenario, this approach can shed light on the value of undertaking further studies, which is not apparent from a standard power calculation. The acceptance and rejection of the alternative hypothesis for super-superiority, non-inferiority, equivalence, and standard superiority studies are compared. When a fixed minimal acceptable difference is applied, a study result falls into one of seven logical positions with regard to the simultaneous application of these hypotheses. The trend toward larger trial sizes and the mirror-image approach of non-inferiority studies imply that newer interventions may be becoming less effective. Powering for superiority could counter this and would ensure that a pre-trial evaluation of clinical significance has taken place, which is necessary to confirm that interventions are beneficial.
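To make the distinction concrete, the sketch below shows a normal-approximation per-arm sample-size calculation in which the null hypothesis is shifted by a superiority margin rather than tested against zero. It is a minimal illustration under assumed conventions (one-sided alpha of 0.025, 90% power, a two-sample comparison of means with known common standard deviation), not a method taken from the paper; the function name and the numbers in the usage lines are hypothetical.

```python
from math import ceil
from statistics import NormalDist


def n_per_arm(expected_diff, margin, sd, alpha=0.025, power=0.9):
    """Approximate per-arm sample size for a two-sample comparison of means,
    powered against a superiority margin.

    H0: mu_treatment - mu_control <= margin
    H1: mu_treatment - mu_control >  margin

    margin = 0 gives the standard superiority calculation; a positive margin
    is the super-superiority threshold; a negative margin corresponds to the
    usual non-inferiority set-up.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # one-sided significance level
    z_beta = NormalDist().inv_cdf(power)
    # Normal-approximation formula: the expected effect is measured relative
    # to the margin, not relative to zero.
    return ceil(2 * ((z_alpha + z_beta) * sd / (expected_diff - margin)) ** 2)


# Hypothetical numbers: expected difference of 10 units, SD of 20.
print(n_per_arm(expected_diff=10, margin=0, sd=20))  # standard superiority: ~85 per arm
print(n_per_arm(expected_diff=10, margin=4, sd=20))  # superiority margin of 4: ~234 per arm
```

Setting the margin to zero recovers the conventional calculation; in this illustrative example, a margin of 4 against an expected difference of 10 roughly triples the required per-arm sample size.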