The use of randomised controlled trials (RCTs) has become widespread in economics and other social sciences, and is likely to grow further as new digital tools and increasingly rich data facilitate the design and implementation of experiments. When rigorously designed and implemented, RCTs have the potential to offer the most reliable empirical evidence on the causal impact of an intervention. But for that potential to be realised, RCTs need to be sufficiently powered to detect a meaningful effect, or to say confidently that the effect is negligible. This symposium offers practical insights for researchers on designing more powerful experiments and computing the required sample size, accompanied by tools that researchers can use in designing their own RCTs.
The first paper, by David McKenzie, discusses how to improve power at each stage of an RCT – design, implementation and analysis. While increasing sample size is the default option, McKenzie offers guidance on the many other options available to researchers and on why they work. The second paper, by Brendon McConnell and Marcos Vera-Hernández, dives into the details of implementing sample size calculations for different randomisation designs, and offers the formulae, tools and computer code needed to carry them out in practice. The final paper, by Brandon Hauser and Mauricio Olivares, studies hypothesis testing in randomised experiments and its consequences for sample size calculations. The paper shows how small deviations from standard assumptions can invalidate conventional randomisation-based inference, and provides useful results and guidance on adapting power analysis so that calculations remain valid.
{"title":"A symposium on power in experiments – new practical insights and tools: preface","authors":"Monica Costa Dias, Marcos Vera-Hernández","doi":"10.1111/1475-5890.70007","DOIUrl":"https://doi.org/10.1111/1475-5890.70007","url":null,"abstract":"<p>The use of randomised control trials (RCTs) has become widespread in economics and other social sciences, and is likely to grow further as new digital tools and increasingly rich data facilitate the design and implementation of experiments. When rigorously designed and implemented, they have the potential to offer the most reliable empirical evidence on the causal impact of an intervention. But for that potential to be realised, RCTs need to be sufficiently powered to detect a meaningful effect, or to say confidently that the effect is negligible. This symposium offers practical insights for researchers on designing more powerful experiments and computing the required sample size, accompanied by tools that researchers can use in designing their own RCTs.</p><p>The first paper, by David McKenzie, discusses how to improve power at each stage of an RCT – design, implementation and analysis. While increasing sample size is the default option, McKenzie offers guidance on many other options available to researchers and why they work. The second paper, by Brendon McConnell and Marcos Vera-Hernández, dives into detailed aspects of implementing sample size calculations for different randomisation designs, and offers the formulae, tools and computer code necessary to implement them in practice. The final paper, by Brandon Hauser and Mauricio Olivares, studies hypothesis testing in randomised experiments, and its consequences for sample size calculations. The paper shows how small deviations from the most standard assumptions invalidate standard randomisation-based inference, and provides useful results and guidance for how to adapt power analysis to ensure that calculations remain valid.</p>","PeriodicalId":51602,"journal":{"name":"Fiscal Studies","volume":"46 3","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/1475-5890.70007","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Basic methods to compute required sample sizes are well understood and supported by widely available software. However, researchers often oversimplify their sample size calculations, overlooking relevant features of their experimental design. This paper compiles and systematises existing methods for sample size calculations for continuous and binary outcomes, both with and without covariates, and for both clustered and non-clustered randomised controlled trials. We present formulae accommodating panel data structures and uneven designs, and provide guidance on optimally allocating sample size between the number of clusters and the number of units per cluster. In addition, we discuss how to adjust calculations for multiple hypothesis testing and how to estimate power in more complex designs using simulation methods.
{"title":"Going beyond simple sample size calculations: a practitioner's guide","authors":"Brendon McConnell, Marcos Vera-Hernández","doi":"10.1111/1475-5890.70005","DOIUrl":"https://doi.org/10.1111/1475-5890.70005","url":null,"abstract":"<p>Basic methods to compute required sample sizes are well understood and supported by widely available software. However, researchers often oversimplify their sample size calculations, overlooking relevant features of their experimental design. This paper compiles and systematises existing methods for sample size calculations for continuous and binary outcomes, both with and without covariates, and for both clustered and non-clustered randomised controlled trials. We present formulae accommodating panel data structures and uneven designs, and provide guidance on optimally allocating sample size between the number of clusters and the number of units per cluster. In addition, we discuss how to adjust calculations for multiple hypothesis testing and how to estimate power in more complex designs using simulation methods.</p>","PeriodicalId":51602,"journal":{"name":"Fiscal Studies","volume":"46 3","pages":"323-348"},"PeriodicalIF":1.3,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/1475-5890.70005","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145122627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"经济学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper revisits the problem of power analysis and sample size calculations in randomised experiments, with a focus on settings where inference on average treatment effects is conducted using randomisation tests. While standard formulas based on the two-sample