Pub Date: 2025-04-11 | eCollection Date: 2025-01-01 | DOI: 10.1353/obs.2025.a956843
Benjamin Recht
This commentary proposes a framework for understanding the role of statistics in policymaking, regulation, and bureaucratic systems. I introduce the concept of "ex ante policy," describing statistical rules and procedures designed before data collection to govern future actions. Through examining examples, particularly clinical trials, I explore how ex ante policy serves as a calculus of bureaucracy, providing numerical foundations for governance through clear, transparent rules. The ex ante frame obviates heated debates about inferential interpretations of probability and statistical tests, p-values, and rituals. I conclude by calling for a deeper appreciation of statistics' bureaucratic function and suggesting new directions for research in policy-oriented statistical methodology.
{"title":"A Bureaucratic Theory of Statistics.","authors":"Benjamin Recht","doi":"10.1353/obs.2025.a956843","DOIUrl":"10.1353/obs.2025.a956843","url":null,"abstract":"<p><p>This commentary proposes a framework for understanding the role of statistics in policymaking, regulation, and bureaucratic systems. I introduce the concept of \"ex ante policy,\" describing statistical rules and procedures designed before data collection to govern future actions. Through examining examples, particularly clinical trials, I explore how ex ante policy serves as a calculus of bureaucracy, providing numerical foundations for governance through clear, transparent rules. The ex ante frame obviates heated debates about inferential interpretations of probability and statistical tests, p-values, and rituals. I conclude by calling for a deeper appreciation of statistics' bureaucratic function and suggesting new directions for research in policy-oriented statistical methodology.</p>","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"11 1","pages":"77-84"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12139714/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144251199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-11 | eCollection Date: 2025-01-01 | DOI: 10.1353/obs.2025.a956841
Arman Oganisian, Antonio Linero
Aronow et al. (2025) provide a convincing case for the special status of randomized controlled trials (RCTs), in which the propensity scores are known and can be used to make causal inferences. Here we provide a Bayesian perspective on their work by summarizing recent developments in the Bayesian literature on the topic. Whether the propensity score should play a role in Bayesian causal inference - and what that role should be - has been a controversial topic for some time. We begin by describing Bayesian inference for population-level estimands and show that under commonly made (but not necessarily required) assumptions, the propensity score model has no role to play in Bayesian causal inference from a purist perspective. We discuss recent work on why these assumptions can be problematic - particularly in high-dimensional models - and discuss several Bayesian motivations for relaxing them. We describe how recent approaches for incorporating the propensity score correspond to different ways of relaxing these assumptions. Given these considerations, we illustrate how a Bayesian might approach the synthetic examples of Aronow et al. (2025).
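A hedged sketch of the factorization argument behind the "no role for the propensity score" claim (notation is mine, not the authors'): write θ for the outcome-model parameters and γ for the propensity-model parameters. If the priors on θ and γ are independent, the joint posterior factorizes and the propensity factor drops out of inference about θ:

```latex
p(\theta, \gamma \mid y, a, x) \propto p(y \mid a, x, \theta)\, p(a \mid x, \gamma)\, p(\theta)\, p(\gamma)
\quad\Longrightarrow\quad
p(\theta \mid y, a, x) \propto p(y \mid a, x, \theta)\, p(\theta).
```

Dependence between the two priors, or estimands defined through the propensity score itself, is what brings the propensity model back in, which is one way to read the relaxations discussed in the abstract above.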
{"title":"Priors and Propensity Scores in Bayesian Causal Inference.","authors":"Arman Oganisian, Antonio Linero","doi":"10.1353/obs.2025.a956841","DOIUrl":"10.1353/obs.2025.a956841","url":null,"abstract":"<p><p>Aronow et al. (2025) provide a convincing case for the special status of randomized controlled trials (RCTs) in which the propensity scores are known and can be used to make causal inferences. Here we provide a Bayesian perspective on their work by summarizing recent developments in the Bayesian literature on the topic. Whether the propensity score should play a role in Bayesian causal inference - and what that role(s) should be - has been a controversial topic for some time. We begin by describing Bayesian inference for population-level estimands and show that under commonly made (but not necessarily required) assumptions, the propensity score model has no role to play in Bayesian causal inference from a purist perspective. We discuss recent work on why these assumptions can be problematic - particularly in high-dimensional models - and discuss several Bayesian motivations for relaxing them. We describe out recent approaches for incorporating the propensity score correspond to di erent ways of relaxing these assumptions. Given these considerations, we illustrate how a Bayesian might approach the synethic examples of Aronow et al. (2025).</p>","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"11 1","pages":"47-60"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12139722/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144251203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-11 | eCollection Date: 2025-01-01 | DOI: 10.1353/obs.2025.a956839
Peng Ding
Aronow et al. (2024) provide a great service to the causal inference community by delineating the key results in Robins and Ritov (1997). They show that randomized controlled trials (RCTs) ensure much stronger statistical inference than unconfounded observational studies even though nonparametric identification is identical in both settings. These results are in sharp contrast to the claim in Pearl and Mackenzie (2018) that RCTs are not the gold standard of causal analysis. Pearl and Mackenzie's (2018) claim is false and misleading for empirical researchers who want to infer causal effects based on data with finite sample sizes. I will further review what randomization can and cannot guarantee more broadly. In particular, I will highlight the value of randomization-based inference in RCTs, the limit of randomization alone for more complicated causal inference questions, and the importance of sensitivity analysis in observational studies.
{"title":"What randomization can and cannot guarantee.","authors":"Peng Ding","doi":"10.1353/obs.2025.a956839","DOIUrl":"10.1353/obs.2025.a956839","url":null,"abstract":"<p><p>Aronow et al. (2024) provide a great service to the causal inference community by delineating the key results in Robins and Ritov (1997). They show that randomized controlled trials (RCTs) ensure much stronger statistical inference than unconfounded observational studies even though nonparametric identification is identical in both settings. These results are in sharp contrast to the claim in Pearl and Mackenzie (2018) that RCTs are not the gold standard of causal analysis. Pearl and Mackenzie's (2018) claim is false and misleading for empirical researchers who want to infer causal effects based on data with finite sample sizes. I will further review what randomization can and cannot guarantee more broadly. In particular, I will highlight the value of randomization-based inference in RCTs, the limit of randomization alone for more complicated causal inference questions, and the importance of sensitivity analysis in observational studies.</p>","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"11 1","pages":"27-40"},"PeriodicalIF":0.0,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12139720/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144251205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-06 | DOI: 10.1353/obs.2024.a929114
Fei Wan, S. Sutcliffe, Jeffrey Zhang, Dylan Small
Abstract: The impact of matching on confounding control in case-control studies remains a subject of ongoing debate, with varying perspectives among researchers. While matching is a well-established method for controlling confounding in cohort studies, its effectiveness in mitigating confounding in case-control studies has long been questioned. Recent studies have determined that matching does not eliminate confounding but instead introduces a selection bias on top of the initial confounding, as indicated by causal diagram analysis. This conclusion suggests that the control of initial confounding through matching is either only partial or non-existent. However, this conclusion may not be accurate for exactly matched designs, because causal diagrams cannot always reveal precisely the interplay between the initial confounding and the matching-induced selection effect. In this paper, we employ analytical results in conjunction with causal diagrams to demonstrate that the cancellation of the initial confounding by the selection effect is complete in exact, individually matched case-control studies. Nevertheless, this cancellation results in a residual selection effect that establishes a backdoor connection between the matching factors and the outcome in the matched design. Failure to adjust for this residual selection effect leads to biased estimates of the exposure effect. Furthermore, this backdoor connection causes matching factors to act like confounding factors in the matched case-control design, which complicates the interpretation of the bias introduced by matching in the current literature.
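A small simulation (mine, not from the paper; all parameter values are arbitrary) that illustrates the phenomenon described in the abstract: in an exactly matched case-control sample, the crude odds ratio that ignores the matching factor is biased toward the null, while a discordant-pair analysis that respects the matching recovers the conditional odds ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta_x = 1.0   # true conditional log-odds ratio of exposure on disease

def expit(v):
    return 1 / (1 + np.exp(-v))

# Source population: binary confounder Z affects both exposure X and disease D.
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, expit(-1.0 + 2.0 * z))
d = rng.binomial(1, expit(-4.0 + beta_x * x + 2.0 * z))

# Case-control sampling: every case plus one control matched exactly on Z.
cases = np.flatnonzero(d == 1)
controls_by_z = {v: list(rng.permutation(np.flatnonzero((d == 0) & (z == v)))) for v in (0, 1)}
matched_controls = np.array([controls_by_z[z[i]].pop() for i in cases])
x_case, x_ctrl = x[cases], x[matched_controls]

def crude_odds_ratio(x_cases, x_controls):
    """2x2 odds ratio that ignores the matching."""
    return (x_cases.mean() / (1 - x_cases.mean())) / (x_controls.mean() / (1 - x_controls.mean()))

# Unadjusted analysis of the matched sample: attenuated toward the null.
print("crude matched OR    :", round(crude_odds_ratio(x_case, x_ctrl), 2))

# Discordant-pair (McNemar-type) analysis that accounts for the matching.
n10 = np.sum((x_case == 1) & (x_ctrl == 0))
n01 = np.sum((x_case == 0) & (x_ctrl == 1))
print("matched-pair OR     :", round(n10 / n01, 2))
print("true conditional OR :", round(np.exp(beta_x), 2))
```

With these (made-up) parameters the crude odds ratio should come out noticeably below exp(1) ≈ 2.7, while the within-pair estimate sits close to it, which is the residual selection effect of the matching factors acting like confounders.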
{"title":"Does matching introduce confounding or selection bias into the matched case-control design?","authors":"Fei Wan, S. Sutcliffe, Jeffrey Zhang, Dylan Small","doi":"10.1353/obs.2024.a929114","DOIUrl":"https://doi.org/10.1353/obs.2024.a929114","url":null,"abstract":"Abstract:The impact of matching on confounding control in case-control studies remains a subject of ongoing debate, with varying perspectives among researchers. While matching is a well-established method for controlling confounding in cohort studies, its effectiveness in mitigating confounding in case-control studies has long been questioned. Recent studies have determined that matching doesn't eliminate confounding but, instead, introduces a selection bias on top of the initial confounding, as indicated by causal diagram analysis. This conclusion suggests that the control of initial confounding through matching is either only partial or non-existent. However, this conclusion may not be accurate in exactly matched design because causal diagram cannot always reveal precisely the interplay between the initial confounding and the matching induced selection effect. In this paper, we employ analytical results in conjunction with causal diagrams to demonstrate that the cancellation of the initial confounding by the selection effect is complete in exact individually matched case-control studies. Nevertheless, this cancellation results in a residual selection effect that establishes a backdoor connection between the matching factors and the outcome in the matched design. Failure to adjust for this residual selection effect leads to biased estimates of the exposure effect. Furthermore, this backdoor connection causes matching factors to act like confounding factors in the matched case-control design, which complicates the interpretation of the bias introduced by matching in current literature.","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"321 4","pages":"1 - 9"},"PeriodicalIF":0.0,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141381359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-07 | DOI: 10.1353/obs.2023.a906626
David Rea, Dean R. Hyslop
Abstract: This paper describes a difference-in-difference control trial (DDCT) of an intervention designed to increase the take-up of an income support payment in the New Zealand welfare system. The intervention used a microsimulation model to identify potential claimants who were then contacted by either phone, email, or letter. The trial was designed as a DDCT because of ethical concerns associated with a fully randomized approach. The trial provided convincing evidence that the intervention would increase the take-up of the payment and a modified version was then implemented as an ongoing business process by the New Zealand Ministry of Social Development (MSD). The findings from the trial contribute to the literature about how best to increase the take-up of welfare payments. The study also demonstrates the value of using a difference-in-difference control trial.
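A minimal numerical sketch of the difference-in-difference comparison such a trial relies on (all numbers and variable names are hypothetical, not taken from the MSD trial): take-up is measured before and after the contact intervention in both the contacted group and a comparison group, and the impact estimate is the difference of the two pre-post changes.

```python
import numpy as np

rng = np.random.default_rng(1)

def take_up(n, rate):
    """Simulate binary take-up of the payment at a given underlying rate."""
    return rng.binomial(1, rate, n)

# Hypothetical groups: both share an upward time trend in take-up;
# only the contacted (treated) group gets an extra intervention effect.
pre_treated  = take_up(4_000, 0.10)
post_treated = take_up(4_000, 0.14 + 0.07)   # trend + intervention effect
pre_control  = take_up(4_000, 0.12)
post_control = take_up(4_000, 0.16)          # trend only

did = (post_treated.mean() - pre_treated.mean()) - (post_control.mean() - pre_control.mean())
print(f"difference-in-difference estimate of the take-up effect: {did:.3f}")
```

The same point estimate can be obtained from a regression of take-up on group, period, and their interaction; the sketch above just makes the two subtractions explicit.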
{"title":"Using a difference-in-difference control trial to test an intervention aimed at increasing the take-up of a welfare payment in New Zealand","authors":"David Rea, Dean R. Hyslop","doi":"10.1353/obs.2023.a906626","DOIUrl":"https://doi.org/10.1353/obs.2023.a906626","url":null,"abstract":"Abstract:This paper describes a difference-in-difference control trial (DDCT) of an intervention designed to increase the take-up of an income support payment in the New Zealand welfare system. The intervention used a microsimulation model to identify potential claimants who were then contacted by either phone, email, or letter. The trial was designed as a DDCT because of ethical concerns associated with a fully randomized approach. The trial provided convincing evidence that the intervention would increase the take-up of the payment and a modified version was then implemented as an ongoing business process by the New Zealand Ministry of Social Development (MSD). The findings from the trial contribute to the literature about how best to increase the take-up of welfare payments. The study also demonstrates the value of using a difference-in-difference control trial.","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"9 1","pages":"49 - 72"},"PeriodicalIF":0.0,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46729290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-07 | DOI: 10.1353/obs.2023.a906628
David Watson
Abstract: Healthcare-associated infections are serious adverse events that occur during a hospital admission. Quantifying the impact of these infections on inpatient length of stay and cost has important policy implications due to the Hospital-Acquired Conditions Reduction Program in the United States. However, most studies on this topic are flawed because they do not account for when a healthcare-associated infection occurred during a hospital admission. Such an approach leads to selection bias because patients with longer hospital stays are more likely to experience an infection due to their increased exposure time. Time of infection is often not incorporated into the estimation strategy because this information is unknown, yet there are no methods that account for the selection bias in this scenario. To address this problem, we propose a sensitivity analysis for matched pairs designs for assessing the effect of healthcare-associated infections on length of stay and cost when time of infection is unknown. The approach models the probability of infection, or the assignment mechanism, as proportional to a power function of the uninfected length of stay, where the sensitivity parameter is the value of the power. The general idea is to incorporate the degree of exposure into the probability of an infection occurring. Under this size-biased assignment mechanism, we develop hypothesis tests under a sharp null hypothesis of constant multiplicative effects. The approach is demonstrated on a pediatric cohort of inpatient encounters and compared to benchmark estimates that properly account for time of infection. The results reaffirm the severe degree of bias when not accounting for time of infection and also show that the proposed sensitivity analysis captures the benchmark estimates for plausible and theoretically justified values of the sensitivity parameter.
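The following is my own rough sketch, not the paper's procedure, of how the ingredients named in the abstract could be combined for one value of the sensitivity parameter: a sharp null of a constant multiplicative effect tau on length of stay, a within-pair infection probability proportional to the uninfected-scale length of stay raised to the power gamma, and a Monte Carlo randomization p-value.

```python
import numpy as np

rng = np.random.default_rng(2)

def size_biased_pvalue(los_infected, los_uninfected, tau, gamma, n_sim=5_000):
    """One-sided Monte Carlo p-value for the sharp null that infection multiplies
    length of stay (LOS) by `tau`, when, within each matched pair, the probability
    of being the infected member is proportional to (uninfected-scale LOS) ** gamma.
    gamma = 0 recovers ordinary equal-probability assignment within pairs."""
    # Uninfected-scale LOS of both pair members implied by the sharp null.
    base_infected = los_infected / tau
    base_control = los_uninfected
    p_first = base_infected ** gamma / (base_infected ** gamma + base_control ** gamma)

    # Observed statistic: mean within-pair log LOS ratio (infected over uninfected).
    t_obs = np.mean(np.log(los_infected / los_uninfected))

    # Null distribution: redraw which member is infected, size-biased by gamma.
    draws = rng.random((n_sim, len(los_infected))) < p_first
    ratio_if_first = (tau * base_infected) / base_control    # original member infected
    ratio_if_second = (tau * base_control) / base_infected   # other member infected
    t_null = np.mean(np.where(draws, np.log(ratio_if_first), np.log(ratio_if_second)), axis=1)
    return np.mean(t_null >= t_obs)

# Hypothetical matched pairs (made-up values, not the paper's pediatric cohort).
los_uninf = rng.gamma(shape=2.0, scale=2.0, size=300) + 1.0
los_inf = 1.5 * (rng.gamma(shape=2.0, scale=2.0, size=300) + 1.0)   # longer stays when infected
for gamma in (0.0, 1.0, 2.0):
    # Larger gamma means stronger size-biased selection, which typically inflates the p-value.
    print(f"gamma={gamma}: p = {size_biased_pvalue(los_inf, los_uninf, tau=1.0, gamma=gamma):.3f}")
```

The paper's actual tests, benchmark comparisons, and choice of test statistic may differ; this sketch only shows how a size-biased assignment probability enters a matched-pairs randomization test.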
{"title":"Size-biased sensitivity analysis for matched pairs design to assess the impact of healthcare-associated infections","authors":"David Watson","doi":"10.1353/obs.2023.a906628","DOIUrl":"https://doi.org/10.1353/obs.2023.a906628","url":null,"abstract":"Abstract:Healthcare-associated infections are serious adverse events that occur during a hospital admission. Quantifying the impact of these infections on inpatient length of stay and cost has important policy implications due to the Hospital-Acquired Conditions Reduction Program in the United States. However, most studies on this topic are flawed because they do not account for when a healthcare-associated infection occurred during a hospital admission. Such an approach leads to selection bias because patients with longer hospital stays are more likely to experience an infection due to their increased exposure time. Time of infection is often not incorporated into the estimation strategy because this information is unknown, yet there are no methods that account for the selection bias in this scenario. To address this problem, we propose a sensitivity analysis for matched pairs designs for assessing the effect of healthcare-associated infections on length of stay and cost when time of infection is unknown. The approach models the probability of infection, or the assignment mechanism, as proportional to a power function of the uninfected length of stay, where the sensitivity parameter is the value of the power. The general idea is to incorporate the degree of exposure into the probability of an infection occurring. Under this size-biased assignment mechanism, we develop hypothesis tests under a sharp null hypothesis of constant multiplicative effects. The approach is demonstrated on a pediatric cohort of inpatient encounters and compared to benchmark estimates that properly account for time of infection. The results reaffirm the severe degree of bias when not accounting for time of infection and also show that the proposed sensitivity analysis captures the benchmark estimates for plausible and theoretically justified values of the sensitivity parameter.","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"9 1","pages":"1 - 24"},"PeriodicalIF":0.0,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42324694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-07 | DOI: 10.1353/obs.2023.a906624
Luke Keele, Matthew Lenard, Luke Miratrix, Lindsay Page
Abstract: Many interventions occur in settings where treatments are applied to groups. For example, a math intervention may be implemented for all students in some schools and withheld from students in other schools. When such treatments are non-randomly allocated, researchers can use statistical adjustment to make treated and control groups similar in terms of observed characteristics. Recent work in statistics has developed a form of matching, known as multilevel matching, that is designed for contexts where treatments are clustered. In this article, we provide a tutorial on how to analyze clustered treatment using multilevel matching. We use a real data application to explain the full set of steps for the analysis of a clustered observational study.
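A stripped-down illustration of the cluster-level step in this kind of design (this is not the tutorial's software or data; school covariates, sample sizes, and the distance choice are all assumptions): treated schools are paired to control schools by minimizing a distance computed on standardized school-level aggregates.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)

# Hypothetical school-level aggregates: each row is a school, columns are
# cluster-level covariates (e.g., mean prior score, enrollment).
treated_schools = rng.normal(loc=[0.2, 500], scale=[1.0, 150], size=(10, 2))
control_schools = rng.normal(loc=[0.0, 480], scale=[1.0, 150], size=(40, 2))

# Standardize on the pooled scale so both covariates contribute comparably.
pooled = np.vstack([treated_schools, control_schools])
mu, sd = pooled.mean(axis=0), pooled.std(axis=0)
t_std = (treated_schools - mu) / sd
c_std = (control_schools - mu) / sd

# Distance between every treated and control school, then optimal 1:1 pairing
# (Hungarian algorithm) that minimizes the total distance.
dist = np.linalg.norm(t_std[:, None, :] - c_std[None, :, :], axis=2)
rows, cols = linear_sum_assignment(dist)
for t_idx, c_idx in zip(rows, cols):
    print(f"treated school {t_idx} matched to control school {c_idx} (distance {dist[t_idx, c_idx]:.2f})")
```

A full multilevel-matching analysis would also balance or match students within the paired schools and then compare outcomes across pairs; the tutorial walks through those additional steps with dedicated software.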
{"title":"A Software Tutorial for Matching in Clustered Observational Studies","authors":"Luke Keele, Matthew Lenard, Luke Miratrix, Lindsay Page","doi":"10.1353/obs.2023.a906624","DOIUrl":"https://doi.org/10.1353/obs.2023.a906624","url":null,"abstract":"Abstract:Many interventions occur in settings where treatments are applied to groups. For example, a math intervention may be implemented for all students in some schools and withheld from students in other schools. When such treatments are non-randomly allocated, researchers can use statistical adjustment to make treated and control groups similar in terms of observed characteristics. Recent work in statistics has developed a form of matching, known as multilevel matching, that is designed for contexts where treatments are clustered. In this article, we provide a tutorial on how to analyze clustered treatment using multilevel matching. We use a real data application to explain the full set of steps for the analysis of a clustered observational study.","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"9 1","pages":"73 - 96"},"PeriodicalIF":0.0,"publicationDate":"2023-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45559753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: Some causal parameters are defined on subgroups of the observed data, such as the average treatment effect on the treated and variations thereof. We explain how such parameters can be defined through parameters in a marginal structural (working) model. We illustrate how existing software can be used for doubly robust effect estimation of those parameters. Our proposal for confidence interval estimation is based on the delta method. All concepts are illustrated by estimands and data from the data challenge of the 2022 American Causal Inference Conference.
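A compact sketch (mine, not the authors' code or their marginal-structural-model workflow) of one standard doubly robust construction for the average treatment effect on the treated (ATT): an outcome regression fit on controls combined with propensity-score odds weighting. The standard error is a simple influence-function approximation that ignores uncertainty from estimating the nuisance models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(4)

# Purely illustrative confounded data; the true ATT is 2 by construction.
n = 5_000
x = rng.normal(size=(n, 3))
a = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1]))))
y = x @ np.array([1.0, 0.5, -0.5]) + 2.0 * a + rng.normal(size=n)

# Nuisance models: propensity score and control-arm outcome regression.
e_hat = LogisticRegression(max_iter=1_000).fit(x, a).predict_proba(x)[:, 1]
mu0_hat = LinearRegression().fit(x[a == 0], y[a == 0]).predict(x)

# Doubly robust ATT: treated residuals minus odds-weighted control residuals.
p1 = a.mean()
odds = e_hat / (1 - e_hat)
att = np.mean(a * (y - mu0_hat) - (1 - a) * odds * (y - mu0_hat)) / p1

# Influence-function standard error (treats the nuisance fits as known).
if_vals = (a * (y - mu0_hat - att) - (1 - a) * odds * (y - mu0_hat)) / p1
se = if_vals.std(ddof=1) / np.sqrt(n)
print(f"ATT estimate: {att:.2f} (95% CI {att - 1.96 * se:.2f}, {att + 1.96 * se:.2f})")
```

The paper itself frames the estimand through a marginal structural working model and uses the delta method for intervals; the sketch above only shows the generic doubly robust mechanics for the ATT.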
{"title":"Doubly Robust Estimation of Average Treatment Effects on the Treated through Marginal Structural Models","authors":"M. Schomaker, Philipp F. M. Baumann","doi":"10.1353/obs.2023.0025","DOIUrl":"https://doi.org/10.1353/obs.2023.0025","url":null,"abstract":"Abstract:Some causal parameters are defined on subgroups of the observed data, such as the average treatment effect on the treated and variations thereof. We explain how such parameters can be defined through parameters in a marginal structural (working) model. We illustrate how existing software can be used for doubly robust effect estimation of those parameters. Our proposal for confidence interval estimation is based on the delta method. All concepts are illustrated by estimands and data from the data challenge of the 2022 American Causal Inference Conference.","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"9 1","pages":"43 - 57"},"PeriodicalIF":0.0,"publicationDate":"2023-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41487639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: Introducing novel causal estimators usually involves simulation studies run by the statistician developing the estimator, but this traditional approach can be fraught: simulation design is often favorable to the new method, unfavorable results might never be published, and comparison across estimators is difficult. The American Causal Inference Conference (ACIC) data challenges offer an alternative. As organizers of the 2022 challenge, we generated thousands of data sets similar to real-world policy evaluations and baked in true causal impacts unknown to participants. Participating teams then competed on an even playing field, using their cutting-edge methods to estimate those effects. In total, 20 teams submitted results from 58 estimators that used a range of approaches. We found several important factors driving performance that are not commonly used in business-as-usual applied policy evaluations, pointing to ways future evaluations could achieve more precise and nuanced estimates of policy impacts. Top-performing methods used flexible modeling of outcome-covariate and outcome-participation relationships as well as regularization of subgroup estimates. Furthermore, we found that model-based uncertainty intervals tended to outperform bootstrap-based ones. Lastly, and counter to our expectations, we found that analyzing large-n patient-level data does not improve performance relative to analyzing smaller-n data aggregated to the primary care practice level, given that in our simulated data sets practices (not individual patients) decided whether to join the intervention. Ultimately, we hope this competition helped identify methods that are best suited for evaluating which social policies move the needle for the individuals and communities they serve.
{"title":"Causal Methods Madness: Lessons Learned from the 2022 ACIC Competition to Estimate Health Policy Impacts","authors":"Daniel Thal, M. Finucane","doi":"10.1353/obs.2023.0023","DOIUrl":"https://doi.org/10.1353/obs.2023.0023","url":null,"abstract":"Abstract:Introducing novel causal estimators usually involves simulation studies run by the statistician developing the estimator, but this traditional approach can be fraught: simulation design is often favorable to the new method, unfavorable results might never be published, and comparison across estimators is difficult. The American Causal Inference Conference (ACIC) data challenges offer an alternative. As organizers of the 2022 challenge, we generated thousands of data sets similar to real-world policy evaluations and baked in true causal impacts unknown to participants. Participating teams then competed on an even playing field, using their cutting-edge methods to estimate those effects. In total, 20 teams submitted results from 58 estimators that used a range of approaches. We found several important factors driving performance that are not commonly used in business-as-usual applied policy evaluations, pointing to ways future evaluations could achieve more precise and nuanced estimates of policy impacts. Top-performing methods used flexible modeling of outcome-covariate and outcome-participation relationships as well as regularization of subgroup estimates. Furthermore, we found that model-based uncertainty intervals tended to outperform bootstrap-based ones. Lastly, and counter to our expectations, we found that analyzing large-n patient-level data does not improve performance relative to analyzing smaller-n data aggregated to the primary care practice level, given that in our simulated data sets practices (not individual patients) decided whether to join the intervention. Ultimately, we hope this competition helped identify methods that are best suited for evaluating which social policies move the needle for the individuals and communities they serve.","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"9 1","pages":"27 - 3"},"PeriodicalIF":0.0,"publicationDate":"2023-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44338192","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: We applied propensity score weighted regression and double machine learning in the 2022 American Causal Inference Conference Data Challenge. Our double machine learning method achieved the second lowest overall RMSE among all official submissions, but performed less well on heterogeneous treatment effect estimation due to lack of regularization.
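A brief sketch of the double machine learning idea mentioned above (generic partialling-out with cross-fitting, not the team's competition pipeline; the data-generating process and learners are my own choices): residualize both the outcome and the treatment on covariates with cross-fitted learners, then regress the outcome residuals on the treatment residuals.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(5)

# Illustrative confounded data (not the ACIC challenge data); true effect = 1.5.
n = 4_000
x = rng.normal(size=(n, 5))
a = rng.binomial(1, 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1]))))
y = np.sin(x[:, 0]) + x[:, 1] ** 2 + 1.5 * a + rng.normal(size=n)

# Cross-fitted residuals: each fold's nuisance predictions come from models
# trained on the other folds, which guards against overfitting bias.
y_res, a_res = np.empty(n), np.empty(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(x):
    m_y = GradientBoostingRegressor().fit(x[train], y[train])
    m_a = GradientBoostingClassifier().fit(x[train], a[train])
    y_res[test] = y[test] - m_y.predict(x[test])
    a_res[test] = a[test] - m_a.predict_proba(x[test])[:, 1]

# Final stage: no-intercept regression of outcome residuals on treatment residuals.
theta = (a_res @ y_res) / (a_res @ a_res)
print(f"DML estimate of the average treatment effect: {theta:.2f}")
```

This partialling-out form targets a single overall effect; as the abstract notes, estimating heterogeneous effects well typically requires additional structure or regularization on top of this basic recipe.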
{"title":"Estimating Treatment Effect with Propensity Score Weighted Regression and Double Machine Learning","authors":"Jun Xue, Wei Zhong Goh, Dana Rotz","doi":"10.1353/obs.2023.0028","DOIUrl":"https://doi.org/10.1353/obs.2023.0028","url":null,"abstract":"Abstract:We applied propensity score weighted regression and double machine learning in the 2022 American Causal Inference Conference Data Challenge. Our double machine learning method achieved the second lowest overall RMSE among all official submissions, but performed less well on heterogeneous treatment effect estimation due to lack of regularization.","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"10 6","pages":"83 - 90"},"PeriodicalIF":0.0,"publicationDate":"2023-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41291815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}