Abstract Mediation analysis is popular in examining the extent to which the effect of an exposure on an outcome operates through an intermediate variable. When the exposure is subject to misclassification, the estimated effects can be severely biased. In this paper, for a binary mediator, we first study the bias in traditional direct and indirect effect estimates in the presence of conditional non-differential misclassification of a binary exposure. We show that in the absence of interaction, misclassification of the exposure biases the direct effect towards the null but can bias the indirect effect in either direction. We then develop an EM algorithm approach to correcting for the misclassification, and conduct simulation studies to assess the performance of the correction approach. Finally, we apply the approach to National Center for Health Statistics birth certificate data to study the effect of smoking status on preterm birth mediated through pre-eclampsia.
{"title":"Causal Mediation Analysis in the Presence of a Misclassified Binary Exposure","authors":"Zhichao Jiang, T. VanderWeele","doi":"10.1515/em-2016-0006","DOIUrl":"https://doi.org/10.1515/em-2016-0006","url":null,"abstract":"Abstract Mediation analysis is popular in examining the extent to which the effect of an exposure on an outcome is through an intermediate variable. When the exposure is subject to misclassification, the effects estimated can be severely biased. In this paper, when the mediator is binary, we first study the bias on traditional direct and indirect effect estimates in the presence of conditional non-differential misclassification of a binary exposure. We show that in the absence of interaction, the misclassification of the exposure will bias the direct effect towards the null but can bias the indirect effect in either direction. We then develop an EM algorithm approach to correcting for the misclassification, and conduct simulation studies to assess the performance of the correction approach. Finally, we apply the approach to National Center for Health Statistics birth certificate data to study the effect of smoking status on the preterm birth mediated through pre-eclampsia.","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89314487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract When studying the causal effect of x on y, researchers may conduct regression and report a confidence interval for the slope coefficient $\beta_x$. This common confidence interval provides an assessment of uncertainty from sampling error, but it does not assess uncertainty from confounding. An intervention on x may produce a response in y that is unexpected, and our misinterpretation of the slope happens when there are confounding factors w. When w are measured we may conduct multiple regression, but when w are unmeasured it is common practice to include a precautionary statement when reporting the confidence interval, warning against unwarranted causal interpretation. If the goal is robust causal interpretation, then we can do something more informative. Uncertainty in the specification of three confounding parameters can be propagated through an equation to produce a confounding interval. Here, we develop supporting mathematical theory and describe an example application. Our proposed methodology applies well to studies of a continuous response or rare outcome. It is a general method for quantifying error from model uncertainty. Whereas confidence intervals are used to assess uncertainty from unmeasured individuals, confounding intervals can be used to assess uncertainty from unmeasured attributes.
{"title":"Regression analysis of unmeasured confounding","authors":"B. Knaeble, B. Osting, M. Abramson","doi":"10.1515/em-2019-0028","DOIUrl":"https://doi.org/10.1515/em-2019-0028","url":null,"abstract":"Abstract When studying the causal effect of x on y, researchers may conduct regression and report a confidence interval for the slope coefficient β x ${beta }_{x}$ . This common confidence interval provides an assessment of uncertainty from sampling error, but it does not assess uncertainty from confounding. An intervention on x may produce a response in y that is unexpected, and our misinterpretation of the slope happens when there are confounding factors w. When w are measured we may conduct multiple regression, but when w are unmeasured it is common practice to include a precautionary statement when reporting the confidence interval, warning against unwarranted causal interpretation. If the goal is robust causal interpretation then we can do something more informative. Uncertainty, in the specification of three confounding parameters can be propagated through an equation to produce a confounding interval. Here, we develop supporting mathematical theory and describe an example application. Our proposed methodology applies well to studies of a continuous response or rare outcome. It is a general method for quantifying error from model uncertainty. Whereas, confidence intervals are used to assess uncertainty from unmeasured individuals, confounding intervals can be used to assess uncertainty from unmeasured attributes.","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81842128","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Instrumental variables is a popular method in epidemiology and related fields for estimating causal effects in the presence of unmeasured confounding. Traditionally, instrumental variable analyses have been confined to linear models, in which the causal parameter of interest is typically estimated with two-stage least squares. Recently, the methodology has been extended in several directions, including two-stage estimation and so-called G-estimation in nonlinear (e.g. logistic and Cox proportional hazards) models. This paper presents a new R package, ivtools, which implements many of these new instrumental variable methods. We briefly review the theory of two-stage estimation and G-estimation, and illustrate the functionality of the ivtools package by analyzing publicly available data from a cohort study on vitamin D and mortality.
{"title":"Instrumental Variable Estimation with the R Package ivtools","authors":"Arvid Sjolander, T. Martinussen","doi":"10.1515/EM-2018-0024","DOIUrl":"https://doi.org/10.1515/EM-2018-0024","url":null,"abstract":"Abstract Instrumental variables is a popular method in epidemiology and related fields, to estimate causal effects in the presence of unmeasured confounding. Traditionally, instrumental variable analyses have been confined to linear models, in which the causal parameter of interest is typically estimated with two-stage least squares. Recently, the methodology has been extended in several directions, including two-stage estimation and so-called G-estimation in nonlinear (e. g. logistic and Cox proportional hazards) models. This paper presents a new R package, ivtools, which implements many of these new instrumental variable methods. We briefly review the theory of two-stage estimation and G-estimation, and illustrate the functionality of the ivtools package by analyzing publicly available data from a cohort study on vitamin D and mortality.","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"68 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81261198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Marginal structural models (MSMs) with inverse probability weighting (IPW) are used to estimate causal effects of time-varying treatments, but they can exhibit erratic finite-sample performance when there is low overlap in covariate distributions across different treatment patterns. Modifications to IPW that target the average treatment effect (ATE) estimand either introduce bias or rely on unverifiable parametric assumptions and extrapolation. This paper extends an alternative estimand, the ATE on the overlap population (ATO), which is estimated on a sub-population with a reasonable probability of receiving alternative treatment patterns, to time-varying treatment settings. To estimate the ATO within an MSM framework, this paper extends a stochastic pruning method based on the posterior predictive treatment assignment (PPTA) (Zigler, C. M., and M. Cefalu. 2017. "Posterior Predictive Treatment Assignment for Estimating Causal Effects with Limited Overlap." eprint arXiv:1710.08749.) as well as a weighting analog (Li, F., K. L. Morgan, and A. M. Zaslavsky. 2018. "Balancing Covariates via Propensity Score Weighting." Journal of the American Statistical Association 113: 390–400, https://doi.org/10.1080/01621459.2016.1260466.) to the time-varying treatment setting. Simulations demonstrate the performance of these extensions compared against IPW and stabilized weighting with regard to bias, efficiency, and coverage. Finally, an analysis using these methods is performed on Medicare beneficiaries residing across 18,480 ZIP codes in the U.S. to evaluate the effect of exposure to coal-fired power plant emissions on ischemic heart disease (IHD) hospitalization, accounting for seasonal patterns that lead to changes in treatment over time.
{"title":"Posterior predictive treatment assignment methods for causal inference in the context of time-varying treatments","authors":"Shirley X Liao, Lucas R. F. Henneman, C. Zigler","doi":"10.1515/em-2019-0024","DOIUrl":"https://doi.org/10.1515/em-2019-0024","url":null,"abstract":"Abstract Marginal structural models (MSM) with inverse probability weighting (IPW) are used to estimate causal effects of time-varying treatments, but can result in erratic finite-sample performance when there is low overlap in covariate distributions across different treatment patterns. Modifications to IPW which target the average treatment effect (ATE) estimand either introduce bias or rely on unverifiable parametric assumptions and extrapolation. This paper extends an alternate estimand, the ATE on the overlap population (ATO) which is estimated on a sub-population with a reasonable probability of receiving alternate treatment patterns in time-varying treatment settings. To estimate the ATO within an MSM framework, this paper extends a stochastic pruning method based on the posterior predictive treatment assignment (PPTA) (Zigler, C. M., and M. Cefalu. 2017. “Posterior Predictive Treatment Assignment for Estimating Causal Effects with Limited Overlap.” eprint arXiv:1710.08749.) as well as a weighting analog (Li, F., K. L. Morgan, and A. M. Zaslavsky. 2018. “Balancing Covariates via Propensity Score Weighting.” Journal of the American Statistical Association 113: 390–400, https://doi.org/10.1080/01621459.2016.1260466.) to the time-varying treatment setting. Simulations demonstrate the performance of these extensions compared against IPW and stabilized weighting with regard to bias, efficiency, and coverage. Finally, an analysis using these methods is performed on Medicare beneficiaries residing across 18,480 ZIP codes in the U.S. to evaluate the effect of coal-fired power plant emissions exposure on ischemic heart disease (IHD) hospitalization, accounting for seasonal patterns that lead to change in treatment over time.","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"40 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90796155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Interrupted time series are increasingly being used to evaluate the population-wide implementation of public health interventions. However, the resulting estimates of intervention impact can be severely biased if underlying disease trends are not adequately accounted for. Control series offer a potential solution to this problem, but there is little guidance on how to use them to produce trend-adjusted estimates. To address this lack of guidance, we show how interrupted time series can be analysed when the control and intervention series share confounders, i.e. when they share a common trend. We show that the intervention effect can be estimated by subtracting the control series from the intervention series and analysing the difference using linear regression or, if a log-linear model is assumed, by including the control series as an offset in a Poisson regression with robust standard errors. The methods are illustrated with two examples.
{"title":"Analysing Interrupted Time Series with a Control","authors":"AnthonyG. Scott, V. Isham","doi":"10.1515/EM-2018-0010","DOIUrl":"https://doi.org/10.1515/EM-2018-0010","url":null,"abstract":"Abstract Interrupted time series are increasingly being used to evaluate the population-wide implementation of public health interventions. However, the resulting estimates of intervention impact can be severely biased if underlying disease trends are not adequately accounted for. Control series offer a potential solution to this problem, but there is little guidance on how to use them to produce trend-adjusted estimates. To address this lack of guidance, we show how interrupted time series can be analysed when the control and intervention series share confounders, i. e. when they share a common trend. We show that the intervention effect can be estimated by subtracting the control series from the intervention series and analysing the difference using linear regression or, if a log-linear model is assumed, by including the control series as an offset in a Poisson regression with robust standard errors. The methods are illustrated with two examples.","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2019-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90081686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Standard measures of effect, including the risk ratio, the odds ratio, and the risk difference, are associated with a number of well-described shortcomings, and no consensus exists about the conditions under which investigators should choose one effect measure over another. In this paper, we introduce a new framework for reasoning about choice of effect measure by linking two separate versions of the risk ratio to a counterfactual causal model. In our approach, effects are defined in terms of "counterfactual outcome state transition parameters", that is, the proportion of those individuals who would not have been a case by the end of follow-up if untreated, who would have responded to treatment by becoming a case; and the proportion of those individuals who would have become a case by the end of follow-up if untreated who would have responded to treatment by not becoming a case. Although counterfactual outcome state transition parameters are generally not identified from the data without strong monotonicity assumptions, we show that when they stay constant between populations, there are important implications for model specification, meta-analysis, and research generalization.
{"title":"The choice of effect measure for binary outcomes: Introducing counterfactual outcome state transition parameters.","authors":"Anders Huitfeldt, Andrew Goldstein, Sonja A Swanson","doi":"10.1515/em-2016-0014","DOIUrl":"10.1515/em-2016-0014","url":null,"abstract":"<p><p>Standard measures of effect, including the risk ratio, the odds ratio, and the risk difference, are associated with a number of well-described shortcomings, and no consensus exists about the conditions under which investigators should choose one effect measure over another. In this paper, we introduce a new framework for reasoning about choice of effect measure by linking two separate versions of the risk ratio to a counterfactual causal model. In our approach, effects are defined in terms of \"counterfactual outcome state transition parameters\", that is, the proportion of those individuals who would not have been a case by the end of follow-up if untreated, who would have responded to treatment by becoming a case; and the proportion of those individuals who would have become a case by the end of follow-up if untreated who would have responded to treatment by not becoming a case. Although counterfactual outcome state transition parameters are generally not identified from the data without strong monotonicity assumptions, we show that when they stay constant between populations, there are important implications for model specification, meta-analysis, and research generalization.</p>","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1515/em-2016-0014","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36847538","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In randomized cancer screening trials where asymptomatic individuals are assigned to undergo a regimen of screening examinations or standard care, the primary objective is typically to estimate the effect of screening assignment on cancer-specific mortality by carrying out an 'intention-to-screen' analysis. However, most of the participants in the trial will be cancer-free; only those developing a genuine cancer that is screening-detectable can potentially benefit from screening-induced early treatments. Here we consider measuring the effect of early treatments in this partially latent subpopulation in terms of reduction in case fatality. To formalize the estimands and identifying assumptions in a causal modeling framework, we first define two measures, namely proportional and absolute case-fatality reduction, using potential outcomes notation. We re-derive an earlier proposed estimator for the former, and propose a new estimator for the latter motivated by the instrumental variable approach. The methods are illustrated using data from the US National Lung Screening Trial, with specific attention to estimation in the presence of censoring and competing risks.
{"title":"Estimating Case-Fatality Reduction from Randomized Screening Trials","authors":"S. Saha, Z. Liu, O. Saarela","doi":"10.1515/EM-2018-0007","DOIUrl":"https://doi.org/10.1515/EM-2018-0007","url":null,"abstract":"Abstract In randomized cancer screening trials where asymptomatic individuals are assigned to undergo a regimen of screening examinations or standard care, the primary objective typically is to estimate the effect of screening assignment on cancer-specific mortality by carrying out an ’intention-to-screen’ analysis. However, most of the participants in the trial will be cancer-free; only those developing a genuine cancer that is screening-detectable can potentially benefit from screening induced early treatments. Here we consider measuring the effect of early treatments in this partially latent subpopulation in terms of reduction in case fatality. To formalize the estimands and identifying assumptions in a causal modeling framework, we first define two measures, namely proportional and absolute case-fatality reduction, using potential outcomes notation. We re-derive an earlier proposed estimator for the former, and propose a new estimator for the latter motivated by the instrumental variable approach. The methods are illustrated using data from the US National Lung Screening Trial, with specific attention to estimation in the presence of censoring and competing risks.","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"57 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82546758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Regression analyses for time-to-event data are commonly performed by Cox regression. Recently, an alternative method, the pseudo-observation method, has been introduced. This method offers new possibilities for analyzing data by exploring cumulative risks on both a multiplicative and an additive risk scale, in contrast to the multiplicative Cox regression model for hazard rates. Hence, the pseudo-observation method enables assessment of interaction on an additive scale. However, the pseudo-observation method imposes stricter model assumptions regarding entry and censoring but avoids the assumption of proportional hazards (except in combined analyses of several time intervals, where assumptions of constant hazard ratios, risk differences and relative risks may be imposed). Only a few descriptions of the use of the method are accessible to epidemiologists. In this paper, we present the pseudo-observation method from a user-oriented point of view, aiming to facilitate the use of this relatively new analytical tool. Using data from the Diet, Cancer and Health Cohort, we give a detailed example of the application of the pseudo-observation method to time-to-event data with delayed entry and right censoring. We discuss model control and suggest analytic strategies when assumptions are not met. The introductory model control in the data example showed that the data did not fulfill the assumptions of the pseudo-observation method. This was caused by selection of healthier participants at older baseline ages and a change in the distribution of study participants according to outcome risk during the inclusion period. Both selection effects need to be addressed in any time-to-event analysis, and we show how these effects are accounted for in the pseudo-observation analysis. The pseudo-observation method provides us with a statistical tool which makes it possible to analyse cohort data on both multiplicative and additive risk scales, including assessment of biological interaction on the risk difference scale. Thus, it might be a relevant choice of method, especially if the focus is to investigate interaction from a public health point of view.
{"title":"The Pseudo-Observation Analysis of Time-To-Event Data. Example from the Danish Diet, Cancer and Health Cohort Illustrating Assumptions, Model Validation and Interpretation of Results","authors":"L. M. Mortensen, C. P. Hansen, K. Overvad, S. Lundbye-Christensen, E. Parner","doi":"10.1515/EM-2017-0015","DOIUrl":"https://doi.org/10.1515/EM-2017-0015","url":null,"abstract":"Abstract Regression analyses for time-to-event data are commonly performed by Cox regression. Recently, an alternative method, the pseudo-observation method, has been introduced. This method offers new possibilities of analyzing data exploring cumulative risks on both a multiplicative and an additive risk scale, in contrast to the multiplicative Cox regression model for hazard rates. Hence, the pseudo-observation method enables assessment of interaction on an additive scale. However, the pseudo-observation method implies more strict model assumptions regarding entry and censoring but avoids the assumption of proportional hazards (except from combined analyses of several time intervals where assumptions of constant hazard ratios, risk differences and relative risks may be imposed). Only few descriptions of the use of the method are accessible for epidemiologists. In this paper, we present the pseudo-observation method from a user-oriented point of view aiming at facilitating the use of this relatively new analytical tool. Using data from the Diet, Cancer and Health Cohort we give a detailed example of the application of the pseudo-observation method on time-to-event data with delayed entry and right censoring. We discuss model control and suggest analytic strategies when assumptions are not met. The introductory model control in the data example showed that data did not fulfill the assumptions of the pseudo-observation method. This was caused by selection of healthier participants at older baseline ages and a change in the distribution of study participants according to outcome risk during the inclusion period. Both selection effects need to be addressed in any time-to-event analysis and we show how these effects are accounted for in the pseudo-observation analysis. The pseudo-observation method provides us with a statistical tool which makes it possible to analyse cohort data on both multiplicative and additive risk scales including assessment of biological interaction on the risk difference scale. Thus, it might be a relevant choice of method – especially if the focus is to investigate interaction from a public health point of view.","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"428 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76540809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Confounding by indication is a critical challenge in evaluating the effectiveness of surgical interventions using observational data. The threat from confounding is compounded when using medical claims data, due to the inability to measure risk severity. If there are unobserved differences in risk severity across patients, treatment effect estimates based on methods such as multivariate regression may be biased in an unknown direction. A research design based on instrumental variables offers one possibility for reducing bias from unobserved confounding compared to risk adjustment with observed confounders. This study investigates whether a physician's preference for operative care is a valid instrumental variable for studying the effect of emergency surgery. We review the plausibility of the necessary causal assumptions in an investigation of the effect of emergency general surgery (EGS) on inpatient mortality among adults, using medical claims data from Florida, Pennsylvania, and New York in 2012–2013. In a departure from the extant literature, we use the framework of stochastic monotonicity, which is more plausible in the context of a preference-based instrument. We compare estimates from an instrumental variables design to estimates from a design based on matching that assumes all confounders are observed. Estimates from matching show lower mortality rates for patients who undergo EGS than estimates based on the instrumental variables framework. Results vary substantially by condition type. We also present sensitivity analyses as well as bounds for the population-level average treatment effect. We conclude with a discussion of the interpretation of estimates from both approaches.
{"title":"An Instrumental Variables Design for the Effect of Emergency General Surgery","authors":"L. Keele, C. Sharoky, M. Sellers, C. Wirtalla, R. Kelz","doi":"10.1515/EM-2017-0012","DOIUrl":"https://doi.org/10.1515/EM-2017-0012","url":null,"abstract":"Abstract Confounding by indication is a critical challenge in evaluating the effectiveness of surgical interventions using observational data. The threat from confounding is compounded when using medical claims data due to the inability to measure risk severity. If there are unobserved differences in risk severity across patients, treatment effect estimates based on methods such a multivariate regression may be biased in an unknown direction. A research design based on instrumental variables offers one possibility for reducing bias from unobserved confounding compared to risk adjustment with observed confounders. This study investigates whether a physician’s preference for operative care is a valid instrumental variable for studying the effect of emergency surgery. We review the plausibility of the necessary causal assumptions in an investigation of the effect of emergency general surgery (EGS) on inpatient mortality among adults using medical claims data from Florida, Pennsylvania, and New York in 2012–2013. In a departure from the extant literature, we use the framework of stochastic monotonicity which is more plausible in the context of a preference-based instrument. We compare estimates from an instrumental variables design to estimates from a design based on matching that assumes all confounders are observed. Estimates from matching show lower mortality rates for patients that undergo EGS compared to estimates based in the instrumental variables framework. Results vary substantially by condition type. We also present sensitivity analyses as well as bounds for the population level average treatment effect. We conclude with a discussion of the interpretation of estimates from both approaches.","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"221 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76647450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract An exciting new direction in HIV research is centered on using molecular phylogenetics to understand the social and behavioral drivers of HIV transmission. SPOT was an intervention designed to offer HIV point-of-care testing to men who have sex with men (MSM) at a community-based site in Montreal, Canada; at the time of testing, a research questionnaire was also deployed to collect data on the socio-demographic and behavioral characteristics of participating men. The men taking part in SPOT could be viewed, from the research perspective, as having been recruited via a convenience sample. Among men who were found to be HIV positive, phylogenetic cluster size was measured using a large cohort of HIV-positive individuals in the province of Quebec. The cluster size is likely subject to under-estimation. In this paper, we use SPOT data to evaluate the association between HIV transmission cluster size and the number of sex partners for MSM, after adjusting for the SPOT sampling scheme and correcting for measurement error in cluster size by leveraging external data sources. The sampling weights for SPOT participants were calculated from another study of men who have sex with men in Montreal by fitting a weight-adjusted model, whereas the measurement error was corrected using the simulation-extrapolation conditional-on-covariates approach.
{"title":"New Challenges in HIV Research: Combining Phylogenetic Cluster Size and Epidemiological Data","authors":"Nabila Parveen, E. Moodie, J. Cox, G. Lambert, J. Otis, M. Roger, B. Brenner","doi":"10.1515/EM-2017-0017","DOIUrl":"https://doi.org/10.1515/EM-2017-0017","url":null,"abstract":"Abstract An exciting new direction in HIV research is centered on using molecular phylogenetics to understand the social and behavioral drivers of HIV transmission. SPOT was an intervention designed to offer HIV point of care testing to men who have sex with men at a community-based site in Montreal, Canada; at the time of testing, a research questionnaire was also deployed to collect data on socio-demographic and behavioral characteristics of participating men. The men taking part in SPOT could be viewed, from the research perspective, as having been recruited via a convenience sample. Among men who were found to be HIV positive, phylogenetic cluster size was measured using a large cohort of HIV-positive individuals in the province of Quebec. The cluster size is likely subject to under-estimation. In this paper, we use SPOT data to evaluate the association between HIV transmission cluster size and the number of sex partners for MSM, after adjusting for the SPOT sampling scheme and correcting for measurement error in cluster size by leveraging external data sources. The sampling weights for SPOT participants were calculated from another study of men who have sex with men in Montreal by fitting a weight-adjusted model, whereas measurement error was corrected using the simulation-extrapolation conditional on covariates approach.","PeriodicalId":37999,"journal":{"name":"Epidemiologic Methods","volume":"36 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72627266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}