Pub Date: 2026-01-01 | Epub Date: 2025-09-08 | DOI: 10.1097/EDE.0000000000001907
Wen Wei Loh
Drawing causal conclusions about nonrandomized exposures rests on assuming no uncontrolled confounding, but it is rarely justifiable to rule out all putative violations of this routinely made yet empirically untestable assumption. Alternatively, this assumption can be avoided by leveraging negative control outcomes using the control outcome calibration approach (COCA). The existing COCA estimator of the average causal effect relies on correctly specifying the mean negative control outcome model, with a closed-form solution for the main exposure effect. In this article, we propose a doubly robust COCA estimator of the average causal effect that relaxes this modeling requirement and permits effect modification through covariate-exposure interaction terms. The doubly robust COCA estimator uses correctly specified exposure and focal outcome models to protect against biases from an incorrectly specified negative control outcome model. The ability to obtain unbiased point estimates and inferences is empirically evaluated using a simulation study. We demonstrate doubly robust COCA using a publicly available dataset to evaluate the effect of volunteering on mental health. This method is practical and easy to implement and permits unbiased estimation of causal effects even amid uncontrolled confounding.
Title: Doubly Robust Control Outcome Calibration Approach Estimation of Conditional Effects with Uncontrolled Confounding. (Epidemiology, pp. 98-106)
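The paper's COCA estimator is not reproduced here, but the double-robustness property it relies on can be illustrated with a generic augmented inverse-probability-weighted (AIPW) estimator on simulated data: even with a deliberately misspecified outcome model, a correct exposure (propensity) model keeps the estimate near the truth. All quantities below are simulated for illustration and are not from the article.

```python
import math
import random

random.seed(7)
n = 100_000

def true_ps(x):
    """Correctly specified exposure (propensity) model, known by construction."""
    return 1 / (1 + math.exp(-0.8 * x))

X = [random.gauss(0, 1) for _ in range(n)]
A = [1 if random.random() < true_ps(x) else 0 for x in X]
# outcome confounded by X; the true average causal effect of A is 2.0
Y = [2.0 * a + 1.5 * x + random.gauss(0, 1) for a, x in zip(A, X)]

# deliberately misspecified outcome models: plain arm means, ignoring X
n1 = sum(A)
m1 = sum(y for y, a in zip(Y, A) if a) / n1
m0 = sum(y for y, a in zip(Y, A) if not a) / (n - n1)

naive = m1 - m0  # confounded difference in means

# AIPW: augmentation with correct propensities removes the outcome-model bias
psi = sum(
    (m1 - m0)
    + a * (y - m1) / true_ps(x)
    - (1 - a) * (y - m0) / (1 - true_ps(x))
    for a, x, y in zip(A, X, Y)
) / n

print(f"naive: {naive:.2f}  AIPW: {psi:.2f}")
```

Swapping the misspecification (wrong propensity, correct outcome model) would likewise recover the effect; only when both nuisance models are wrong does a doubly robust estimator break down.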
Pub Date: 2026-01-01 | Epub Date: 2025-09-15 | DOI: 10.1097/EDE.0000000000001917
Rachael K Ross, Matthew P Fox, Catherine R Lesko, Jacqueline E Rudolph, Lauren C Zalla, Jessie K Edwards
Measurement error is ubiquitous in the data used for epidemiologic research and can lead to meaningful information bias. Analytic approaches to address measurement error and quantitative bias analyses examining the potential impact of measurement error on study results often leverage validation data that provides information about the relationship between the true measure and the available imperfect measure, quantified by measurement error parameters such as sensitivity and specificity in the binary case. Leveraging validation data often requires transporting these measurement error parameters from the validation data to the target sample of interest (that may or may not include individuals from the validation data). In this paper, we examine the independence assumptions required to transport measurement error parameters from the validation data to the target sample, highlighting how the required assumption differs depending on the form of the measurement error parameters (i.e., whether it is the true measure conditional on the imperfect measure or vice versa). We then illustrate how diagrams can clarify the conditions under which the required assumptions hold and thus what measurement error parameters can be validly transported. This work provides practical tools for epidemiologists to address measurement error using validation data in applied research.
Title: Using Measurement Error Parameters From Validation Data. (Epidemiology, pp. 67-72)
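A few lines of Bayes'-rule arithmetic illustrate why the form of the measurement error parameters matters for transport: sensitivity and specificity (conditional on the true measure) can stay fixed across populations while the positive predictive value (conditional on the imperfect measure) shifts with outcome prevalence. The `ppv` helper is a hypothetical illustration, not code from the paper.

```python
def ppv(se, sp, prev):
    """Positive predictive value from sensitivity, specificity, and prevalence."""
    true_pos = se * prev
    false_pos = (1 - sp) * (1 - prev)
    return true_pos / (true_pos + false_pos)

se, sp = 0.90, 0.95  # a fixed, transportable error mechanism: P(Z | true Y)
for prev in (0.01, 0.10, 0.50):
    # same se/sp, very different PPV as prevalence changes
    print(prev, round(ppv(se, sp, prev), 3))
```

With sensitivity 0.90 and specificity 0.95, PPV climbs from roughly 0.15 at 1% prevalence to roughly 0.95 at 50% prevalence, which is why transporting PPV/NPV between a validation sample and a target sample requires assumptions that transporting sensitivity/specificity does not.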
Pub Date: 2026-01-01 | Epub Date: 2025-11-25 | DOI: 10.1097/EDE.0000000000001921
Juan Gago, Christopher Boyer, Marc Lipsitch
Effective antimicrobial stewardship requires unbiased assessment of the benefits and harms of different treatment strategies, considering both immediate patient outcomes and patterns of antimicrobial resistance. In principle, these benefits and harms can be expressed as causal contrasts between treatment strategies and, therefore, should be ideally suited for study under the potential outcomes framework. However, causal inference in this setting is complicated by interference between individuals (or units) due to the indirect effects of antibiotic treatment, including the spread of resistant bacteria to others. These indirect effects complicate the assessment of trade-offs as benefits are mostly due to the direct effects among those treated, while harms are more diffuse and, therefore, harder to measure. While causal frameworks and study designs that accommodate interference have previously been proposed, they have been applied predominantly to the study of vaccines, which differ from antimicrobial interventions in fundamental ways. In this article, we review these existing approaches and propose alternative adaptations tailored to the study of antimicrobial treatment strategies.
Title: How Should We Study the Indirect Effects of Antimicrobial Treatment Strategies?: A Causal Perspective. (Epidemiology, pp. 88-97)
Pub Date: 2026-01-01 | Epub Date: 2025-11-25 | DOI: 10.1097/EDE.0000000000001926
Christopher B Boyer, Kendrick Qijun Li, Xu Shi, Eric J Tchetgen Tchetgen
The test-negative design (TND) is widely used to evaluate vaccine effectiveness in real-world settings. In a TND study, individuals with similar symptoms who seek care are tested, and effectiveness is estimated by comparing vaccination histories of test-positive cases and test-negative controls. The TND is often justified on the grounds that it reduces confounding due to unmeasured health-seeking behavior, although this has not been formally described using potential outcomes. At the same time, concerns persist that conditioning on test receipt can introduce selection bias. We provide a formal justification of the TND under an assumption of odds ratio equi-confounding, where unmeasured confounders affect test-positive and test-negative individuals equivalently on the odds ratio scale. Health-seeking behavior is one plausible example. We also show that these results hold under the outcome-dependent sampling used in TNDs. We discuss the design implications of the equi-confounding assumption and provide alternative estimators for the marginal risk ratio among the vaccinated under equi-confounding, including outcome modeling and inverse probability weighting estimators as well as a semiparametric estimator that is doubly robust. When equi-confounding does not hold, we suggest a straightforward sensitivity analysis that parameterizes the magnitude of the deviation on the odds ratio scale. A simulation study evaluates the empirical performance of our proposed estimators under a wide range of scenarios. Finally, we discuss broader uses of test-negative outcomes to de-bias cohort studies in which testing is triggered by symptoms.
Title: Identification and Estimation of Vaccine Effectiveness in the Test-Negative Design Under Equi-confounding. (Epidemiology 37(1), pp. 77-87)
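As a minimal sketch of the standard TND estimator (not the alternative outcome-modeling, weighting, or doubly robust estimators proposed in this article), vaccine effectiveness is conventionally computed as one minus the odds ratio of vaccination comparing test-positive cases with test-negative controls. The counts below are hypothetical.

```python
def ve_from_tnd(vax_pos, unvax_pos, vax_neg, unvax_neg):
    """Vaccine effectiveness as 1 - OR of vaccination, cases vs. test-negative controls."""
    odds_cases = vax_pos / unvax_pos        # vaccination odds among test-positives
    odds_controls = vax_neg / unvax_neg     # vaccination odds among test-negatives
    return 1 - odds_cases / odds_controls

# hypothetical counts: 200 vaccinated / 400 unvaccinated test-positives,
# 600 vaccinated / 300 unvaccinated test-negatives
print(ve_from_tnd(200, 400, 600, 300))  # → 0.75
```

Under the paper's equi-confounding assumption, unmeasured factors such as health-seeking behavior shift both odds in the numerator and denominator equally, so they cancel from this ratio.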
Pub Date: 2026-01-01 | Epub Date: 2025-09-15 | DOI: 10.1097/EDE.0000000000001914
Katsiaryna Bykov, C Andrew Basham, Nazleen F Khan, Robert J Glynn, Shruti Belitkar, Seanna M Vine, Sungho Bea, Brian T Bateman, Krista F Huybrechts
Background: Selective serotonin reuptake inhibitors (SSRIs) are often co-prescribed with oxycodone, yet may potentiate respiratory depression. We aimed to assess the comparative effects of SSRIs on opioid overdose when added to oxycodone.
Methods: Using US commercial and public health insurance claims data (2004-2020), we conducted a cohort study in adults who initiated an SSRI while on oxycodone. We assigned patients to one of five exposures (sertraline, citalopram, escitalopram, fluoxetine, or paroxetine) and followed them for opioid overdose (hospitalization or emergency room visit) for up to 365 days while they remained on both oxycodone and the index SSRI. We used propensity score matching weights to adjust for potential confounders and weighted Cox proportional hazards models to estimate hazard ratios (HRs) with 95% confidence intervals (CIs).
Results: Among 753,263 eligible individuals (mean age 46 years [SD 16]; 527,340 females [70%]), 221,792 initiated sertraline, 173,352 citalopram, 153,968 escitalopram, 126,954 fluoxetine, and 77,197 paroxetine. Overall, 1250 opioid overdose events occurred, with incidence rates ranging from 10.8 to 15.2 per 1,000 person-years across individual SSRIs. Weighted HRs, relative to sertraline, were 1.24 (95% CI = 1.04, 1.50) for citalopram, 1.22 (95% CI = 1.01, 1.47) for escitalopram, 1.26 (95% CI = 1.04, 1.53) for fluoxetine, and 1.26 (95% CI = 1.01, 1.57) for paroxetine. No differences were observed across SSRIs other than sertraline.
Conclusions: In this study of individuals who added an SSRI to oxycodone, the incidence of opioid overdose was low. Patients who initiated sertraline experienced overdose at a slightly lower rate than patients who initiated other SSRIs.
Title: Comparative Risks of Opioid Overdose in Patients on Oxycodone Initiating Selective Serotonin Reuptake Inhibitors. (Epidemiology, pp. 132-140)
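For a multi-group comparison like this one, propensity score matching weights are commonly defined as the smallest of the group propensity scores divided by the propensity score of the treatment actually received (Li and Greene's matching-weight definition; this is a sketch under that assumption, and the scores below are hypothetical, not estimated from the study data).

```python
def matching_weight(ps_by_group, assigned):
    """Matching weight: min of the group propensity scores over the PS of the received group."""
    return min(ps_by_group.values()) / ps_by_group[assigned]

# hypothetical propensity scores for one patient across the five SSRIs
ps = {"sertraline": 0.40, "citalopram": 0.25, "escitalopram": 0.15,
      "fluoxetine": 0.12, "paroxetine": 0.08}

print(matching_weight(ps, "sertraline"))   # → 0.2
print(matching_weight(ps, "paroxetine"))   # → 1.0
```

Patients receiving their least likely treatment get weight 1; everyone else is down-weighted, which emulates 1:1 matching across all five arms simultaneously.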
Pub Date: 2026-01-01 | Epub Date: 2025-09-10 | DOI: 10.1097/EDE.0000000000001912
Carl Bonander, Marta Blangiardo, Ulf Strömberg
Bayesian disease-mapping models are widely used in small-area epidemiology to account for spatial correlation and stabilize estimates through spatial smoothing. In contrast, difference-in-differences (DID) methods, which are commonly used to estimate treatment effects from observational panel data, typically ignore spatial dependence. This paper integrates disease-mapping models into an imputation-based DID framework to address spatially structured residual variation and improve precision in small-area evaluations. The approach builds on recent advances in causal panel data methods, including two-way Mundlak estimation, to enable causal identification equivalent to fixed-effects DID while incorporating spatiotemporal random effects. We implement the method using Integrated Nested Laplace Approximation (INLA), which supports flexible spatial and temporal structures and efficient Bayesian computation. Simulations show that, when the spatiotemporal structure is correctly specified, the approach improves precision and interval coverage compared with standard DID methods.
Title: Spatial Difference-in-Differences with Bayesian Disease Mapping Models. (Epidemiology, pp. 30-38)
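The imputation-based DID idea can be sketched without the Bayesian spatial machinery: fit unit and time effects to untreated observations only, impute each treated cell's untreated potential outcome, and average the gaps. The toy panel below is noiseless and deliberately omits the spatiotemporal random effects and INLA implementation that are the paper's contribution.

```python
# simplified imputation-style DID on a 4-unit x 4-period panel, no noise
alpha = [0.0, 1.0, 2.0, 3.0]   # unit effects
gamma = [0.0, 0.5, 1.0, 1.5]   # time effects
tau = 2.0                      # true treatment effect
treated = {(i, t) for i in (2, 3) for t in (2, 3)}
Y = {(i, t): alpha[i] + gamma[t] + (tau if (i, t) in treated else 0.0)
     for i in range(4) for t in range(4)}

# estimate unit/time effects from UNTREATED cells only, by alternating means
a, g = [0.0] * 4, [0.0] * 4
for _ in range(200):
    for i in range(4):
        cells = [Y[i, t] - g[t] for t in range(4) if (i, t) not in treated]
        a[i] = sum(cells) / len(cells)
    for t in range(4):
        cells = [Y[i, t] - a[i] for i in range(4) if (i, t) not in treated]
        g[t] = sum(cells) / len(cells)

# impute untreated potential outcomes for treated cells and average the gaps
att = sum(Y[i, t] - (a[i] + g[t]) for (i, t) in treated) / len(treated)
print(round(att, 4))
```

The alternating-means loop is block coordinate descent on the two-way fixed effects least-squares problem restricted to untreated cells; in this exact, connected panel it recovers the treatment effect of 2.0.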
Pub Date: 2026-01-01 | Epub Date: 2025-08-19 | DOI: 10.1097/EDE.0000000000001910
Frances E M Albers, Margarita Moreno-Betancur, Roger L Milne, Dallas R English, Brigid M Lynch, S Ghazaleh Dashti
Title: The Authors Respond. (Epidemiology, pp. e2-e3)
Pub Date: 2026-01-01 | Epub Date: 2025-10-02 | DOI: 10.1097/EDE.0000000000001923
Xavier Basagaña, Joan Ballester
Background: A new method for time series analysis was recently formulated and implemented that uses temporally aggregated outcome data to generate unbiased estimates of the underlying association between temporally disaggregated outcome and covariate data. However, the performance of the method was only tested in the context of the delayed nonlinear relation between temperature and mortality, and only in the case of the aggregation of sets of consecutive days.
Methods: We conduct a simulation analysis to test the performance of the method using (1) mortality and hospital admissions as health outcomes, (2) temperature and nitrogen dioxide as exposures, and (3) the three aggregation schemes most widely used in open-access health data, including aggregations of sets of nonconsecutive days.
Results: With sufficient data for analysis, the method can recover the underlying association for all combinations of outcomes, exposures, and aggregation schemes. The bias and variability of the estimates increase with the degree of aggregation of the outcome data, and they decrease with increasing sample size (length of dataset, number of cases). Remarkably, estimates are also unbiased even in extreme cases with weekly outcome data in an association confounded by the day of the week, such as those of air pollution models.
Conclusions: With sufficient data, the method is able to flexibly generate unbiased estimates, generalizing previous results to other outcomes, exposures, and types and degrees of aggregation. Such results can boost the use of available temporally aggregated health data for research, translation, and policymaking, especially in low-resource and rural areas.
Title: Unbiased Estimates Using Temporally Aggregated Outcome Data in Time Series Analysis: Generalization to Different Outcomes, Exposures, and Types of Aggregation. (Epidemiology, pp. 16-20)
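The identity behind aggregated-outcome analysis is easiest to see in the linear case: if the daily mean outcome is linear in the exposure, the expected weekly total is the same linear function of the week-summed exposure, so regressing aggregated outcomes on aggregated covariates recovers the daily slope. The simulation below is a sketch of that simplest case (consecutive 7-day blocks only) and does not reproduce the paper's distributed-lag nonlinear setting.

```python
import random

random.seed(1)
b0, b1 = 5.0, 0.3  # daily model: E[y_d] = b0 + b1 * temp_d
days = 7 * 400
temp = [random.gauss(15, 8) for _ in range(days)]
y = [b0 + b1 * t + random.gauss(0, 1) for t in temp]

# aggregate outcome and covariate over consecutive 7-day blocks
W = [sum(y[7 * k:7 * k + 7]) for k in range(days // 7)]
S = [sum(temp[7 * k:7 * k + 7]) for k in range(days // 7)]

# simple OLS of weekly totals on weekly covariate sums recovers the daily slope
n = len(W)
sx, sy = sum(S) / n, sum(W) / n
slope = (sum((s - sx) * (w - sy) for s, w in zip(S, W))
         / sum((s - sx) ** 2 for s in S))
print(round(slope, 3))  # close to the daily slope b1 = 0.3
```

For count outcomes with a log link, as in the paper's temperature-mortality models, the same logic requires modeling the aggregated expectation as a sum of daily expectations rather than a simple covariate sum, which is what the referenced method implements.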
Pub Date: 2026-01-01 | Epub Date: 2025-11-25 | DOI: 10.1097/EDE.0000000000001925
Rachael K Ross, Iván Díaz, Amy J Pitts, Elizabeth A Stuart, Kara E Rudolph
Randomized clinical trials are considered the gold standard for informing treatment guidelines, but results may not generalize to real-world populations. Generalizability is hindered by distributional differences in baseline covariates and in treatment-outcome mediators. Approaches to address differences in covariates are well established, but approaches to address differences in mediators are more limited. Here, we consider the setting where trial activities that differ from usual-care settings (e.g., monetary compensation and follow-up visit frequency) affect treatment adherence. When treatment and adherence data are unavailable for the real-world target population, we cannot identify the mean outcome under a specific treatment assignment (i.e., the mean potential outcome) in the target population. Therefore, we propose a sensitivity analysis in which a parameter for the relative difference in adherence to a specific treatment between the trial and the target, possibly conditional on covariates, must be specified. We discuss options for specifying the sensitivity analysis parameter based on external knowledge, including setting a range or specifying a probability distribution from which to repeatedly draw parameter values (i.e., Monte Carlo sampling). We introduce two estimators of the mean counterfactual outcome in the target that incorporate this sensitivity parameter: a plug-in estimator and a one-step estimator that is doubly robust and supports the use of machine learning for estimating nuisance models. Finally, we apply the proposed approach to a motivating application in which we transport the risk of relapse under two different medications for the treatment of opioid use disorder from a trial to a real-world population.
Transporting Results from a Trial to an External Target Population When Trial Participation Impacts Adherence. Rachael K Ross, Iván Díaz, Amy J Pitts, Elizabeth A Stuart, Kara E Rudolph. Epidemiology, pages 39-49. DOI: 10.1097/EDE.0000000000001925. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12614279/pdf/
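The Monte Carlo sensitivity workflow the abstract describes — drawing values of the relative-adherence parameter from a user-specified distribution and recomputing a transported estimate for each draw — can be sketched as follows. All numbers, variable names, and the simple adherence-weighted plug-in form are illustrative assumptions for this sketch, not the authors' actual estimators:

```python
import numpy as np

rng = np.random.default_rng(2026)

# Hypothetical trial summaries (illustrative, not from the article):
# risk of relapse among adherent and non-adherent trial participants,
# and the adherence proportion observed in the trial.
mean_if_adherent = 0.35
mean_if_nonadherent = 0.60
trial_adherence = 0.80

def plug_in_risk(rel_adherence):
    """Plug-in mean outcome in the target population, given the
    sensitivity parameter: target adherence relative to trial adherence."""
    target_adherence = np.clip(trial_adherence * rel_adherence, 0.0, 1.0)
    return (target_adherence * mean_if_adherent
            + (1.0 - target_adherence) * mean_if_nonadherent)

# Monte Carlo over the sensitivity parameter: instead of fixing a single
# value, draw repeatedly from a plausible distribution and report the
# induced range of target-population risks.
draws = rng.uniform(0.6, 1.0, size=10_000)
risks = plug_in_risk(draws)
lo, hi = np.percentile(risks, [2.5, 97.5])
print(f"plausible target risk: {lo:.3f} to {hi:.3f}")
```

The width of the reported interval is driven directly by how dispersed the analyst's chosen parameter distribution is, which is the intended behavior of a sensitivity analysis: stronger external knowledge about adherence narrows the distribution and therefore the transported estimate.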
Pub Date: 2026-01-01 · Epub Date: 2025-11-25 · DOI: 10.1097/EDE.0000000000001927
Natalia E Poveda, Michael R Elliott, Neil K Mehta, Solveig A Cunningham
Background: Obesity dynamics early in life are likely important for long-term health, but they have been described only piecemeal because nationally representative longitudinal datasets are few and have limited follow-up duration.
Methods: We created a synthetic cohort by combining two US nationally representative datasets, the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (ECLS98; N = 21,120; ages 4-16 years; birth cohort 1991-1994), and the National Longitudinal Survey of Youth 1997 (NLSY97; N = 8,984; ages 12-41 years; birth cohort 1980-1984). We used the older-age cohort to impute future weight trajectories of children in the younger-age cohort by matching based on subject-level body mass index trajectories estimated via linear mixed models. We projected trajectories to age 41 years in 2035 for children observed up to a mean age of 13.5 years in 2007.
Results: The synthetic cohort (N = 10,102) showed that obesity prevalence increases from 10.0% at age 4 years to 56.3% at age 41 years. Obesity incidence peaks at ages 8 years (4.00/100 person-years [PY] [3.29-4.73]), 26 years (4.48/100 PY [3.04-5.92]), and 38 years (3.60/100 PY [0.00-8.91]).
Conclusions: This synthetic cohort approach can be used to characterize dynamics of obesity and other conditions by maximizing data from shorter "life segments." Findings suggest that today's young adults will continue to become heavier as they age. In addition to prevention before kindergarten entry, other periods for obesity prevention could be middle childhood, mid-twenties, and late thirties.
Obesity from Childhood to Mid-adulthood in the United States: A Synthetic Cohort Approach to Measuring Health Trajectories. Natalia E Poveda, Michael R Elliott, Neil K Mehta, Solveig A Cunningham. Epidemiology, volume 37(1), pages 121-131. DOI: 10.1097/EDE.0000000000001927. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12643562/pdf/
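The core splicing step of the synthetic cohort — fit a subject-level BMI trajectory for each child, then match each younger-cohort child to the older-cohort subject with the nearest trajectory and borrow that subject's later follow-up — can be illustrated with a toy sketch. Ordinary least squares stands in here for the linear mixed models the authors use, and all data are fabricated for illustration:

```python
import numpy as np

def fit_trajectory(ages, bmis):
    """Subject-level linear BMI trajectory, summarized as
    (intercept, slope). A stand-in for the mixed-model estimates."""
    slope, intercept = np.polyfit(ages, bmis, deg=1)
    return intercept, slope

# Older cohort (NLSY97-like): trajectories observed to later ages.
old_params = np.array([fit_trajectory(a, b) for a, b in [
    (np.arange(12, 30), 15.0 + 0.30 * np.arange(12, 30)),
    (np.arange(12, 30), 18.0 + 0.55 * np.arange(12, 30)),
]])

# Younger cohort (ECLS98-like) subject, observed only to age 13.
young_ages = np.arange(4, 14)
young_bmis = 17.5 + 0.52 * young_ages
young = np.array(fit_trajectory(young_ages, young_bmis))

# Match on the estimated (intercept, slope) pair; the matched older
# subject's observed later ages then impute the child's future BMI.
dist = np.linalg.norm(old_params - young, axis=1)
match = int(np.argmin(dist))
print("matched older subject:", match)
```

In the real analysis the match would use richer trajectory summaries and many candidate donors, but the principle is the same: the donor whose estimated trajectory parameters lie closest supplies the "future" segment of the synthetic follow-up.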