Assessing quality and agreement of structured data in automatic versus manual abstraction of the electronic health record for a clinical epidemiology study
J. G. Brazeal, A. Alekseyenko, Hong Li, M. Fugal, K. Kirchoff, Courtney H. Marsh, D. Lewin, Jennifer D. Wu, J. Obeid, Kristin Wallace
Pub Date: 2021-09-01 | DOI: 10.1177/26320843211061287 | pp. 168–178
Objective: We evaluate agreement between an electronic health record (EHR) sample abstracted by automated characterization and a reference standard abstracted by manual review. Study Design and Setting: We obtain data for an epidemiology cohort study using standard manual abstraction of the EHR and automated identification of the same patients using a structured algorithm to query the EHR. Summary measures of agreement (e.g., Cohen's kappa) are reported for 12 variables commonly used in epidemiological studies. Results: The best agreement between abstraction methods is observed for demographic characteristics such as age, sex, and race, and for positive history of disease. Poor agreement is found for missing data and negative history, suggesting a potential impact on researchers using automated EHR characterization. EHR data quality depends on providers, who may be influenced by both institutional and federal government documentation guidelines. Conclusion: Discrepancies in automated EHR abstraction may decrease power and increase bias; therefore, caution is warranted when selecting variables from EHRs for epidemiological study using an automated characterization approach. Validation of automated methods must also continue to advance in sophistication, alongside other technologies such as machine learning and natural language processing for extracting non-structured data from the EHR, for application to EHR characterization in clinical epidemiology.

Editorial
Graham Hallett
Pub Date: 2021-07-01 | DOI: 10.1177/26320843211021500 | p. 90

BMI self-selection: Exploring alternatives to self-reported BMI
F. Shiely, S. Millar
Pub Date: 2021-04-18 | DOI: 10.1177/26320843211010061 | pp. 112–122
Background: Accurately measuring BMI in large epidemiological studies is problematic because objective measurements are expensive, so subjective methods must usually suffice. The purpose of this study is to explore a new subjective method of BMI measurement: BMI self-selection. Methods: A cross-sectional analysis of the Mitchelstown Cohort Rescreen study, a random sample of 1,354 men and women aged 51–77 years recruited from a single primary care centre. BMI self-selection was measured by asking patients to select their BMI category: underweight, normal weight, overweight, or obese. Weight and height were also measured objectively. Results: 79% of participants were overweight or obese (86% of males, 69% of females; P < 0.001), and 59% of these underestimated their BMI. The sensitivity of correct BMI self-selection for the normal weight, overweight, and obese categories was 77%, 61%, and 11%, respectively. In multivariable analysis, gender, higher education level, being told by a health professional to lose weight, and being on a diet were significantly associated with correct BMI self-selection. There was a linear trend between increasing BMI and correct selection of BMI category; participants in the highest BMI quartile had approximately eight-fold higher odds of correctly selecting their BMI compared with participants in the lower overweight/obese quartiles (OR = 7.72, 95% CI: 4.59–12.98). Conclusions: BMI self-selection may be a useful approach to self-reporting BMI. Clinicians need to be aware of disparities in BMI self-selection between higher and lower BMI levels among overweight/obese patients, and should encourage preventative action in those at lower levels to avoid weight gain and thus reduce their all-cause mortality risk.

Integration of various scales for measurement of insomnia
S. Chakrabartty
Pub Date: 2021-04-18 | DOI: 10.1177/26320843211010044 | pp. 102–111
Background: Scales for evaluating insomnia differ in number of items and response format, produce different score distributions and score ranges, and may not permit meaningful comparisons. Objectives: To transform ordinal item scores of three insomnia scales into continuous, equidistant, monotonic, normally distributed scores, avoiding the limitations of summative scoring of Likert scales. Methods: Equidistant item scores were obtained as weighted sums, with data-driven weights assigned to the different levels of different items based on the cell frequencies of the item-by-level matrix, followed by normalization and conversion to the interval [1, 10]. Equivalent test scores (sums of transformed item scores) for a pair of scales were found using normal probability curves. An empirical illustration is given. Results: Transformed test scores are continuous and monotonic, follow a normal distribution, and have no outliers or tied scores. Such test scores facilitate ranking, better classification, and meaningful comparison of scales of different lengths and formats, as well as finding equivalent score combinations of two scales. For a given transformed test score on one scale, a simple alternative method avoiding integration is proposed to find the equivalent score on another scale. Equivalent scores help to relate the various cut-off scores of different scales and promote uniformity of interpretation. Integration of the insomnia scales is achieved by finding a one-to-one correspondence among their equivalent scores, with correlations over 0.99. Conclusion: The resulting test scores allow analysis within a parametric framework. Given the theoretical advantages, including meaningfulness of operations and better comparison, this method of transforming scores of Likert items and tests is recommended; future studies are suggested.

Beta-binomial models for meta-analysis with binary outcomes: Variations, extensions, and additional insights from econometrics
T. Mathes, O. Kuss
Pub Date: 2021-03-01 | DOI: 10.1177/2632084321996225 | pp. 82–89
Background: Meta-analysis of systematically reviewed intervention studies is a cornerstone of evidence-based medicine. In the following, we introduce the common-beta beta-binomial (BB) model for meta-analysis with binary outcomes and elucidate its equivalence to panel count data models. Methods: We present a variation of the standard "common-rho" BB model (BBST), namely a "common-beta" BB model. This model has an interesting connection to fixed-effect negative binomial regression models (FE-NegBin) for panel count data. Using this equivalence, it is possible to estimate an extension of the FE-NegBin with an additional multiplicative overdispersion term (RE-NegBin) while preserving a closed-form likelihood. An advantage of the connection to econometric models is that the models can be implemented easily, because "standard" statistical software for panel count data can be used. We illustrate the methods with two real-world example datasets. Furthermore, we report the results of a small-scale simulation study comparing the new models with the BBST, with input parameters informed by actually performed meta-analyses. Results: In both example datasets, the NegBin models, in particular the RE-NegBin, showed a smaller effect and had narrower 95% confidence intervals. In our simulation study, median bias was negligible for all methods, but the upper quartile of bias suggested that the BBST is most affected by positive bias. Regarding coverage probability, the BBST and the RE-NegBin model outperformed the FE-NegBin model. Conclusion: For meta-analyses with binary outcomes, the considered common-beta BB models may be valuable extensions to the family of BB models.

Editorial
Pub Date: 2021-03-01 | DOI: 10.1177/2632084321996105 | p. 50

Joint displays for qualitative-quantitative synthesis in mixed methods reviews
Ahtisham Younas, S. Inayat, Amara Sundus
Pub Date: 2021-01-06 | DOI: 10.1177/2632084320984374 | pp. 91–101
Mixed methods reviews offer an excellent approach to synthesizing qualitative and quantitative evidence to generate more robust implications for practice, research, and policymaking. There is limited guidance, and there are few practical examples, concerning methods for adequately synthesizing qualitative and quantitative research findings in mixed methods reviews. This paper illustrates the application and use of joint displays for qualitative and quantitative synthesis in mixed methods reviews. We used joint displays to synthesize and integrate qualitative and quantitative research findings in a segregated mixed methods review of male nursing students' challenges and experiences. In total, 36 qualitative, six quantitative, and one mixed methods study were appraised and synthesized in the review. First, the qualitative and quantitative findings were analyzed and synthesized separately. The synthesized findings were then integrated through tabular and visual joint displays at two levels of integration. At the first level, a statistics-theme display was developed to compare the synthesized qualitative and quantitative findings and the number of studies from which each finding was generated. At the second level, synthesized qualitative and quantitative findings that supported each other were integrated to identify confirmed, discordant, and expanded inferences using a generalizing-theme display. The use of the two displays allowed a robust and comprehensive synthesis of studies. Joint displays can serve as an excellent method for rigorous and transparent synthesis of qualitative and quantitative findings and for generating adequate and relevant inferences in mixed methods reviews.

Editorial
D. Petkov
Pub Date: 2021-01-01 | DOI: 10.1177/2632084320979054 | p. 1

Assessing the impact of case-mix heterogeneity in individual participant data meta-analysis: Novel use of I2 statistic and prediction interval
Tat-Thang Vo, R. Porcher, S. Vansteelandt
Pub Date: 2021-01-01 | DOI: 10.1177/2632084320957207 | pp. 12–30
Case-mix differences between trials are an important factor contributing to the statistical heterogeneity observed in a meta-analysis. In this paper, we propose two methods to assess whether important heterogeneity would remain if the different trials in the meta-analysis were conducted in one common population defined by a given case mix. To achieve this, we first standardize the results of the different trials over the case mix of a target population. We then quantify the amount of heterogeneity arising from case-mix and non-case-mix reasons using the corresponding I2 statistics and prediction intervals. These new approaches enable a better understanding of the overall heterogeneity between trial results and can be used to support standard heterogeneity assessments in individual participant data meta-analysis practice.

Causal survival analysis: A guide to estimating intention-to-treat and per-protocol effects from randomized clinical trials with non-adherence
E. Murray, E. Caniglia, L. Petito
Pub Date: 2021-01-01 | DOI: 10.1177/2632084320961043 | pp. 39–49
When reporting results from randomized experiments, researchers often choose to present a per-protocol effect in addition to an intention-to-treat effect. However, these per-protocol effects are often defined retrospectively, for example by comparing outcomes among individuals who adhered to their assigned treatment strategy throughout the study. This retrospective definition of a per-protocol effect is often confounded and cannot be interpreted causally because it encounters treatment-confounder feedback loops, where past confounders affect future treatment and current treatment affects future confounders. Per-protocol effects estimated in this way are highly susceptible to the placebo paradox, also called the "healthy adherers" bias, where individuals who adhere to placebo appear to have better survival than those who do not. This result is generally not due to a benefit of placebo but is most often the result of uncontrolled confounding. Here, we provide an overview of causal inference for survival outcomes with time-varying exposures under static interventions using inverse probability weighting. The basic concepts described here also apply to other types of exposure strategies, although these may require additional design or analytic considerations. We provide a workshop guide with a solutions manual, fully reproducible R, SAS, and Stata code, and a simulated dataset on a GitHub repository for the reader to explore.