Abstract Configurational Comparative Methods (CCMs) aim to learn causal structures from datasets by exploiting Boolean sufficiency and necessity relationships. One important challenge for these methods is that such Boolean relationships are often not satisfied in real-life datasets, as these datasets usually contain noise. Hence, CCMs infer models that only approximately fit the data, introducing a risk of inferring incorrect or incomplete models, especially when data are also fragmented (have limited empirical diversity). To minimize this risk, evaluation measures for sufficiency and necessity should be sensitive to all relevant evidence. This article points out that the standard evaluation measures in CCMs, consistency and coverage, neglect certain evidence for these Boolean relationships. Correspondingly, two new measures, contrapositive consistency and contrapositive coverage, which are equivalent to the binary classification measures specificity and negative predictive value, respectively, are introduced to the CCM context as additions to consistency and coverage. A simulation experiment demonstrates that the introduced contrapositive measures indeed help to identify correct CCM models.
{"title":"Evaluating Boolean relationships in Configurational Comparative Methods","authors":"Luna De Souter","doi":"10.1515/jci-2023-0014","DOIUrl":"https://doi.org/10.1515/jci-2023-0014","url":null,"abstract":"Abstract Configurational Comparative Methods (CCMs) aim to learn causal structures from datasets by exploiting Boolean sufficiency and necessity relationships. One important challenge for these methods is that such Boolean relationships are often not satisfied in real-life datasets, as these datasets usually contain noise. Hence, CCMs infer models that only approximately fit the data, introducing a risk of inferring incorrect or incomplete models, especially when data are also fragmented (have limited empirical diversity). To minimize this risk, evaluation measures for sufficiency and necessity should be sensitive to all relevant evidence. This article points out that the standard evaluation measures in CCMs, consistency and coverage, neglect certain evidence for these Boolean relationships. Correspondingly, two new measures, contrapositive consistency and contrapositive coverage, which are equivalent to the binary classification measures specificity and negative predictive value, respectively, are introduced to the CCM context as additions to consistency and coverage. 
A simulation experiment demonstrates that the introduced contrapositive measures indeed help to identify correct CCM models.","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"8 12","pages":""},"PeriodicalIF":1.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139457038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
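As a minimal illustration of the four measures named in the abstract, the sketch below computes them from the cells of a 2×2 table, using the stated equivalences to binary classification measures (consistency = positive predictive value, coverage = sensitivity, contrapositive consistency = specificity, contrapositive coverage = negative predictive value). The cell counts are hypothetical, not from the article.

```python
def ccm_measures(n11, n10, n01, n00):
    """Evaluation measures for a Boolean sufficiency claim X -> Y.

    n11: cases with X=1, Y=1;  n10: X=1, Y=0;
    n01: X=0, Y=1;             n00: X=0, Y=0.
    """
    consistency = n11 / (n11 + n10)          # P(Y|X): positive predictive value
    coverage = n11 / (n11 + n01)             # P(X|Y): sensitivity / recall
    contra_consistency = n00 / (n00 + n10)   # P(not-X|not-Y): specificity
    contra_coverage = n00 / (n00 + n01)      # P(not-Y|not-X): negative predictive value
    return consistency, coverage, contra_consistency, contra_coverage

# A noisy, fragmented dataset where X -> Y looks strong by consistency alone,
# yet the contrapositive evidence (not-Y -> not-X) is weak:
cons, cov, c_cons, c_cov = ccm_measures(n11=40, n10=5, n01=50, n00=5)
print(round(cons, 2), round(cov, 2), round(c_cons, 2), round(c_cov, 2))
# -> 0.89 0.44 0.5 0.09
```

The point of the example: high consistency alone (0.89) would suggest accepting X as sufficient for Y, while the low contrapositive measures reveal that the absent-outcome cases contradict the claim.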
Pub Date: 2024-01-01. Epub Date: 2024-01-10. DOI: 10.1515/jci-2023-0031.
Amy J Pitts, Charlotte R Fowler
Many software packages have been developed to assist researchers in drawing directed acyclic graphs (DAGs), each with unique functionality and usability. We examine five of the most common software packages for generating DAGs: TikZ, DAGitty, ggdag, dagR, and igraph. For each package, we provide a general description of its background, analysis and visualization capabilities, and user-friendliness. Additionally, to compare the packages, we produce two DAGs in each one: the first features a simple confounding structure, while the second includes a more complex structure with three confounders and a mediator. We provide recommendations for when to use each software package depending on the user's needs.
{"title":"Comparison of open-source software for producing directed acyclic graphs.","authors":"Amy J Pitts, Charlotte R Fowler","doi":"10.1515/jci-2023-0031","DOIUrl":"10.1515/jci-2023-0031","url":null,"abstract":"<p><p>Many software packages have been developed to assist researchers in drawing directed acyclic graphs (DAGs), each with unique functionality and usability. We examine five of the most common software to generate DAGs: Ti<i>k</i>Z, DAGitty, ggdag, dagR, and igraph. For each package, we provide a general description of its background, analysis and visualization capabilities, and user-friendliness. Additionally in order to compare packages, we produce two DAGs in each software, the first featuring a simple confounding structure, while the second includes a more complex structure with three confounders and a mediator. We provide recommendations for when to use each software depending on the user's needs.</p>","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"12 1","pages":""},"PeriodicalIF":1.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10869111/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139742392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-17. DOI: 10.30998/inference.v5i3.12353.
Hilda Zubaidah, Gustaman Saragih
Textbooks play an important role in teaching and learning activities in a language program. Because of the variety of textbooks available, textbook analysis is important for finding out how the components of a textbook are presented. This study aimed to investigate to what extent the English textbook entitled “Bahasa Inggris: When English Rings a Bell” for eighth-grade students meets the criteria of BSNP (linguistic features and presentation of materials). The linguistic features consist of language appropriateness, while the presentation of materials consists of content appropriateness, presentation appropriateness, and graphic appropriateness. This study used a descriptive qualitative approach. The instrument used to collect the data was a document study in the form of a checklist adapted from the BSNP (2011) framework. The results showed that the textbook “When English Rings a Bell” for eighth grade is suitable for use in the teaching and learning process. The textbook achieved fulfilment scores for language appropriateness (100%), content appropriateness (81.25%), presentation appropriateness (88.89%), and graphics appropriateness (97.64%). The book is categorized as a “good” textbook, achieving an overall score of 95.07%. Thus, it can be concluded that the textbook is suitable for supporting the teaching and learning process in the classroom, with the help of other sources and teacher improvisation.
{"title":"LINGUISTIC FEATURES AND PRESENTATION OF MATERIALS ON ENGLISH TEXTBOOK “WHEN ENGLISH RINGS A BELL” BASED ON BSNP","authors":"Hilda Zubaidah, Gustaman Saragih","doi":"10.30998/inference.v5i3.12353","DOIUrl":"https://doi.org/10.30998/inference.v5i3.12353","url":null,"abstract":"The textbooks play an important role in teaching and learning activity in language program. Because of the various textbooks provided, textbook analysis is seen as an important thing to be conducted in order to find out how the components of the textbook are served. This study was aimed to investigate to what extent the English textbook entitled “Bahasa Inggris: When English Rings a Bell” for eighth grade students meet the criteria of BSNP (linguistic features and presentation of materias). The linguistic features consist of language appropriateness while the presentation of materials consist of content appropriates, presentation appropriateness, and graphic appropriateness. This study was descriptive qualitative approach. The instrument used to collect the data is document study used in the form of checklist. A checklist was made adopted from BSNP (2011) framework. The results of this study showed that textbook entitled “When English Rings a Bell” for Eight Grade is suitable to be used in teaching learning process. The textbook achieved the fulfilment score of language appropriateness (100%), content appropriateness (81,25%), presentation appropriateness (88,89%), and graphics appropriateness (97,64%). This book is categorized “good” textbook by achieving the score of 95,07%. 
Thus, it can be concluded that textbook is suitable to be used in order to help the teaching learning process in the classroom with the help of other sources and teacher improvisation.","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135525979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-01. DOI: 10.48550/arXiv.2303.05396.
J. Peña
Abstract We present two methods for bounding the probabilities of benefit (a.k.a. the probability of necessity and sufficiency, i.e., the desired effect occurs if and only if exposed) and harm (i.e., the undesired effect occurs if and only if exposed) under unmeasured confounding. The first method computes the upper or lower bound of either probability as a function of the observed data distribution and two intuitive sensitivity parameters, which can then be presented to the analyst as a 2-D plot to assist in decision-making. The second method assumes the existence of a measured nondifferential proxy for the unmeasured confounder. Using this proxy, tighter bounds than the existing ones can be derived from just the observed data distribution.
{"title":"Bounding the probabilities of benefit and harm through sensitivity parameters and proxies","authors":"J. Peña","doi":"10.48550/arXiv.2303.05396","DOIUrl":"https://doi.org/10.48550/arXiv.2303.05396","url":null,"abstract":"Abstract We present two methods for bounding the probabilities of benefit (a.k.a. the probability of necessity and sufficiency, i.e., the desired effect occurs if and only if exposed) and harm (i.e., the undesired effect occurs if and only if exposed) under unmeasured confounding. The first method computes the upper or lower bound of either probability as a function of the observed data distribution and two intuitive sensitivity parameters, which can then be presented to the analyst as a 2-D plot to assist in decision-making. The second method assumes the existence of a measured nondifferential proxy for the unmeasured confounder. Using this proxy, tighter bounds than the existing ones can be derived from just the observed data distribution.","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"4 1","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78561775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Understanding the mechanisms of action of interventions is a major general goal of scientific inquiry. The collection of statistical methods that use data to achieve this goal is referred to as mediation analysis. Natural direct and indirect effects provide a definition of mediation that matches scientific intuition, but they are not identified in the presence of time-varying confounding. Interventional effects have been proposed as a solution to this problem, but existing estimation methods are limited to assuming simple (e.g., linear) and unrealistic relations between the mediators, treatments, and confounders. We present an identification result for interventional effects in a general longitudinal data structure that allows flexibility in the specification of treatment-outcome, treatment-mediator, and mediator-outcome relationships. Identification is achieved under the standard no-unmeasured-confounders and positivity assumptions. In this article, we study semi-parametric efficiency theory for the functional identifying the mediation parameter, including the non-parametric efficiency bound, which we use to propose non-parametrically efficient estimators. Implementation of our estimators relies only on the availability of regression algorithms, and we develop the estimators in a general framework that allows the analyst to use arbitrary regression machinery. The estimators are doubly robust, √n-consistent, and asymptotically Gaussian even under slow convergence rates for the regression algorithms used. This allows the use of flexible machine learning for regression while permitting uncertainty quantification through confidence intervals and p-values. A free and open-source R package implementing the methods is available on GitHub. We apply the proposed estimator to a motivating example from a trial of two medications for opioid-use disorder, estimating the extent to which differences between the two treatments in the risk of opioid use are mediated by craving symptoms.
{"title":"Efficient and flexible mediation analysis with time-varying mediators, treatments, and confounders","authors":"Iván Díaz, Nicholas T Williams, K. Rudolph","doi":"10.1515/jci-2022-0077","DOIUrl":"https://doi.org/10.1515/jci-2022-0077","url":null,"abstract":"Abstract Understanding the mechanisms of action of interventions is a major general goal of scientific inquiry. The collection of statistical methods that use data to achieve this goal is referred to as mediation analysis. Natural direct and indirect effects provide a definition of mediation that matches scientific intuition, but they are not identified in the presence of time-varying confounding. Interventional effects have been proposed as a solution to this problem, but existing estimation methods are limited to assuming simple (e.g., linear) and unrealistic relations between the mediators, treatments, and confounders. We present an identification result for interventional effects in a general longitudinal data structure that allows flexibility in the specification of treatment-outcome, treatment-mediator, and mediator-outcome relationships. Identification is achieved under the standard no-unmeasured-confounders and positivity assumptions. In this article, we study semi-parametric efficiency theory for the functional identifying the mediation parameter, including the non-parametric efficiency bound, and was used to propose non-parametrically efficient estimators. Implementation of our estimators only relies on the availability of regression algorithms, and the estimators in a general framework that allows the analyst to use arbitrary regression machinery were developed. The estimators are doubly robust, n sqrt{n} -consistent, asymptotically Gaussian, under slow convergence rates for the regression algorithms used. This allows the use of flexible machine learning for regression while permitting uncertainty quantification through confidence intervals and p p -values. 
A free and open-source R package implementing the methods is available on GitHub. The proposed estimator to a motivating example from a trial of two medications for opioid-use disorder was applied, where we estimate the extent to which differences between the two treatments on risk of opioid use are mediated by craving symptoms.","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"10 2 1","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88192318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gabriel Danelian, Yohann Foucher, Maxime Léger, Florent Le Borgne, Arthur Chatton
Abstract Background The positivity assumption is crucial when drawing causal inferences from observational studies, but it is often overlooked in practice. A violation of positivity occurs when the sample contains a subgroup of individuals with an extreme relative frequency of experiencing one of the levels of exposure. To correctly estimate the causal effect, we must identify such individuals. For this purpose, we suggest a regression tree-based algorithm. Development Based on a succession of regression trees, the algorithm searches for combinations of covariate levels that result in subgroups of individuals with a low (un)exposed relative frequency. Application We applied the algorithm by reanalyzing four recently published medical studies. We identified the two positivity violations reported by the authors. In addition, we identified ten subgroups with a suspicion of violation. Conclusions The PoRT algorithm helps to detect in-sample positivity violations in causal studies. We implemented the algorithm in the R package RISCA to facilitate its use.
{"title":"Identification of in-sample positivity violations using regression trees: The PoRT algorithm","authors":"Gabriel Danelian, Yohann Foucher, Maxime Léger, Florent Le Borgne, Arthur Chatton","doi":"10.1515/jci-2022-0032","DOIUrl":"https://doi.org/10.1515/jci-2022-0032","url":null,"abstract":"Abstract Background The positivity assumption is crucial when drawing causal inferences from observational studies, but it is often overlooked in practice. A violation of positivity occurs when the sample contains a subgroup of individuals with an extreme relative frequency of experiencing one of the levels of exposure. To correctly estimate the causal effect, we must identify such individuals. For this purpose, we suggest a regression tree-based algorithm. Development Based on a succession of regression trees, the algorithm searches for combinations of covariate levels that result in subgroups of individuals with a low (un)exposed relative frequency. Application We applied the algorithm by reanalyzing four recently published medical studies. We identified the two violations of the positivity reported by the authors. In addition, we identified ten subgroups with a suspicion of violation. Conclusions The PoRT algorithm helps to detect in-sample positivity violations in causal studies. We implemented the algorithm in the R package RISCA to facilitate its use.","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"259 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135501800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Personalized decision making targets the behavior of a specific individual, while population-based decision making concerns a subpopulation resembling that individual. This article clarifies the distinction between the two and explains why the former leads to more informed decisions. We further show that by combining experimental and observational studies, we can obtain valuable information about individual behavior and, consequently, improve decisions over those obtained from experimental studies alone. In particular, we show examples where such a combination discriminates between individuals who can benefit from a treatment and those who cannot – information that would not be revealed by experimental studies alone. We outline areas where this method could be of benefit to both policy makers and individuals involved.
{"title":"Personalized decision making – A conceptual introduction","authors":"Scott Mueller, Judea Pearl","doi":"10.1515/jci-2022-0050","DOIUrl":"https://doi.org/10.1515/jci-2022-0050","url":null,"abstract":"Abstract Personalized decision making targets the behavior of a specific individual, while population-based decision making concerns a subpopulation resembling that individual. This article clarifies the distinction between the two and explains why the former leads to more informed decisions. We further show that by combining experimental and observational studies, we can obtain valuable information about individual behavior and, consequently, improve decisions over those obtained from experimental studies alone. In particular, we show examples where such a combination discriminates between individuals who can benefit from a treatment and those who cannot – information that would not be revealed by experimental studies alone. We outline areas where this method could be of benefit to both policy makers and individuals involved.","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136297925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract When a binary treatment D is possibly endogenous, a binary instrument δ is often used to identify the “effect on compliers.” If covariates X affect both D and an outcome Y, X should be controlled to identify the “X-conditional complier effect.” However, its nonparametric estimation leads to the well-known dimension problem. To avoid this problem while capturing effect heterogeneity, we identify the complier effect heterogeneous with respect to only the one-dimensional “instrument score” E(δ | X) for non-randomized δ. This effect heterogeneity is minimal, in the sense that any other “balancing score” is finer than the instrument score. We establish two critical “reduced-form models” that are linear in D or δ, even though no parametric assumption is imposed. The models hold for any form of Y (continuous, binary, count, …). The desired effect is then estimated using either single-index model estimators or an instrumental variable estimator after applying a power approximation to the effect. Simulation and empirical studies are performed to illustrate the proposed approaches.
{"title":"Minimally capturing heterogeneous complier effect of endogenous treatment for any outcome variable","authors":"Goeun Lee, Jin‐young Choi, Myoung‐jae Lee","doi":"10.1515/jci-2022-0036","DOIUrl":"https://doi.org/10.1515/jci-2022-0036","url":null,"abstract":"Abstract When a binary treatment D D is possibly endogenous, a binary instrument δ delta is often used to identify the “effect on compliers.” If covariates X X affect both D D and an outcome Y Y , X X should be controlled to identify the “ X X -conditional complier effect.” However, its nonparametric estimation leads to the well-known dimension problem. To avoid this problem while capturing the effect heterogeneity, we identify the complier effect heterogeneous with respect to only the one-dimensional “instrument score” E ( δ ∣ X ) Eleft(delta | X) for non-randomized δ delta . This effect heterogeneity is minimal, in the sense that any other “balancing score” is finer than the instrument score. We establish two critical “reduced-form models” that are linear in D D or δ delta , even though no parametric assumption is imposed. The models hold for any form of Y Y (continuous, binary, count, …). The desired effect is then estimated using either single index model estimators or an instrumental variable estimator after applying a power approximation to the effect. 
Simulation and empirical studies are performed to illustrate the proposed approaches.","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"8 1","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83418174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
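As background for the "effect on compliers", the sketch below computes the standard unconditional Wald/LATE estimate, which the article extends to heterogeneity along the instrument score E(δ | X). This is not the article's estimator, and the data are hypothetical.

```python
def wald_late(y, d, z):
    """Wald estimate of the complier average causal effect.

    y: outcomes, d: binary treatment taken, z: binary instrument; equal lengths.
    LATE = (E[Y|Z=1] - E[Y|Z=0]) / (E[D|Z=1] - E[D|Z=0]).
    """
    mean = lambda v: sum(v) / len(v)
    y1 = mean([yi for yi, zi in zip(y, z) if zi == 1])
    y0 = mean([yi for yi, zi in zip(y, z) if zi == 0])
    d1 = mean([di for di, zi in zip(d, z) if zi == 1])
    d0 = mean([di for di, zi in zip(d, z) if zi == 0])
    return (y1 - y0) / (d1 - d0)

# Hypothetical data: the instrument raises treatment uptake from 20% to 80%,
# and the mean outcome rises by 3, so the complier effect is 3 / 0.6 = 5.
z = [0] * 10 + [1] * 10
d = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0] + [1] * 8 + [0, 0]
y = [10.0] * 10 + [13.0] * 10
print(round(wald_late(y, d, z), 6))  # -> 5.0
```

Conditioning this ratio on all of X nonparametrically runs into the dimension problem the abstract describes; collapsing X to the one-dimensional instrument score is the article's way out.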
Abstract We propose semiparametric and nonparametric methods to estimate conditional interventional indirect effects in the setting of two discrete mediators whose causal ordering is unknown. Average interventional indirect effects have been shown to decompose an average treatment effect into a direct effect and interventional indirect effects that quantify effects of hypothetical interventions on mediator distributions. Yet these effects may be heterogeneous across the covariate distribution. We consider the problem of estimating these effects at particular points. We propose an influence function-based estimator of the projection of the conditional effects onto a working model, and show under some conditions that we can achieve root-n consistent and asymptotically normal estimates. Second, we propose a fully nonparametric approach to estimation and show the conditions where this approach can achieve oracle rates of convergence. Finally, we propose a sensitivity analysis that identifies bounds on both the average and conditional effects in the presence of mediator-outcome confounding. We show that the same methods easily extend to allow estimation of these bounds. We conclude by examining heterogeneous effects with respect to the effect of COVID-19 vaccinations on depression during February 2021.
{"title":"Heterogeneous interventional effects with multiple mediators: Semiparametric and nonparametric approaches","authors":"Max Rubinstein, Zach Branson, Edward Kennedy","doi":"10.1515/jci-2022-0070","DOIUrl":"https://doi.org/10.1515/jci-2022-0070","url":null,"abstract":"Abstract We propose semiparametric and nonparametric methods to estimate conditional interventional indirect effects in the setting of two discrete mediators whose causal ordering is unknown. Average interventional indirect effects have been shown to decompose an average treatment effect into a direct effect and interventional indirect effects that quantify effects of hypothetical interventions on mediator distributions. Yet these effects may be heterogeneous across the covariate distribution. We consider the problem of estimating these effects at particular points. We propose an influence function-based estimator of the projection of the conditional effects onto a working model, and show under some conditions that we can achieve root-n consistent and asymptotically normal estimates. Second, we propose a fully nonparametric approach to estimation and show the conditions where this approach can achieve oracle rates of convergence. Finally, we propose a sensitivity analysis that identifies bounds on both the average and conditional effects in the presence of mediator-outcome confounding. We show that the same methods easily extend to allow estimation of these bounds. 
We conclude by examining heterogeneous effects with respect to the effect of COVID-19 vaccinations on depression during February 2021.","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"23 1","pages":""},"PeriodicalIF":1.4,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74453037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract In a recent work published in this journal, Philip Dawid has described a graphical causal model based on decision diagrams. This article describes how single-world intervention graphs (SWIGs) relate to these diagrams. In this way, a correspondence is established between Dawid's approach and those based on potential outcomes such as Robins' finest fully randomized causally interpreted structured tree graphs. In more detail, a reformulation of Dawid's theory is given that is essentially equivalent to his proposal and isomorphic to SWIGs.
{"title":"Potential outcome and decision theoretic foundations for statistical causality","authors":"Thomas S. Richardson, James M. Robins","doi":"10.1515/jci-2022-0012","DOIUrl":"https://doi.org/10.1515/jci-2022-0012","url":null,"abstract":"Abstract In a recent work published in this journal, Philip Dawid has described a graphical causal model based on decision diagrams. This article describes how single-world intervention graphs (SWIGs) relate to these diagrams. In this way, a correspondence is established between Dawid's approach and those based on potential outcomes such as Robins’ finest fully randomized causally interpreted structured tree graphs. In more detail, a reformulation of Dawid s theory is given that is essentially equivalent to his proposal and isomorphic to SWIGs.","PeriodicalId":48576,"journal":{"name":"Journal of Causal Inference","volume":"167 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134979760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}