Yamac Isik, Paidamoyo Chapfuwa, Connor Davis, Ricardo Henao
Recently proposed encoder-decoder structures for modeling Hawkes processes use transformer-inspired architectures, which encode the history of events via embeddings and self-attention mechanisms. These models deliver better prediction and goodness-of-fit than their RNN-based counterparts. However, they often require high computational and memory complexity and fail to adequately capture the triggering function of the underlying process. So motivated, we introduce an efficient and general encoding of the historical event sequence by replacing the complex (multilayered) attention structures with triggering kernels of the observed data. Noting the similarity between the triggering kernels of a point process and the attention scores, we use a triggering kernel to replace the weights used to build history representations. Our estimator for the triggering function is equipped with a sigmoid gating mechanism that captures local-in-time triggering effects that are otherwise challenging with standard decaying-over-time kernels. Further, taking both event type representations and temporal embeddings as inputs, the model learns the underlying triggering type-time kernel parameters given pairs of event types. We present experiments on synthetic and real data sets widely used by competing models, and further include a COVID-19 dataset to illustrate the use of longitudinal covariates. Our results show the proposed model outperforms existing approaches, is more efficient in terms of computational complexity, and yields interpretable results via direct application of the newly introduced kernel.
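For intuition, the triggering-kernel idea above can be sketched with a toy exponential kernel plus a sigmoid gate. This is an illustrative stand-in, not the paper's parameterization: the names `mu`, `alpha`, `beta`, `gate_center`, and `gate_scale` are assumptions, and the type-time kernel learned from event-type embeddings is not shown.

```python
import math

def hawkes_intensity(t, history, mu=0.1, alpha=0.5, beta=1.0):
    """Classic Hawkes intensity: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)

def gated_kernel(dt, alpha=0.5, beta=1.0, gate_center=2.0, gate_scale=4.0):
    """Decaying kernel modulated by a sigmoid gate over the elapsed time dt.

    The gate suppresses long-range excitation, giving the local-in-time
    triggering effects that a plain decaying kernel cannot express.
    """
    gate = 1.0 / (1.0 + math.exp(-gate_scale * (gate_center - dt)))
    return alpha * math.exp(-beta * dt) * gate
```

In the paper's encoder, values like `gated_kernel(t - t_i)` play the role that attention scores play in transformer-style models when building the history representation.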
{"title":"Hawkes Process with Flexible Triggering Kernels.","authors":"Yamac Isik, Paidamoyo Chapfuwa, Connor Davis, Ricardo Henao","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Recently proposed encoder-decoder structures for modeling Hawkes processes use transformer-inspired architectures, which encode the history of events via embeddings and self-attention mechanisms. These models deliver better prediction and goodness-of-fit than their RNN-based counterparts. However, they often require high computational and memory complexity and fail to adequately capture the triggering function of the underlying process. So motivated, we introduce an efficient and general encoding of the historical event sequence by replacing the complex (multilayered) attention structures with triggering kernels of the observed data. Noting the similarity between the triggering kernels of a point process and the attention scores, we use a triggering kernel to replace the weights used to build history representations. Our estimator for the triggering function is equipped with a sigmoid gating mechanism that captures local-in-time triggering effects that are otherwise challenging with standard decaying-over-time kernels. Further, taking both event type representations and temporal embeddings as inputs, the model learns the underlying triggering type-time kernel parameters given pairs of event types. We present experiments on synthetic and real data sets widely used by competing models, and further include a COVID-19 dataset to illustrate the use of longitudinal covariates. 
Our results show the proposed model outperforms existing approaches, is more efficient in terms of computational complexity, and yields interpretable results via direct application of the newly introduced kernel.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"219 ","pages":"308-320"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12443382/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145088473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eric Rawls, Casey Gilmore, Erich Kummerfeld, Kelvin Lim, Tasha Nienow
Here we advance a new approach for measuring EEG causal oscillatory connectivity, capitalizing on recent advances in causal discovery analysis for skewed time series data and in spectral parameterization of time-frequency (TF) data. We first parameterize EEG TF data into separate oscillatory and aperiodic components. We then measure causal interactions between separated oscillatory data with the recently proposed causal connectivity method Greedy Adjacencies and Non-Gaussian Orientations (GANGO). We apply GANGO to contemporaneous time series, then we extend the GANGO method to lagged data that control for temporal autocorrelation. We apply this approach to EEG data acquired in the context of a clinical trial investigating noninvasive transcranial direct current stimulation (tDCS) to treat executive dysfunction following mild traumatic brain injury (mTBI). First, we analyze whole-scalp oscillatory connectivity patterns using community detection. Then we demonstrate that tDCS increases the effect size of causal theta-band oscillatory connections between prefrontal sensors and the rest of the scalp, while simultaneously decreasing causal alpha-band oscillatory connections between prefrontal sensors and the rest of the scalp. Improved executive functioning following tDCS could result from increased prefrontal causal theta oscillatory influence, and decreased prefrontal alpha-band causal oscillatory influence.
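The first step, separating oscillatory from aperiodic spectral components, can be sketched as a straight-line fit in log-log space. This is a rough stand-in of mine for full spectral parameterization (no peak fitting, no knee), and the GANGO causal-search step is not shown:

```python
import numpy as np

def split_aperiodic(freqs, power):
    """Fit a 1/f^x aperiodic component in log-log space.

    Returns (log-offset, exponent x, oscillatory residual), where the residual
    is the power left after subtracting the fitted aperiodic component.
    """
    logf, logp = np.log10(freqs), np.log10(power)
    slope, intercept = np.polyfit(logf, logp, 1)  # line in log-log space
    aperiodic = 10 ** (intercept + slope * logf)
    return intercept, -slope, power - aperiodic
```

On a pure 1/f spectrum this recovers an exponent of 1 and a residual of zero; oscillations appear as positive bumps in the residual.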
{"title":"A Computational Framework for EEG Causal Oscillatory Connectivity.","authors":"Eric Rawls, Casey Gilmore, Erich Kummerfeld, Kelvin Lim, Tasha Nienow","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Here we advance a new approach for measuring EEG causal oscillatory connectivity, capitalizing on recent advances in causal discovery analysis for skewed time series data and in spectral parameterization of time-frequency (TF) data. We first parameterize EEG TF data into separate oscillatory and aperiodic components. We then measure causal interactions between separated oscillatory data with the recently proposed causal connectivity method Greedy Adjacencies and Non-Gaussian Orientations (GANGO). We apply GANGO to contemporaneous time series, then we extend the GANGO method to lagged data that control for temporal autocorrelation. We apply this approach to EEG data acquired in the context of a clinical trial investigating noninvasive transcranial direct current stimulation (tDCS) to treat executive dysfunction following mild traumatic brain injury (mTBI). First, we analyze whole-scalp oscillatory connectivity patterns using community detection. Then we demonstrate that tDCS increases the effect size of causal theta-band oscillatory connections between prefrontal sensors and the rest of the scalp, while simultaneously decreasing causal alpha-band oscillatory connections between prefrontal sensors and the rest of the scalp. Improved executive functioning following tDCS could result from increased prefrontal causal theta oscillatory influence, and decreased prefrontal alpha-band causal oscillatory influence.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"223 ","pages":"40-51"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11545965/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142633724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Inyoung Jun, Scott A Cohen, Sarah E Ser, Simone Marini, Robert J Lucero, Jiang Bian, Mattia Prosperi
Developing models for individualized, time-varying treatment optimization from observational data with large variable spaces, e.g., electronic health records (EHR), is problematic because of inherent, complex bias that can change over time. Traditional methods such as the g-formula are robust, but must identify critical subsets of variables due to combinatorial issues. Machine learning approaches such as causal survival forests have fewer constraints and can provide fine-tuned, individualized counterfactual predictions. In this study, we aimed to optimize time-varying antibiotic treatment (identifying treatment heterogeneity and conditional treatment effects) against invasive methicillin-resistant Staphylococcus aureus (MRSA) infections, using statewide EHR data collected in Florida, USA. While many previous studies focused on measuring the effects of the first empiric treatment (i.e., usually vancomycin), our study focuses on dynamic sequential treatment changes, comparing possible vancomycin switches with other antibiotics at clinically relevant time points, e.g., after obtaining a bacterial culture and susceptibility testing. Our study population included adult individuals admitted to the hospital with invasive MRSA. We collected demographic, clinical, medication, and laboratory information from the EHR for these patients. Then, we followed three sequential antibiotic choices (i.e., their empiric treatment, subsequent directed treatment, and final sustaining treatment), evaluating 30-day mortality as the outcome. We applied both causal survival forests and g-formula using different clinical intervention policies. We found that switching from vancomycin to another antibiotic improved survival probability, yet there was a benefit from initiating vancomycin compared to not using it at any time point. These findings show consistency with the empiric choice of vancomycin before confirmation of MRSA and shed light on how to manage switches mid-course. In conclusion, this application of causal machine learning on EHR demonstrates utility in modeling dynamic, heterogeneous treatment effects that cannot be evaluated precisely using randomized clinical trials.
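For intuition, the point-treatment version of the g-formula standardizes stratum-specific outcome means over the covariate distribution: E[Y^a] = Σ_l P(L=l) E[Y | A=a, L=l]. The paper applies its sequential, survival-time generalization; this single-time-point toy sketch with one discrete covariate is mine:

```python
from collections import defaultdict

def g_formula(records, a):
    """Point-treatment g-formula from (L, A, Y) tuples.

    Estimates the standardized mean outcome under intervention do(A=a) by
    averaging stratum-specific means E[Y | A=a, L=l] over the empirical
    distribution of L.
    """
    by_l = defaultdict(list)
    for l, trt, y in records:
        by_l[l].append((trt, y))
    n = len(records)
    est = 0.0
    for l, rows in by_l.items():
        ys = [y for trt, y in rows if trt == a]
        if not ys:
            raise ValueError(f"positivity violated in stratum L={l}")
        est += (len(rows) / n) * (sum(ys) / len(ys))
    return est
```

Contrasting `g_formula(records, 1)` with `g_formula(records, 0)` gives the standardized effect of always versus never treating in this toy setting.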
{"title":"Optimizing Dynamic Antibiotic Treatment Strategies against Invasive Methicillin-Resistant <i>Staphylococcus Aureus</i> Infections using Causal Survival Forests and G-Formula on Statewide Electronic Health Record Data.","authors":"Inyoung Jun, Scott A Cohen, Sarah E Ser, Simone Marini, Robert J Lucero, Jiang Bian, Mattia Prosperi","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Developing models for individualized, time-varying treatment optimization from observational data with large variable spaces, e.g., electronic health records (EHR), is problematic because of inherent, complex bias that can change over time. Traditional methods such as the g-formula are robust, but must identify critical subsets of variables due to combinatorial issues. Machine learning approaches such as causal survival forests have fewer constraints and can provide fine-tuned, individualized counterfactual predictions. In this study, we aimed to optimize time-varying antibiotic treatment (identifying treatment heterogeneity and conditional treatment effects) against invasive methicillin-resistant <i>Staphylococcus aureus</i> (MRSA) infections, using statewide EHR data collected in Florida, USA. While many previous studies focused on measuring the effects of the first empiric treatment (i.e., usually vancomycin), our study focuses on dynamic sequential treatment changes, comparing possible vancomycin switches with other antibiotics at clinically relevant time points, e.g., after obtaining a bacterial culture and susceptibility testing. Our study population included adult individuals admitted to the hospital with invasive MRSA. We collected demographic, clinical, medication, and laboratory information from the EHR for these patients. Then, we followed three sequential antibiotic choices (i.e., their empiric treatment, subsequent directed treatment, and final sustaining treatment), evaluating 30-day mortality as the outcome. We applied both causal survival forests and g-formula using different clinical intervention policies. We found that switching from vancomycin to another antibiotic improved survival probability, yet there was a benefit from initiating vancomycin compared to not using it at any time point. These findings show consistency with the empiric choice of vancomycin before confirmation of MRSA and shed light on how to manage switches mid-course. In conclusion, this application of causal machine learning on EHR demonstrates utility in modeling dynamic, heterogeneous treatment effects that cannot be evaluated precisely using randomized clinical trials.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"218 ","pages":"98-115"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10584043/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49686010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Joseph D Ramsey, Bryan Andrews
We give novel Python and R interfaces for the (Java) Tetrad project for causal modeling, search, and estimation. The Tetrad project is a mainstay in the literature, having been under consistent development for over 30 years. Some of its algorithms are now classics, like PC and FCI; others are recent developments. It is increasingly the case, however, that researchers need to access the underlying Java code from Python or R. Existing methods for doing this are inadequate. We provide new, up-to-date methods using the JPype Python-Java interface and the Reticulate Python-R interface, directly solving these issues. With the addition of some simple tools and the provision of working examples for both Python and R, using JPype and Reticulate to interface Python and R with Tetrad is straightforward and intuitive.
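A minimal sketch of the JPype pattern the interface builds on: start a JVM with the Tetrad jar on the classpath, then import Java classes as if they were Python modules. The jar path and the `edu.cmu.tetrad.*` class names below are assumptions for illustration; consult the py-tetrad examples for the actual API.

```python
import os

def run_pc(jar_path, data):
    """Start a JVM with the Tetrad jar and run a PC search (illustrative only).

    The Java class names referenced in the comments are assumptions, not a
    documented API; `data` stands for whatever tabular input the search needs.
    """
    if not os.path.exists(jar_path):
        raise FileNotFoundError(f"Tetrad jar not found: {jar_path}")
    import jpype
    import jpype.imports  # enables `from edu.cmu... import ...` style imports
    if not jpype.isJVMStarted():
        jpype.startJVM(classpath=[jar_path])
    # With the JVM up, Tetrad classes become importable, e.g. (hypothetical):
    # from edu.cmu.tetrad.search import Pc
    # return Pc(<independence test built from data>).search()
    ...
```

RPy-Tetrad layers Reticulate on top of this, so R users call the same Python bridge rather than a second Java binding.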
{"title":"Py-Tetrad and RPy-Tetrad: A New Python Interface with R Support for Tetrad Causal Search.","authors":"Joseph D Ramsey, Bryan Andrews","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We give novel Python and R interfaces for the (Java) Tetrad project for causal modeling, search, and estimation. The Tetrad project is a mainstay in the literature, having been under consistent development for over 30 years. Some of its algorithms are now classics, like PC and FCI; others are recent developments. It is increasingly the case, however, that researchers need to access the underlying Java code from Python or R. Existing methods for doing this are inadequate. We provide new, up-to-date methods using the JPype Python-Java interface and the Reticulate Python-R interface, directly solving these issues. With the addition of some simple tools and the provision of working examples for both Python and R, using JPype and Reticulate to interface Python and R with Tetrad is straightforward and intuitive.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"223 ","pages":"40-51"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11316512/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141918282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Somin Wadhwa, Jay DeYoung, Benjamin Nye, Silvio Amir, Byron C Wallace
Results from Randomized Controlled Trials (RCTs) establish the comparative effectiveness of interventions, and are in turn critical inputs for evidence-based care. However, results from RCTs are presented in (often unstructured) natural language articles describing the design, execution, and outcomes of trials; clinicians must manually extract findings pertaining to interventions and outcomes of interest from such articles. This onerous manual process has motivated work on (semi-)automating extraction of structured evidence from trial reports. In this work we propose and evaluate a text-to-text model built on instruction-tuned Large Language Models (LLMs) to jointly extract Interventions, Outcomes, and Comparators (ICO elements) from clinical abstracts, and infer the associated results reported. Manual (expert) and automated evaluations indicate that framing evidence extraction as a conditional generation task and fine-tuning LLMs for this purpose realizes considerable (~20 point absolute F1 score) gains over the previous SOTA. We perform ablations and error analyses to assess aspects that contribute to model performance, and to highlight potential directions for further improvements. We apply our model to a collection of published RCTs through mid-2022, and release a searchable database of structured findings: http://ico-relations.ebm-nlp.com.
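Framing extraction as conditional generation requires serializing ICO tuples into a flat target string the model can emit, plus an inverse parser for evaluation. The tag scheme below (`<int>`, `<cmp>`, `<out>`, `<res>`) is a hypothetical serialization of mine, not necessarily the one used in the paper:

```python
def linearize(findings):
    """Serialize (intervention, comparator, outcome, result) tuples into one target string."""
    return " | ".join(
        f"<int> {i} <cmp> {c} <out> {o} <res> {r}" for i, c, o, r in findings
    )

def parse(target):
    """Invert linearize(): recover the list of 4-tuples from a generated string."""
    findings = []
    for chunk in target.split(" | "):
        rest = chunk.removeprefix("<int> ")
        i, rest = rest.split(" <cmp> ", 1)
        c, rest = rest.split(" <out> ", 1)
        o, r = rest.split(" <res> ", 1)
        findings.append((i, c, o, r))
    return findings
```

Fine-tuning then amounts to training the LLM to map an abstract to `linearize(gold_findings)`, with `parse()` applied to generations before scoring.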
{"title":"Jointly Extracting Interventions, Outcomes, and Findings from RCT Reports with LLMs.","authors":"Somin Wadhwa, Jay DeYoung, Benjamin Nye, Silvio Amir, Byron C Wallace","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Results from Randomized Controlled Trials (RCTs) establish the comparative effectiveness of interventions, and are in turn critical inputs for evidence-based care. However, results from RCTs are presented in (often unstructured) natural language articles describing the design, execution, and outcomes of trials; clinicians must manually extract findings pertaining to interventions and outcomes of interest from such articles. This onerous manual process has motivated work on (semi-)automating extraction of structured evidence from trial reports. In this work we propose and evaluate a text-to-text model built on instruction-tuned Large Language Models (LLMs) to jointly extract <i>Interventions</i>, <i>Outcomes</i>, and <i>Comparators</i> (ICO elements) from clinical abstracts, and infer the associated results reported. Manual (expert) and automated evaluations indicate that framing evidence extraction as a conditional generation task and fine-tuning LLMs for this purpose realizes considerable (~20 point absolute F1 score) gains over the previous SOTA. We perform ablations and error analyses to assess aspects that contribute to model performance, and to highlight potential directions for further improvements. We apply our model to a collection of published RCTs through mid-2022, and release a searchable database of structured findings: http://ico-relations.ebm-nlp.com.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"219 ","pages":"754-771"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12451563/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145133210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cheng Cheng, Jeremy C Weiss
Reliable extraction of temporal relations from clinical notes is a growing need in many clinical research domains. Our work introduces typed markers to the task of clinical temporal relation extraction. We demonstrate that the addition of medical entity information to clinical text as tags with context sentences then input to a transformer-based architecture can outperform more complex systems requiring feature engineering and temporal reasoning. We propose several strategies of typed marker creation that incorporate entity type information at different granularities, with extensive experiments to test their effectiveness. Our system establishes the best result on I2B2, a clinical benchmark dataset for temporal relation extraction, with an F1 of 83.5%, a substantial 3.3% improvement over the previous best system.
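The core preprocessing step, wrapping entity mentions in typed tags before the sentence is fed to the transformer, can be sketched as follows. The bracketed tag format is illustrative; the paper compares several marker designs and granularities:

```python
def add_typed_markers(text, spans):
    """Wrap entity spans in typed markers, e.g. [DRUG] ... [/DRUG].

    spans: list of (start, end, entity_type) character offsets into text.
    Insertions proceed right-to-left so earlier offsets stay valid.
    """
    for start, end, etype in sorted(spans, key=lambda s: -s[0]):
        text = (text[:start] + f"[{etype}] " + text[start:end]
                + f" [/{etype}]" + text[end:])
    return text
```

The marked sentence, together with its surrounding context sentences, is then tokenized as ordinary text for relation classification.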
{"title":"Typed Markers and Context for Clinical Temporal Relation Extraction.","authors":"Cheng Cheng, Jeremy C Weiss","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Reliable extraction of temporal relations from clinical notes is a growing need in many clinical research domains. Our work introduces typed markers to the task of clinical temporal relation extraction. We demonstrate that the addition of medical entity information to clinical text as tags with context sentences then input to a transformer-based architecture can outperform more complex systems requiring feature engineering and temporal reasoning. We propose several strategies of typed marker creation that incorporate entity type information at different granularities, with extensive experiments to test their effectiveness. Our system establishes the best result on I2B2, a clinical benchmark dataset for temporal relation extraction, with a F1 at 83.5% that provides a substantial 3.3% improvement over the previous best system.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"219 ","pages":"94-109"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10929572/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140112398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hyungrok Do, Yuxin Chang, Yoon Sang Cho, Padhraic Smyth, Judy Zhong
Survival analysis is a general framework for predicting the time until a specific event occurs, often in the presence of censoring. Although this framework is widely used in practice, few studies to date have considered fairness for time-to-event outcomes, despite recent significant advances in the algorithmic fairness literature more broadly. In this paper, we propose a framework to achieve demographic parity in survival analysis models by minimizing the mutual information between predicted time-to-event and sensitive attributes. We show that our approach effectively minimizes mutual information to encourage statistical independence of time-to-event predictions and sensitive attributes. Furthermore, we propose four types of disparity assessment metrics based on common survival analysis metrics. Through experiments on multiple benchmark datasets, we demonstrate that by minimizing the dependence between the prediction and the sensitive attributes, our method can systematically improve the fairness of survival predictions and is robust to censoring.
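The fairness penalty rests on a mutual-information term between predictions and sensitive attributes: demographic parity corresponds to MI = 0. The paper minimizes this quantity for continuous time-to-event predictions; the plug-in estimator below, for discretized variables, is a simplification of mine for intuition only:

```python
import math
from collections import Counter

def mutual_information(x, y):
    """Plug-in mutual information (in nats) between two equal-length discrete sequences.

    MI = sum_{a,b} p(a,b) * log( p(a,b) / (p(a) p(b)) ), zero iff x and y
    are empirically independent.
    """
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    return sum(
        (c / n) * math.log((c * n) / (px[a] * py[b]))
        for (a, b), c in pxy.items()
    )
```

A fairness-regularized loss would then take the form `task_loss + lam * mi_estimate`, driving predictions toward independence from the sensitive attribute.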
{"title":"Fair Survival Time Prediction via Mutual Information Minimization.","authors":"Hyungrok Do, Yuxin Chang, Yoon Sang Cho, Padhraic Smyth, Judy Zhong","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Survival analysis is a general framework for predicting the time until a specific event occurs, often in the presence of censoring. Although this framework is widely used in practice, few studies to date have considered fairness for time-to-event outcomes, despite recent significant advances in the algorithmic fairness literature more broadly. In this paper, we propose a framework to achieve demographic parity in survival analysis models by minimizing the mutual information between predicted time-to-event and sensitive attributes. We show that our approach effectively minimizes mutual information to encourage statistical independence of time-to-event predictions and sensitive attributes. Furthermore, we propose four types of disparity assessment metrics based on common survival analysis metrics. Through experiments on multiple benchmark datasets, we demonstrate that by minimizing the dependence between the prediction and the sensitive attributes, our method can systematically improve the fairness of survival predictions and is robust to censoring.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"219 ","pages":"128-149"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11067550/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140861818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eric Prince, Todd C Hankinson, Carsten Görg
We introduce the Explainable Analytical Systems Lab (EASL) framework, an end-to-end solution designed to facilitate the development, implementation, and evaluation of clinical machine learning (ML) tools. EASL is highly versatile and applicable to a variety of contexts and includes resources for data management, ML model development, visualization and user interface development, service hosting, and usage analytics. To demonstrate its practical applications, we present the EASL framework in the context of a case study: designing and evaluating a deep learning classifier to predict diagnoses from medical imaging. The framework is composed of three modules, each with its own set of resources. The Workbench module stores data and develops initial ML models, the Canvas module contains a medical imaging viewer and web development framework, and the Studio module hosts the ML model and provides web analytics and support for conducting user studies. EASL encourages model developers to take a holistic view by integrating the model development, implementation, and evaluation into one framework, and thus ensures that models are both effective and reliable when used in a clinical setting. EASL contributes to our understanding of machine learning applied to healthcare by providing a comprehensive framework that makes it easier to develop and evaluate ML tools within a clinical setting.
{"title":"EASL: A Framework for Designing, Implementing, and Evaluating ML Solutions in Clinical Healthcare Settings.","authors":"Eric Prince, Todd C Hankinson, Carsten Görg","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We introduce the Explainable Analytical Systems Lab (EASL) framework, an end-to-end solution designed to facilitate the development, implementation, and evaluation of clinical machine learning (ML) tools. EASL is highly versatile and applicable to a variety of contexts and includes resources for data management, ML model development, visualization and user interface development, service hosting, and usage analytics. To demonstrate its practical applications, we present the EASL framework in the context of a case study: designing and evaluating a deep learning classifier to predict diagnoses from medical imaging. The framework is composed of three modules, each with its own set of resources. The Workbench module stores data and develops initial ML models, the Canvas module contains a medical imaging viewer and web development framework, and the Studio module hosts the ML model and provides web analytics and support for conducting user studies. EASL encourages model developers to take a holistic view by integrating the model development, implementation, and evaluation into one framework, and thus ensures that models are both effective and reliable when used in a clinical setting. EASL contributes to our understanding of machine learning applied to healthcare by providing a comprehensive framework that makes it easier to develop and evaluate ML tools within a clinical setting.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"219 ","pages":"612-630"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11235083/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141581781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jacob M Chen, Daniel Malinsky, Rohit Bhattacharya
We consider missingness in the context of causal inference when the outcome of interest may be missing. If the outcome directly affects its own missingness status, i.e., it is "self-censoring", this may lead to severely biased causal effect estimates. Miao et al. [2015] proposed the shadow variable method to correct for bias due to self-censoring; however, verifying the required model assumptions can be difficult. Here, we propose a test based on a randomized incentive variable offered to encourage reporting of the outcome that can be used to verify identification assumptions that are sufficient to correct for both self-censoring and confounding bias. Concretely, the test confirms whether a given set of pre-treatment covariates is sufficient to block all backdoor paths between the treatment and outcome as well as all paths between the treatment and missingness indicator after conditioning on the outcome. We show that under these conditions, the causal effect is identified by using the treatment as a shadow variable, and it leads to an intuitive inverse probability weighting estimator that uses a product of the treatment and response weights. We evaluate the efficacy of our test and downstream estimator via simulations.
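The estimator described, inverse probability weighting with a product of treatment and response weights, reduces to a short function once the weights are available. In this sketch of mine the probabilities are supplied directly; in the paper the response weights come from the shadow-variable identification rather than being known:

```python
def ipw_product(records, a):
    """Estimate E[Y(a)] from (A, Y_or_None, p_treat, p_respond) tuples.

    Y is None when the outcome is self-censored. Each observed row with A == a
    is inverse-weighted by the product of its treatment probability (for the
    arm actually received) and its response probability.
    """
    n = len(records)
    total = 0.0
    for trt, y, p_treat, p_resp in records:
        if trt == a and y is not None:
            p_arm = p_treat if a == 1 else 1.0 - p_treat
            total += y / (p_arm * p_resp)
    return total / n
```

Contrasting the estimates for `a=1` and `a=0` gives the IPW estimate of the average treatment effect under the stated identification conditions.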
{"title":"Causal Inference With Outcome-Dependent Missingness And Self-Censoring.","authors":"Jacob M Chen, Daniel Malinsky, Rohit Bhattacharya","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We consider missingness in the context of causal inference when the outcome of interest may be missing. If the outcome directly affects its own missingness status, i.e., it is \"self-censoring\", this may lead to severely biased causal effect estimates. Miao et al. [2015] proposed the shadow variable method to correct for bias due to self-censoring; however, verifying the required model assumptions can be difficult. Here, we propose a test based on a randomized incentive variable offered to encourage reporting of the outcome that can be used to verify identification assumptions that are sufficient to correct for both self-censoring and confounding bias. Concretely, the test confirms whether a given set of pre-treatment covariates is sufficient to block all backdoor paths between the treatment and outcome as well as all paths between the treatment and missingness indicator after conditioning on the outcome. We show that under these conditions, the causal effect is identified by using the treatment as a shadow variable, and it leads to an intuitive inverse probability weighting estimator that uses a product of the treatment and response weights. We evaluate the efficacy of our test and downstream estimator via simulations.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"216 ","pages":"358-368"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11905187/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143627020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Karine Karine, Predrag Klasnja, Susan A Murphy, Benjamin M Marlin
Just-in-Time Adaptive Interventions (JITAIs) are a class of personalized health interventions developed within the behavioral science community. JITAIs aim to provide the right type and amount of support by iteratively selecting a sequence of intervention options from a pre-defined set of components in response to each individual's time-varying state. In this work, we explore the application of reinforcement learning methods to the problem of learning intervention option selection policies. We study the effect of context inference error and partial observability on the ability to learn effective policies. Our results show that the propagation of uncertainty from context inferences is critical to improving intervention efficacy as context uncertainty increases, while policy gradient algorithms can provide remarkable robustness to partially observed behavioral state information.
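As a toy illustration of policy-gradient intervention selection, the gradient-bandit sketch below (Sutton-Barto style, with a running-average baseline) learns a softmax preference over two intervention options with noisy rewards. All parameters are mine, and context inference and partial observability are omitted entirely:

```python
import math
import random

def softmax(prefs):
    """Numerically stable softmax over a list of preferences."""
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def train_bandit(reward_means, steps=3000, lr=0.1, noise=0.1, seed=0):
    """Gradient-bandit policy over intervention options with Gaussian reward noise."""
    rng = random.Random(seed)
    prefs = [0.0] * len(reward_means)
    baseline = 0.0
    for t in range(1, steps + 1):
        probs = softmax(prefs)
        a = rng.choices(range(len(prefs)), probs)[0]
        r = reward_means[a] + rng.gauss(0.0, noise)
        baseline += (r - baseline) / t  # running average reward as baseline
        for i, p in enumerate(probs):
            grad = (1.0 if i == a else 0.0) - p
            prefs[i] += lr * (r - baseline) * grad
    return softmax(prefs)
```

In a JITAI setting the analogue of `reward_means` is a context-dependent proximal outcome, which is where the paper's questions about context inference error and partial observability enter.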
{"title":"Assessing the Impact of Context Inference Error and Partial Observability on RL Methods for Just-In-Time Adaptive Interventions.","authors":"Karine Karine, Predrag Klasnja, Susan A Murphy, Benjamin M Marlin","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Just-in-Time Adaptive Interventions (JITAIs) are a class of personalized health interventions developed within the behavioral science community. JITAIs aim to provide the right type and amount of support by iteratively selecting a sequence of intervention options from a pre-defined set of components in response to each individual's time varying state. In this work, we explore the application of reinforcement learning methods to the problem of learning intervention option selection policies. We study the effect of context inference error and partial observability on the ability to learn effective policies. Our results show that the propagation of uncertainty from context inferences is critical to improving intervention efficacy as context uncertainty increases, while policy gradient algorithms can provide remarkable robustness to partially observed behavioral state information.</p>","PeriodicalId":74504,"journal":{"name":"Proceedings of machine learning research","volume":"216 ","pages":"1047-1057"},"PeriodicalIF":0.0,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10506656/pdf/nihms-1926373.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10309493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}