Reliable Estimation of Causal Effects Using Predictive Models

IF 1.0 · CAS Quartile 4 (Computer Science) · JCR Q4 (Computer Science, Artificial Intelligence) · International Journal on Artificial Intelligence Tools · Pub Date: 2024-04-25 · DOI: 10.1142/s0218213024600066
Mahdi Hadj Ali, Yann Le Biannic, Pierre-Henri Wuillemin
{"title":"利用预测模型可靠地估计因果效应","authors":"Mahdi Hadj Ali, Yann Le Biannic, Pierre-Henri Wuillemin","doi":"10.1142/s0218213024600066","DOIUrl":null,"url":null,"abstract":"In recent years, machine learning algorithms have been widely adopted across many fields due to their efficiency and versatility. However, the complexity of predictive models has led to a lack of interpretability in automatic decision-making. Recent works have improved general interpretability by estimating the contributions of input features to the predictions of a pre-trained model. Drawing on these improvements, practitioners seek to gain causal insights into the underlying data-generating mechanisms. To this end, works have attempted to integrate causal knowledge into interpretability, as non-causal techniques can lead to paradoxical explanations. In this paper, we argue that each question about a causal effect requires its own reasoning and that relying on an initial predictive model trained on an arbitrary set of variables may result in quantification problems when estimating all possible effects. As an alternative, we advocate for a query-driven methodology that addresses each causal question separately. Assuming that the causal structure relating the variables is known, we propose to employ the tools of causal inference to quantify a particular effect as a formula involving observable probabilities. We then derive conditions on the selection of variables to train a predictive model that is tailored for the causal question of interest. Finally, we identify suitable eXplainable AI (XAI) techniques to estimate causal effects from the model predictions. Furthermore, we introduce a novel method for estimating direct effects through intervention on causal mechanisms.","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.0000,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reliable Estimation of Causal Effects Using Predictive Models\",\"authors\":\"Mahdi Hadj Ali, Yann Le Biannic, Pierre-Henri Wuillemin\",\"doi\":\"10.1142/s0218213024600066\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In recent years, machine learning algorithms have been widely adopted across many fields due to their efficiency and versatility. However, the complexity of predictive models has led to a lack of interpretability in automatic decision-making. Recent works have improved general interpretability by estimating the contributions of input features to the predictions of a pre-trained model. Drawing on these improvements, practitioners seek to gain causal insights into the underlying data-generating mechanisms. To this end, works have attempted to integrate causal knowledge into interpretability, as non-causal techniques can lead to paradoxical explanations. In this paper, we argue that each question about a causal effect requires its own reasoning and that relying on an initial predictive model trained on an arbitrary set of variables may result in quantification problems when estimating all possible effects. As an alternative, we advocate for a query-driven methodology that addresses each causal question separately. Assuming that the causal structure relating the variables is known, we propose to employ the tools of causal inference to quantify a particular effect as a formula involving observable probabilities. 
We then derive conditions on the selection of variables to train a predictive model that is tailored for the causal question of interest. Finally, we identify suitable eXplainable AI (XAI) techniques to estimate causal effects from the model predictions. Furthermore, we introduce a novel method for estimating direct effects through intervention on causal mechanisms.\",\"PeriodicalId\":50280,\"journal\":{\"name\":\"International Journal on Artificial Intelligence Tools\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2024-04-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal on Artificial Intelligence Tools\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1142/s0218213024600066\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal on Artificial Intelligence Tools","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1142/s0218213024600066","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, machine learning algorithms have been widely adopted across many fields due to their efficiency and versatility. However, the complexity of predictive models has led to a lack of interpretability in automatic decision-making. Recent works have improved general interpretability by estimating the contributions of input features to the predictions of a pre-trained model. Drawing on these improvements, practitioners seek to gain causal insights into the underlying data-generating mechanisms. To this end, works have attempted to integrate causal knowledge into interpretability, as non-causal techniques can lead to paradoxical explanations. In this paper, we argue that each question about a causal effect requires its own reasoning and that relying on an initial predictive model trained on an arbitrary set of variables may result in quantification problems when estimating all possible effects. As an alternative, we advocate for a query-driven methodology that addresses each causal question separately. Assuming that the causal structure relating the variables is known, we propose to employ the tools of causal inference to quantify a particular effect as a formula involving observable probabilities. We then derive conditions on the selection of variables to train a predictive model that is tailored for the causal question of interest. Finally, we identify suitable eXplainable AI (XAI) techniques to estimate causal effects from the model predictions. Furthermore, we introduce a novel method for estimating direct effects through intervention on causal mechanisms.
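The abstract does not show what "quantifying a particular effect as a formula involving observable probabilities" looks like in practice. The following is a minimal, hypothetical sketch of the standard backdoor-adjustment pattern this alludes to, not the authors' implementation: the query E[Y | do(T=t)] is identified as the observable formula Σ_z E[Y | T=t, Z=z] P(Z=z), a predictive model is trained only on the treatment T and a valid adjustment set Z, and the effect is read off averaged predictions. All variable names and the synthetic data are illustrative assumptions.

```python
# Hypothetical sketch of backdoor adjustment via a query-tailored predictive
# model; illustrative only, not the paper's actual method.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                          # observed confounder Z
t = (z + rng.normal(size=n) > 0).astype(float)  # binary treatment T, influenced by Z
y = 2.0 * t + z + rng.normal(size=n)            # outcome Y; true effect of T is 2

# Train a model f(T, Z) ~ E[Y | T, Z] on only the treatment and the
# adjustment set, rather than on an arbitrary set of variables.
model = GradientBoostingRegressor().fit(np.column_stack([t, z]), y)

# Backdoor adjustment: E[Y | do(T=t)] = sum_z E[Y | T=t, Z=z] P(Z=z),
# approximated by averaging predictions over the empirical distribution of Z.
def expected_outcome_under_do(t_value):
    inputs = np.column_stack([np.full(n, t_value), z])
    return model.predict(inputs).mean()

ate = expected_outcome_under_do(1.0) - expected_outcome_under_do(0.0)
print(f"Estimated average treatment effect: {ate:.2f}")  # close to 2.0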
Source Journal
International Journal on Artificial Intelligence Tools (Engineering & Technology · Computer Science: Interdisciplinary Applications)
CiteScore: 2.10
Self-citation rate: 9.10%
Articles per year: 66
Review time: 8.5 months
Journal description: The International Journal on Artificial Intelligence Tools (IJAIT) provides an interdisciplinary forum in which AI scientists and professionals can share their research results and report new advances on AI tools or tools that use AI. Tools refer to architectures, languages or algorithms, which constitute the means connecting theory with applications. So, IJAIT is a medium for promoting general and/or special purpose tools, which are very important for the evolution of science and manipulation of knowledge. IJAIT can also be used as a test ground for new AI tools. Topics covered by IJAIT include but are not limited to: AI in Bioinformatics, AI for Service Engineering, AI for Software Engineering, AI for Ubiquitous Computing, AI for Web Intelligence Applications, AI Parallel Processing Tools (hardware/software), AI Programming Languages, AI Tools for CAD and VLSI Analysis/Design/Testing, AI Tools for Computer Vision and Speech Understanding, AI Tools for Multimedia, Cognitive Informatics, Data Mining and Machine Learning Tools, Heuristic and AI Planning Strategies and Tools, Image Understanding, Integrated/Hybrid AI Approaches, Intelligent System Architectures, Knowledge-Based/Expert Systems, Knowledge Management and Processing Tools, Knowledge Representation Languages, Natural Language Understanding, Neural Networks for AI, Object-Oriented Programming for AI, Reasoning and Evolution of Knowledge Bases, Self-Healing and Autonomous Systems, and Software Engineering for AI.
Latest articles in this journal
Enhancing Constraint Acquisition through Hybrid Learning: An Integration of Passive and Active Learning Strategies
A systematic review on drug to Drug Interaction Prediction and Cryptographic Mechanism for Secure Drug Discovery using AI techniques
Towards Domain Adaptive Learning-based Variation Autoencoder Emotional Analysis in English Teaching
Music Emotion Intensity Estimation using Transfer Ordinal Label Learning under Heterogeneous Scenes
Path Planning for Mobile Robots Using Transfer Reinforcement Learning