Bridging the AI/ML gap with explainable symbolic causal models using information theory
Stuart W. Card
Defense + Commercial Sensing, pp. 1305802 - 1305802-4, published 2024-06-06
DOI: 10.1117/12.3014447
Abstract
We report favorable preliminary findings of work in progress bridging the Artificial Intelligence (AI) gap between bottom-up, data-driven Machine Learning (ML) and top-down, conceptually driven symbolic reasoning. Our overall goal is the automatic generation, maintenance, and utilization of explainable, parsimonious, plausibly causal, probably approximately correct, hybrid symbolic/numeric models of the world, the self, and other agents, for prediction, what-if (counterfactual) analysis, and control. Our earlier Evolutionary Learning with Information Theoretic Evaluation of Ensembles (ELITE2) techniques quantify the strengths of arbitrary multivariate nonlinear statistical dependencies before discovering the forms by which observed variables may drive others. We extend these techniques to apply Granger causality, expressed in terms of conditional Mutual Information (MI), to distinguish causal relationships and determine their directions. Because MI can reflect one observable driving a second directly or via a mediator, two observables being driven by a common cause, etc., we will apply Pearl causality, with its back-door and front-door adjustments and criteria, to untangle the causal graph. Initial efforts verified that our information-theoretic indices detect causality in noise-corrupted data despite complex relationships among hidden variables with chaotic dynamics disturbed by process noise. The next step is to apply these information-theoretic filters in Genetic Programming (GP) to reduce the population of discovered statistical dependencies to plausibly causal relationships, represented symbolically for use by a reasoning engine in a cognitive architecture. Success could bring broader generalization, using not just learned patterns but learned general principles, enabling AI/ML-based systems to autonomously navigate complex unknown environments and handle “black swans”.
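The core screening idea in the abstract — testing Granger causality via conditional MI, so that the direction of a dependency can be read off from which lagged variable informs which — can be illustrated with a minimal sketch. This is not the authors' ELITE2 code; the synthetic system, the histogram-based entropy estimator, and all function names are illustrative assumptions. It estimates the transfer entropy I(y_t ; x_{t-1} | y_{t-1}), which is exactly the conditional MI form of a one-lag Granger test.

```python
# Sketch (assumption, not ELITE2): Granger-style causal-direction screening
# via conditional mutual information (transfer entropy), estimated with
# simple histogram binning on a synthetic driven system.
import numpy as np

def joint_entropy(*cols, bins=8):
    """Joint Shannon entropy (nats) of the discretized columns."""
    counts, _ = np.histogramdd(np.column_stack(cols), bins=bins)
    p = counts.ravel() / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def transfer_entropy(x, y, bins=8):
    """I(y_t ; x_{t-1} | y_{t-1})
       = H(y_t, y_{t-1}) + H(x_{t-1}, y_{t-1})
         - H(y_t, x_{t-1}, y_{t-1}) - H(y_{t-1})."""
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    return (joint_entropy(yt, yp, bins=bins)
            + joint_entropy(xp, yp, bins=bins)
            - joint_entropy(yt, xp, yp, bins=bins)
            - joint_entropy(yp, bins=bins))

# Synthetic ground truth: x drives y with a one-step lag plus process noise.
rng = np.random.default_rng(0)
n = 20000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * x[t - 1] + 0.3 * rng.normal()

te_xy = transfer_entropy(x, y)  # substantial: past x informs current y
te_yx = transfer_entropy(y, x)  # near zero: no reverse coupling
print(te_xy, te_yx)
```

In a real pipeline along the abstract's lines, such an index would serve only as a filter: a clearly asymmetric conditional MI marks a dependency as plausibly causal and directed, after which Pearl-style adjustments would be needed to rule out mediators and common causes that raw MI cannot distinguish.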