
Applied AI letters: Latest publications

Remembering for the right reasons: Explanations reduce catastrophic forgetting
Pub Date : 2021-11-05 DOI: 10.1002/ail2.44
Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell

The goal of continual learning (CL) is to learn a sequence of tasks without suffering from the phenomenon of catastrophic forgetting. Previous work has shown that leveraging memory in the form of a replay buffer can reduce performance degradation on prior tasks. We hypothesize that forgetting can be further reduced when the model is encouraged to remember the evidence for previously made decisions. As a first step towards exploring this hypothesis, we propose a simple novel training paradigm, called Remembering for the Right Reasons (RRR), that additionally stores visual model explanations for each example in the buffer and ensures the model has “the right reasons” for its predictions by encouraging its explanations to remain consistent with those used to make decisions at training time. Without this constraint, explanations drift and forgetting increases as conventional continual learning algorithms learn new tasks. We demonstrate how RRR can be easily added to any memory- or regularization-based approach, resulting in reduced forgetting and, more importantly, improved model explanations. We evaluated our approach in the standard and few-shot settings and observed a consistent improvement across various CL approaches using different architectures and explanation-generation techniques, demonstrating a promising connection between explainability and continual learning. Our code is available at https://github.com/SaynaEbrahimi/Remembering-for-the-Right-Reasons.
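
As a rough illustration of the training paradigm described above, the sketch below adds an explanation-consistency term to a standard replay objective. It is a minimal PyTorch-style sketch, assuming a replay buffer that stores (image, label, saliency) triples and using plain input-gradient saliency as a stand-in for the explanation method; the function names and the `lam` weight are illustrative, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def saliency_map(model, x, y):
    """Input-gradient saliency: |d score_y / d x|, summed over channels."""
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad.abs().sum(dim=1)  # (B, H, W)

def rrr_step(model, batch, replay, optimizer, lam=1.0):
    """One step: current-task loss + replay loss + explanation-consistency loss."""
    x, y = batch                   # current-task examples
    rx, ry, rsal = replay          # buffered examples with stored saliency maps
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(rx), ry)
    # Keep the model's current explanation close to the one stored at training time.
    loss = loss + lam * F.l1_loss(saliency_map(model, rx, ry), rsal)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```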

Citations: 0
Patching interpretable And-Or-Graph knowledge representation using augmented reality
Pub Date : 2021-10-20 DOI: 10.1002/ail2.43
Hangxin Liu, Yixin Zhu, Song-Chun Zhu

We present a novel augmented reality (AR) interface to provide effective means to diagnose a robot's erroneous behaviors, endow it with new skills, and patch its knowledge structure represented by an And-Or-Graph (AOG). Specifically, an AOG representation of opening medicine bottles is learned from human demonstration and yields a hierarchical structure that captures the spatiotemporal compositional nature of the given task, which is highly interpretable for the users. Through a series of psychological experiments, we demonstrate that the explanations of a robotic system, inherited from and produced by the AOG, can better foster human trust compared to other forms of explanations. Moreover, by visualizing the knowledge structure and robot states, the AR interface allows human users to intuitively understand what the robot knows, supervise the robot's task planner, and interactively teach the robot with new actions. Together, users can quickly identify the reasons for failures and conveniently patch the current knowledge structure to prevent future errors. This capability demonstrates the interpretability of our knowledge representation and the new forms of interactions afforded by the proposed AR interface.
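
For readers unfamiliar with the representation, the snippet below sketches what a tiny And-Or-Graph fragment for the bottle-opening task might look like as a data structure: AND nodes decompose a task into ordered sub-steps, OR nodes choose among alternative ways to perform it. The node names and decomposition are purely illustrative and are not taken from the paper's learned AOG.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AOGNode:
    """A node in a simple And-Or-Graph."""
    name: str
    kind: str                       # "and", "or", or "terminal"
    children: List["AOGNode"] = field(default_factory=list)

# Toy fragment of an 'open medicine bottle' task (structure is illustrative).
open_bottle = AOGNode("open-bottle", "and", [
    AOGNode("grasp-lid", "terminal"),
    AOGNode("unlock", "or", [AOGNode("push-and-twist", "terminal"),
                             AOGNode("pinch-and-twist", "terminal")]),
    AOGNode("twist-off", "terminal"),
])
```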

Citations: 1
Explainable, interactive content-based image retrieval
Pub Date : 2021-10-19 DOI: 10.1002/ail2.41
Bhavan Vasu, Brian Hu, Bo Dong, Roddy Collins, Anthony Hoogs

Quantifying the value of explanations in a human-in-the-loop (HITL) system is difficult. Previous methods either measure explanation-specific values that do not correspond to user tasks and needs or poll users on how useful they find the explanations to be. In this work, we quantify how much explanations help the user through a utility-based paradigm that measures the change in task performance when using explanations versus not using them. Our chosen task is content-based image retrieval (CBIR), which has well-established baselines and performance metrics independent of explainability. We extend an existing HITL image retrieval system that incorporates user feedback with similarity-based saliency maps (SBSM) that indicate to the user which parts of the retrieved images are most similar to the query image. The system helps the user understand what it is paying attention to through saliency maps, and the user helps the system understand their goal through saliency-guided relevance feedback. Using the MS-COCO dataset, a standard object detection and segmentation dataset, we conducted extensive, crowd-sourced experiments validating that SBSM improves interactive image retrieval. Although the performance increase is modest in the general case, in more difficult cases such as cluttered scenes, using explanations yields a 6.5% increase in accuracy. To the best of our knowledge, this is the first large-scale user study showing that visual saliency map explanations improve performance on a real-world, interactive task. Our utility-based evaluation paradigm is general and potentially applicable to any task for which explainability can be incorporated.
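
The sketch below illustrates one way a similarity-based saliency map can be computed: occlude patches of a retrieved image and record how much its embedding similarity to the query drops. This is a generic occlusion-based stand-in, assuming some `embed` function that maps images to feature vectors; it is not the SBSM implementation from the paper.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def similarity_saliency(embed, query_img, retrieved_img, patch=16, stride=16):
    """Occlude patches of the retrieved image and record how much its
    cosine similarity to the query embedding drops; larger drop = more salient."""
    q = embed(query_img)
    base = cosine(q, embed(retrieved_img))
    H, W, _ = retrieved_img.shape
    sal = np.zeros((H, W))
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            occluded = retrieved_img.copy()
            occluded[y:y + patch, x:x + patch] = 0   # mask one patch
            drop = base - cosine(q, embed(occluded))
            sal[y:y + patch, x:x + patch] += max(drop, 0.0)
    return sal / (sal.max() + 1e-8)
```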

Citations: 1
User-guided global explanations for deep image recognition: A user study
Pub Date : 2021-10-19 DOI: 10.1002/ail2.42
Mandana Hamidi-Haines, Zhongang Qi, Alan Fern, Fuxin Li, Prasad Tadepalli

We study a user-guided approach for producing global explanations of deep networks for image recognition. The global explanations are produced with respect to a test data set and give the overall frequency of different “recognition reasons” across the data. Each reason corresponds to a small number of the most significant human-recognizable visual concepts used by the network. The key challenge is that the visual concepts cannot be predetermined and those concepts will often not correspond to existing vocabulary or have labeled data sets. We address this issue via an interactive-naming interface, which allows users to freely cluster significant image regions in the data into visually similar concepts. Our main contribution is a user study on two visual recognition tasks. The results show that the participants were able to produce a small number of visual concepts sufficient for explanation and that there was significant agreement among the concepts, and hence global explanations, produced by different participants.
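
The aggregation step described above, turning per-image "recognition reasons" into a global explanation, can be pictured as a simple frequency count over user-named concept clusters, as in the sketch below; the concept names are hypothetical and the code is only an illustration of the idea.

```python
from collections import Counter

def global_explanation(per_image_concepts):
    """Aggregate per-image recognition reasons (user-named concept clusters)
    into a global explanation: how often each concept drives a prediction."""
    counts = Counter(c for concepts in per_image_concepts for c in concepts)
    total = sum(counts.values())
    return {concept: n / total for concept, n in counts.most_common()}

# e.g. global_explanation([["beak", "wing"], ["wing"], ["beak"]])
# -> {'beak': 0.5, 'wing': 0.5}
```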

Citations: 0
XAITK: The explainable AI toolkit
Pub Date : 2021-10-18 DOI: 10.1002/ail2.40
Brian Hu, Paul Tunison, Bhavan Vasu, Nitesh Menon, Roddy Collins, Anthony Hoogs

Recent advances in artificial intelligence (AI), driven mainly by deep neural networks, have yielded remarkable progress in fields such as computer vision, natural language processing, and reinforcement learning. Despite these successes, the inability to predict how AI systems will behave “in the wild” impacts almost all stages of planning and deployment, including research and development, verification and validation, and user trust and acceptance. The field of explainable artificial intelligence (XAI) seeks to develop techniques enabling AI algorithms to generate explanations of their results; generally these are human-interpretable representations or visualizations that are meant to “explain” how the system produced its outputs. We introduce the Explainable AI Toolkit (XAITK), a DARPA-sponsored effort that builds on results from the 4-year DARPA XAI program. The XAITK has two goals: (a) to consolidate research results from DARPA XAI into a single publicly accessible repository; and (b) to identify operationally relevant capabilities developed on DARPA XAI and assist in their transition to interested partners. We first describe the XAITK website and associated capabilities. These place the research results from DARPA XAI in the wider context of general research in the field of XAI, and include performer contributions of code, data, publications, and reports. We then describe the XAITK analytics and autonomy software frameworks. These are Python-based frameworks focused on particular XAI domains, and designed to provide a single integration endpoint for multiple algorithm implementations from across DARPA XAI. Each framework generalizes APIs for system-level data and control while providing a plugin interface for existing and future algorithm implementations. The XAITK project can be followed at: https://xaitk.org.
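
The plugin design mentioned above can be pictured with a small schematic: system-level code depends on an abstract interface, and individual algorithm implementations register against it. The names below are hypothetical and do not reflect the actual XAITK API.

```python
from abc import ABC, abstractmethod
import numpy as np

class SaliencyGenerator(ABC):
    """Hypothetical plugin interface: one integration point, many implementations."""
    @abstractmethod
    def generate(self, image: np.ndarray, predict) -> np.ndarray:
        """Return a per-pixel saliency map for `predict`'s output on `image`."""

class RandomBaselineSaliency(SaliencyGenerator):
    """A trivial implementation, standing in for a real algorithm plugin."""
    def generate(self, image, predict):
        rng = np.random.default_rng(0)
        return rng.random(image.shape[:2])

def run_plugin(plugin: SaliencyGenerator, image, predict):
    # System-level code depends only on the abstract interface,
    # so new algorithms can be dropped in without changes here.
    return plugin.generate(image, predict)
```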

Citations: 7
Explainable neural computation via stack neural module networks
Pub Date : 2021-10-16 DOI: 10.1002/ail2.39
Ronghang Hu, Jacob Andreas, Trevor Darrell, Kate Saenko

In complex inferential tasks like question answering, machine learning models must confront two challenges: the need to implement a compositional reasoning process, and, in many applications, the need for this reasoning process to be interpretable to assist users in both development and prediction. Existing models designed to produce interpretable traces of their decision-making process typically require these traces to be supervised at training time. In this paper, we present a novel neural modular approach that performs compositional reasoning by automatically inducing a desired subtask decomposition without relying on strong supervision. Our model allows linking different reasoning tasks through shared modules that handle common routines across tasks. Experiments show that the model is more interpretable to human evaluators compared to other state-of-the-art models: users can better understand the model's underlying reasoning procedure and predict when it will succeed or fail based on observing its intermediate outputs.
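
A key ingredient of the stack neural module network is a differentiable stack, so that the induced subtask decomposition remains end-to-end trainable without trace supervision. The sketch below shows soft push/pop over a memory tensor with a continuous pointer; it is a simplified illustration (the pointer shift here wraps around, unlike a real bounded stack) and not the released model code.

```python
import torch
import torch.nn.functional as F

def soft_push(mem, ptr, value):
    """Differentiably push `value`: advance the pointer one slot and write there."""
    ptr = torch.roll(ptr, shifts=1, dims=1)          # move pointer up the stack
    mem = mem * (1 - ptr.unsqueeze(-1)) + value.unsqueeze(1) * ptr.unsqueeze(-1)
    return mem, ptr

def soft_pop(mem, ptr):
    """Differentiably pop: read at the pointer, then move the pointer down."""
    value = (mem * ptr.unsqueeze(-1)).sum(dim=1)
    ptr = torch.roll(ptr, shifts=-1, dims=1)
    return value, ptr

# mem: (B, depth, dim) stack memory; ptr: (B, depth) soft one-hot pointer.
B, depth, dim = 2, 4, 8
mem = torch.zeros(B, depth, dim)
ptr = F.one_hot(torch.zeros(B, dtype=torch.long), depth).float()
mem, ptr = soft_push(mem, ptr, torch.randn(B, dim))
top, ptr = soft_pop(mem, ptr)
```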

Citations: 0
Abstraction, validation, and generalization for explainable artificial intelligence
Pub Date : 2021-09-02 DOI: 10.1002/ail2.37
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto

Neural network architectures are achieving superhuman performance on an expanding range of tasks. To effectively and safely deploy these systems, their decision-making must be understandable to a wide range of stakeholders. Methods to explain artificial intelligence (AI) have been proposed to answer this challenge, but a lack of theory impedes the development of systematic abstractions, which are necessary for cumulative knowledge gains. We propose Bayesian Teaching as a framework for unifying explainable AI (XAI) by integrating machine learning and human learning. Bayesian Teaching formalizes explanation as a communication act of an explainer to shift the beliefs of an explainee. This formalization decomposes a wide range of XAI methods into four components: (a) the target inference, (b) the explanation, (c) the explainee model, and (d) the explainer model. The abstraction afforded by Bayesian Teaching to decompose XAI methods elucidates the invariances among them. The decomposition of XAI systems enables modular validation, as each of the first three components listed can be tested semi-independently. This decomposition also promotes generalization through recombination of components from different XAI systems, which facilitates the generation of novel variants. These new variants need not be evaluated one by one provided that each component has been validated, leading to an exponential decrease in development time. Finally, by making the goal of explanation explicit, Bayesian Teaching helps developers to assess how suitable an XAI system is for its intended real-world use case. Thus, Bayesian Teaching provides a theoretical framework that encourages systematic, scientific investigation of XAI.
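
Concretely, Bayesian Teaching casts the explainer as choosing the explanation that most shifts the explainee's posterior toward the target inference. The toy sketch below makes that selection rule explicit with a made-up likelihood table; all numbers and names are illustrative, not the paper's formalism verbatim.

```python
import numpy as np

def explainee_posterior(prior, likelihood, explanation):
    """Explainee model: Bayesian update over hypotheses given an explanation."""
    post = prior * likelihood[:, explanation]
    return post / post.sum()

def select_explanation(prior, likelihood, target):
    """Explainer model: pick the explanation that maximizes the explainee's
    posterior belief in the target inference."""
    scores = [explainee_posterior(prior, likelihood, e)[target]
              for e in range(likelihood.shape[1])]
    return int(np.argmax(scores))

# 3 hypotheses x 4 candidate explanations; rows give P(explanation | hypothesis).
likelihood = np.array([[0.7, 0.1, 0.1, 0.1],
                       [0.2, 0.5, 0.2, 0.1],
                       [0.1, 0.2, 0.3, 0.4]])
prior = np.array([1/3, 1/3, 1/3])
best = select_explanation(prior, likelihood, target=0)   # -> 0
```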

Citations: 0
From “no clear winner” to an effective Explainable Artificial Intelligence process: An empirical journey
Pub Date : 2021-07-18 DOI: 10.1002/ail2.36
Jonathan Dodge, Andrew Anderson, Roli Khanna, Jed Irvine, Rupika Dikkala, Kin-Ho Lam, Delyar Tabatabai, Anita Ruangrotsakun, Zeyad Shureih, Minsuk Kahng, Alan Fern, Margaret Burnett

“In what circumstances would you want this AI to make decisions on your behalf?” We have been investigating how to enable a user of an Artificial Intelligence-powered system to answer questions like this through a series of empirical studies, a group of which we summarize here. We began the series by (a) comparing four explanation configurations of saliency explanations and/or reward explanations. From this study we learned that, although some configurations had significant strengths, no one configuration was a clear “winner.” This result led us to hypothesize that one reason Explainable AI (XAI) research has had low success rates in enabling users to create a coherent mental model is that the AI itself does not have a coherent model. This hypothesis led us to (b) build a model-based agent, to compare explaining it with explaining a model-free agent. Our results were encouraging, but we then realized that participants' cognitive energy was being sapped by having to create not only a mental model, but also a process by which to create that mental model. This realization led us to (c) create such a process (which we term After-Action Review for AI or “AAR/AI”) for them, integrate it into the explanation environment, and compare participants' success with and without AAR/AI scaffolding. Our AAR/AI studies' results showed that AAR/AI participants were more effective at assessing the AI than non-AAR/AI participants, with significantly better precision and significantly better recall at finding the AI's reasoning flaws.

Citations: 3
A practical approach for applying machine learning in the detection and classification of network devices used in building management
Pub Date : 2021-07-04 DOI: 10.1002/ail2.35
Maroun Touma, Shalisha Witherspoon, Shonda Witherspoon, Isabelle Crawford-Eng

With the increasing deployment of smart buildings and infrastructure, supervisory control and data acquisition (SCADA) devices and the underlying IT network have become essential elements for the proper operations of these highly complex systems. Of course, with the increase in automation and the proliferation of SCADA devices, the attack surface of critical infrastructure has correspondingly increased. Understanding device behaviors in near-real time, in terms of known and understood or potentially qualified activities versus unknown and potentially nefarious activities, is a key component of any security solution. In this paper, we investigate the challenges with building robust machine learning models to identify unknowns purely from network traffic both inside and outside firewalls, starting with missing or inconsistent labels across sites, feature engineering and learning, temporal dependencies and analysis, and training data quality (including small sample sizes) for both shallow and deep learning methods. To demonstrate these challenges and the capabilities we have developed, we focus on Building Automation and Control networks (BACnet) from a private commercial building system. Our results show that a “Model Zoo” built from binary classifiers for each device or behavior, combined with an ensemble classifier integrating information from all classifiers, provides a reliable methodology to identify unknown devices as well as to determine specific known devices when the device type is in the training set. The capability of the Model Zoo framework is shown to be directly linked to feature engineering and learning, and the effect of feature selection varies across both the binary and ensemble classifiers.
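
The “Model Zoo” idea described above can be sketched as a set of per-device binary classifiers whose maximum confidence decides between a known device type and “unknown.” The sketch below is a minimal illustration under that reading, with an arbitrary confidence threshold and illustrative names; it is not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class ModelZoo:
    """One-vs-rest zoo of binary device classifiers with an 'unknown' fallback."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.models = {}

    def fit(self, X, device_labels):
        labels = np.asarray(device_labels)
        for device in set(device_labels):
            y = (labels == device).astype(int)
            self.models[device] = LogisticRegression(max_iter=1000).fit(X, y)

    def predict(self, X):
        # Score every sample against every known device model.
        devices = list(self.models)
        stacked = np.vstack([self.models[d].predict_proba(X)[:, 1]
                             for d in devices])          # (n_devices, n_samples)
        best, conf = stacked.argmax(axis=0), stacked.max(axis=0)
        return [devices[i] if c >= self.threshold else "unknown"
                for i, c in zip(best, conf)]
```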

Citations: 0
Towards an affordable magnetomyography instrumentation and low model complexity approach for labour imminency prediction using a novel multiresolution analysis
Pub Date : 2021-06-26 DOI: 10.1002/ail2.34
Ejay Nsugbe, Ibrahim Sanusi

The ability to predict the onset of labour is an important tool in a clinical setting. Magnetomyography has shown promise in the area of labour imminency prediction, but its clinical application remains limited due to the high resource consumption associated with its large number of channels. In this study, five electrode channels, which account for 3.3% of the total, are used alongside a novel signal decomposition algorithm and low-complexity classifiers (logistic regression and linear SVM) to classify whether labour is due within 0 to 48 hours or after more than 48 hours. The results suggest that the parsimonious representation, comprising five electrode channels and the novel signal decomposition method alongside the candidate classifiers, could allow for greater affordability and hence clinical viability of the magnetomyography-based prediction model, which also carries a good degree of interpretability. The results showed around a 20% average improvement for the novel decomposition method across the various classification metrics considered for both the logistic regression and the support vector machine, alongside a reduced feature set.
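
The classification stage described above reduces to a small supervised pipeline: features from the five selected channels feeding a low-complexity classifier. The sketch below illustrates that shape with simple per-channel statistics standing in for the novel multiresolution decomposition; all names and features are placeholders, not the study's actual feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def channel_features(signals):
    """Placeholder for the multiresolution decomposition: here, simple
    per-channel summary statistics over the 5 selected MMG channels."""
    return np.hstack([signals.mean(axis=-1), signals.std(axis=-1),
                      np.abs(signals).max(axis=-1)])

# X_raw: (n_records, 5, n_samples) MMG recordings; y: 1 if labour within 48 h.
def train(X_raw, y):
    X = channel_features(X_raw)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return clf.fit(X, y)
```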

Citations: 0