Reframing explanation as an interactive medium: The EQUAS (Explainable QUestion Answering System) project

Applied AI Letters, Pub Date: 2021-11-30, DOI: 10.1002/ail2.60
William Ferguson, Dhruv Batra, Raymond Mooney, Devi Parikh, Antonio Torralba, David Bau, David Diller, Josh Fasching, Jaden Fiotto-Kaufman, Yash Goyal, Jeff Miller, Kerry Moffitt, Alex Montes de Oca, Ramprasaath R. Selvaraju, Ayush Shrivastava, Jialin Wu, Stefan Lee
{"title":"将解释作为一种互动媒介:EQUAS(可解释问答系统)项目","authors":"William Ferguson,&nbsp;Dhruv Batra,&nbsp;Raymond Mooney,&nbsp;Devi Parikh,&nbsp;Antonio Torralba,&nbsp;David Bau,&nbsp;David Diller,&nbsp;Josh Fasching,&nbsp;Jaden Fiotto-Kaufman,&nbsp;Yash Goyal,&nbsp;Jeff Miller,&nbsp;Kerry Moffitt,&nbsp;Alex Montes de Oca,&nbsp;Ramprasaath R. Selvaraju,&nbsp;Ayush Shrivastava,&nbsp;Jialin Wu,&nbsp;Stefan Lee","doi":"10.1002/ail2.60","DOIUrl":null,"url":null,"abstract":"<p>This letter is a retrospective analysis of our team's research for the Defense Advanced Research Projects Agency Explainable Artificial Intelligence project. Our initial approach was to use salience maps, English sentences, and lists of feature names to explain the behavior of deep-learning-based discriminative systems, with particular focus on visual question answering systems. We found that presenting static explanations along with answers led to limited positive effects. By exploring various combinations of machine and human explanation production and consumption, we evolved a notion of explanation as an interactive process that takes place usually between humans and artificial intelligence systems but sometimes within the software system. We realized that by interacting via explanations people could task and adapt machine learning (ML) agents. We added affordances for editing explanations and modified the ML system to act in accordance with the edits to produce an interpretable interface to the agent. Through this interface, editing an explanation can adapt a system's performance to new, modified purposes. This deep tasking, wherein the agent knows its objective and the explanation for that objective, will be critical to enable higher levels of autonomy.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.60","citationCount":"0","resultStr":"{\"title\":\"Reframing explanation as an interactive medium: The EQUAS (Explainable QUestion Answering System) project\",\"authors\":\"William Ferguson,&nbsp;Dhruv Batra,&nbsp;Raymond Mooney,&nbsp;Devi Parikh,&nbsp;Antonio Torralba,&nbsp;David Bau,&nbsp;David Diller,&nbsp;Josh Fasching,&nbsp;Jaden Fiotto-Kaufman,&nbsp;Yash Goyal,&nbsp;Jeff Miller,&nbsp;Kerry Moffitt,&nbsp;Alex Montes de Oca,&nbsp;Ramprasaath R. Selvaraju,&nbsp;Ayush Shrivastava,&nbsp;Jialin Wu,&nbsp;Stefan Lee\",\"doi\":\"10.1002/ail2.60\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>This letter is a retrospective analysis of our team's research for the Defense Advanced Research Projects Agency Explainable Artificial Intelligence project. Our initial approach was to use salience maps, English sentences, and lists of feature names to explain the behavior of deep-learning-based discriminative systems, with particular focus on visual question answering systems. We found that presenting static explanations along with answers led to limited positive effects. By exploring various combinations of machine and human explanation production and consumption, we evolved a notion of explanation as an interactive process that takes place usually between humans and artificial intelligence systems but sometimes within the software system. We realized that by interacting via explanations people could task and adapt machine learning (ML) agents. 
We added affordances for editing explanations and modified the ML system to act in accordance with the edits to produce an interpretable interface to the agent. Through this interface, editing an explanation can adapt a system's performance to new, modified purposes. This deep tasking, wherein the agent knows its objective and the explanation for that objective, will be critical to enable higher levels of autonomy.</p>\",\"PeriodicalId\":72253,\"journal\":{\"name\":\"Applied AI letters\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.60\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied AI letters\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ail2.60\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied AI letters","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ail2.60","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This letter is a retrospective analysis of our team's research for the Defense Advanced Research Projects Agency Explainable Artificial Intelligence project. Our initial approach was to use salience maps, English sentences, and lists of feature names to explain the behavior of deep-learning-based discriminative systems, with particular focus on visual question answering systems. We found that presenting static explanations along with answers led to limited positive effects. By exploring various combinations of machine and human explanation production and consumption, we evolved a notion of explanation as an interactive process that takes place usually between humans and artificial intelligence systems but sometimes within the software system. We realized that by interacting via explanations people could task and adapt machine learning (ML) agents. We added affordances for editing explanations and modified the ML system to act in accordance with the edits to produce an interpretable interface to the agent. Through this interface, editing an explanation can adapt a system's performance to new, modified purposes. This deep tasking, wherein the agent knows its objective and the explanation for that objective, will be critical to enable higher levels of autonomy.
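For concreteness, below is a minimal sketch (in Python/PyTorch, not the EQUAS codebase) of the kind of salience-map explanation the abstract mentions: a Grad-CAM-style map showing which image regions support a discriminative model's output. The choice of model, hooked layer, and function names are illustrative assumptions.

import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative stand-in for the discriminative system being explained.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations, gradients = {}, {}
# Hooking the last convolutional block is an assumption; the systems
# studied in EQUAS may expose different layers.
model.layer4.register_forward_hook(
    lambda m, i, o: activations.update(feat=o))
model.layer4.register_full_backward_hook(
    lambda m, gi, go: gradients.update(feat=go[0]))

def salience_map(image, class_idx):
    """Grad-CAM-style salience map for the evidence behind class_idx."""
    logits = model(image.unsqueeze(0))
    model.zero_grad()
    logits[0, class_idx].backward()
    # Weight each feature channel by its mean gradient, combine, rectify.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:],
                        mode="bilinear", align_corners=False)
    return (cam / cam.max().clamp(min=1e-8))[0, 0]  # (H, W) in [0, 1]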

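The letter's central move, tasking the agent by editing its explanation, might look like the hypothetical sketch below: a user strikes regions out of the salience map, and the system suppresses that evidence before answering again. The function and parameter names are assumptions for illustration, not the actual EQUAS interface.

def apply_explanation_edit(image, salience, edit_mask, threshold=0.5):
    """Suppress image evidence the user has struck out of the explanation.

    image:     (C, H, W) input tensor
    salience:  (H, W) machine-produced salience map in [0, 1]
    edit_mask: (H, W) user edit; 0 marks regions to ignore, 1 keeps them
    """
    # Evidence the model relied on but the user vetoed in the edited map.
    vetoed = (salience > threshold) & (edit_mask < 0.5)
    edited = image.clone()
    edited[:, vetoed] = 0.0  # zero out vetoed pixels in every channel
    return edited

# Usage: re-run the system on the edited input. If the answer changes, the
# struck-out region was load-bearing; iterating this loop adapts the
# system's behavior to a new, modified purpose, as the letter describes.
# new_logits = model(apply_explanation_edit(img, cam, mask).unsqueeze(0))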