Neural response time analysis: Explainable artificial intelligence using only a stopwatch

Applied AI Letters · Pub Date: 2021-11-06 · DOI: 10.1002/ail2.48
J. Eric T. Taylor, Shashank Shekhar, Graham W. Taylor
{"title":"神经反应时间分析:可解释的人工智能只用一个秒表","authors":"J. Eric T. Taylor,&nbsp;Shashank Shekhar,&nbsp;Graham W. Taylor","doi":"10.1002/ail2.48","DOIUrl":null,"url":null,"abstract":"<p>How would you describe the features that a deep learning model composes if you were restricted to measuring observable behaviours? Explainable artificial intelligence (XAI) methods rely on privileged access to model architecture and parameters that is not always feasible for most users, practitioners and regulators. Inspired by cognitive psychology research on humans, we present a case for measuring response times (RTs) of a forward pass using only the system clock as a technique for XAI. Our method applies to the growing class of models that use input-adaptive dynamic inference and we also extend our approach to standard models that are converted to dynamic inference post hoc. The experimental logic is simple: If the researcher can contrive a stimulus set where variability among input features is tightly controlled, differences in RT for those inputs can be attributed to the way the model composes those features. First, we show that RT is sensitive to difficult, complex features by comparing RTs from ObjectNet and ImageNet. Next, we make specific a priori predictions about RT for abstract features present in the SCEGRAM data set, where object recognition in humans depends on complex intrascene object-object relationships. Finally, we show that RT profiles bear specificity for class identity and therefore the features that define classes. These results cast light on the model's feature space without opening the black box.</p>","PeriodicalId":72253,"journal":{"name":"Applied AI letters","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.48","citationCount":"0","resultStr":"{\"title\":\"Neural response time analysis: Explainable artificial intelligence using only a stopwatch\",\"authors\":\"J. Eric T. Taylor,&nbsp;Shashank Shekhar,&nbsp;Graham W. Taylor\",\"doi\":\"10.1002/ail2.48\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>How would you describe the features that a deep learning model composes if you were restricted to measuring observable behaviours? Explainable artificial intelligence (XAI) methods rely on privileged access to model architecture and parameters that is not always feasible for most users, practitioners and regulators. Inspired by cognitive psychology research on humans, we present a case for measuring response times (RTs) of a forward pass using only the system clock as a technique for XAI. Our method applies to the growing class of models that use input-adaptive dynamic inference and we also extend our approach to standard models that are converted to dynamic inference post hoc. The experimental logic is simple: If the researcher can contrive a stimulus set where variability among input features is tightly controlled, differences in RT for those inputs can be attributed to the way the model composes those features. First, we show that RT is sensitive to difficult, complex features by comparing RTs from ObjectNet and ImageNet. Next, we make specific a priori predictions about RT for abstract features present in the SCEGRAM data set, where object recognition in humans depends on complex intrascene object-object relationships. 
Finally, we show that RT profiles bear specificity for class identity and therefore the features that define classes. These results cast light on the model's feature space without opening the black box.</p>\",\"PeriodicalId\":72253,\"journal\":{\"name\":\"Applied AI letters\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-11-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ail2.48\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied AI letters\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ail2.48\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied AI letters","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ail2.48","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

How would you describe the features that a deep learning model composes if you were restricted to measuring observable behaviours? Explainable artificial intelligence (XAI) methods rely on privileged access to model architecture and parameters that is not always feasible for most users, practitioners and regulators. Inspired by cognitive psychology research on humans, we present a case for measuring response times (RTs) of a forward pass using only the system clock as a technique for XAI. Our method applies to the growing class of models that use input-adaptive dynamic inference and we also extend our approach to standard models that are converted to dynamic inference post hoc. The experimental logic is simple: If the researcher can contrive a stimulus set where variability among input features is tightly controlled, differences in RT for those inputs can be attributed to the way the model composes those features. First, we show that RT is sensitive to difficult, complex features by comparing RTs from ObjectNet and ImageNet. Next, we make specific a priori predictions about RT for abstract features present in the SCEGRAM data set, where object recognition in humans depends on complex intrascene object-object relationships. Finally, we show that RT profiles bear specificity for class identity and therefore the features that define classes. These results cast light on the model's feature space without opening the black box.
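The measurement itself needs nothing beyond the system clock wrapped around a forward pass. The sketch below is illustrative only and not the authors' implementation: it pairs a hypothetical input-adaptive early-exit network (the class of dynamic-inference model the method targets) with a simple wall-clock timing routine. The names `EarlyExitNet` and `response_time`, and the confidence-threshold exit rule, are assumptions introduced for illustration.

```python
import time
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy dynamic-inference network (hypothetical): returns from the first
    internal classifier whose softmax confidence clears `threshold`, so
    'easier' inputs finish the forward pass sooner."""

    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3 if i == 0 else 16, 16, 3, padding=1),
                          nn.ReLU())
            for i in range(4)
        ])
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(16, num_classes))
            for _ in range(4)
        ])
        self.threshold = threshold

    def forward(self, x):
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            logits = exit_head(x)
            if logits.softmax(-1).max() >= self.threshold:
                return logits  # early exit: a cheap forward pass
        return logits          # final exit: the full-depth pass

def response_time(model, x, warmup=3, reps=11):
    """Median wall-clock duration of a forward pass, measured with
    only the system clock, as the RT analysis requires."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):              # amortize one-time setup costs
            model(x)
        times = []
        for _ in range(reps):
            t0 = time.perf_counter()
            model(x)
            times.append(time.perf_counter() - t0)
    return sorted(times)[len(times) // 2]    # median is robust to OS jitter

model = EarlyExitNet()
easy = torch.randn(1, 3, 32, 32)  # stand-ins for a contrived stimulus set
hard = torch.randn(1, 3, 32, 32)
print(f"RT(easy): {response_time(model, easy):.6f}s")
print(f"RT(hard): {response_time(model, hard):.6f}s")
```

The experimental logic then follows the abstract: with a stimulus set whose input features are tightly controlled, any systematic difference between the two printed RTs is attributable to how the model composes those features. Warmup passes and a median over repetitions are one reasonable way to keep scheduler noise from swamping the per-input signal.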
