What's a good prediction? Challenges in evaluating an agent's knowledge.

IF 1.2 · Computer Science, Tier 4 · Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Adaptive Behavior · Pub Date: 2023-06-01 · DOI: 10.1177/10597123221095880
Alex Kearney, Anna J Koop, Patrick M Pilarski
{"title":"什么是好的预测?评估代理知识的挑战。","authors":"Alex Kearney,&nbsp;Anna J Koop,&nbsp;Patrick M Pilarski","doi":"10.1177/10597123221095880","DOIUrl":null,"url":null,"abstract":"<p><p>Constructing general knowledge by learning task-independent models of the world can help agents solve challenging problems. However, both constructing and evaluating such models remain an open challenge. The most common approaches to evaluating models is to assess their accuracy with respect to observable values. However, the prevailing reliance on estimator accuracy as a proxy for the usefulness of the knowledge has the potential to lead us astray. We demonstrate the conflict between accuracy and usefulness through a series of illustrative examples including both a thought experiment and an empirical example in Minecraft, using the General Value Function framework (GVF). Having identified challenges in assessing an agent's knowledge, we propose an alternate evaluation approach that arises naturally in the online continual learning setting: we recommend evaluation by examining internal learning processes, specifically the relevance of a GVF's features to the prediction task at hand. This paper contributes a first look into evaluation of predictions through their use, an integral component of predictive knowledge which is as of yet unexplored.</p>","PeriodicalId":55552,"journal":{"name":"Adaptive Behavior","volume":"31 3","pages":"197-212"},"PeriodicalIF":1.2000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10240643/pdf/","citationCount":"4","resultStr":"{\"title\":\"What's a good prediction? Challenges in evaluating an agent's knowledge.\",\"authors\":\"Alex Kearney,&nbsp;Anna J Koop,&nbsp;Patrick M Pilarski\",\"doi\":\"10.1177/10597123221095880\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Constructing general knowledge by learning task-independent models of the world can help agents solve challenging problems. However, both constructing and evaluating such models remain an open challenge. The most common approaches to evaluating models is to assess their accuracy with respect to observable values. However, the prevailing reliance on estimator accuracy as a proxy for the usefulness of the knowledge has the potential to lead us astray. We demonstrate the conflict between accuracy and usefulness through a series of illustrative examples including both a thought experiment and an empirical example in Minecraft, using the General Value Function framework (GVF). Having identified challenges in assessing an agent's knowledge, we propose an alternate evaluation approach that arises naturally in the online continual learning setting: we recommend evaluation by examining internal learning processes, specifically the relevance of a GVF's features to the prediction task at hand. 
This paper contributes a first look into evaluation of predictions through their use, an integral component of predictive knowledge which is as of yet unexplored.</p>\",\"PeriodicalId\":55552,\"journal\":{\"name\":\"Adaptive Behavior\",\"volume\":\"31 3\",\"pages\":\"197-212\"},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2023-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10240643/pdf/\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Adaptive Behavior\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1177/10597123221095880\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Adaptive Behavior","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1177/10597123221095880","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 4

Abstract

Constructing general knowledge by learning task-independent models of the world can help agents solve challenging problems. However, both constructing and evaluating such models remain an open challenge. The most common approach to evaluating models is to assess their accuracy with respect to observable values. However, the prevailing reliance on estimator accuracy as a proxy for the usefulness of the knowledge has the potential to lead us astray. We demonstrate the conflict between accuracy and usefulness through a series of illustrative examples, including both a thought experiment and an empirical example in Minecraft, using the General Value Function (GVF) framework. Having identified challenges in assessing an agent's knowledge, we propose an alternate evaluation approach that arises naturally in the online continual learning setting: we recommend evaluation by examining internal learning processes, specifically the relevance of a GVF's features to the prediction task at hand. This paper contributes a first look into evaluation of predictions through their use, an integral component of predictive knowledge which is as of yet unexplored.
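For context, a prediction in the General Value Function (GVF) framework referenced above is specified by a cumulant (the signal to be accumulated), a policy, and a continuation (discount) function, and is commonly learned online with temporal-difference methods over a feature vector. The following is a minimal sketch of that general setup under assumed toy dynamics; it is not the authors' implementation, and the environment, the one-hot feature mapping, and names such as `cumulant`, `gamma`, and `step_size` are illustrative assumptions only.

```python
import numpy as np

# Minimal GVF sketch (illustrative assumptions only, not the paper's code):
# a General Value Function predicts the discounted sum of a cumulant signal
# and is learned online with linear TD(0) over a feature vector.

rng = np.random.default_rng(0)

n_features = 8     # assumed size of the feature vector
step_size = 0.1    # assumed TD step size
gamma = 0.9        # assumed constant continuation (discount) function

w = np.zeros(n_features)  # GVF weights; the prediction is w @ x


def features(obs: int) -> np.ndarray:
    """Assumed feature mapping: one-hot encoding of a discrete observation."""
    x = np.zeros(n_features)
    x[obs % n_features] = 1.0
    return x


obs = 0
x = features(obs)
for _ in range(10_000):
    # Assumed toy dynamics: the observation random-walks over 8 states,
    # and the cumulant is 1 whenever state 0 is visited.
    obs = int((obs + rng.integers(-1, 2)) % n_features)
    x_next = features(obs)
    cumulant = 1.0 if obs == 0 else 0.0

    # TD(0) update toward the discounted future sum of the cumulant.
    td_error = cumulant + gamma * (w @ x_next) - (w @ x)
    w += step_size * td_error * x
    x = x_next

print("learned per-state predictions:", np.round(w, 2))
```

In terms of the paper's argument, low prediction error in such a learner need not mean the prediction is useful; the authors instead suggest examining internal learning processes, such as how relevant individual features are to the prediction task at hand.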


Source journal
Adaptive Behavior (Engineering & Technology: Computer Science, Artificial Intelligence)
CiteScore: 4.30
Self-citation rate: 18.80%
Articles published per year: 34
Review time: >12 weeks
Journal description: _Adaptive Behavior_ publishes articles on adaptive behaviour in living organisms and autonomous artificial systems. The official journal of the _International Society of Adaptive Behavior_, _Adaptive Behavior_ addresses topics such as perception and motor control, embodied cognition, learning and evolution, neural mechanisms, artificial intelligence, behavioral sequences, motivation and emotion, characterization of environments, decision making, collective and social behavior, navigation, foraging, communication and signalling. Print ISSN: 1059-7123
Latest articles from this journal
Environmental complexity, cognition, and plant stress physiology
A model of how hierarchical representations constructed in the hippocampus are used to navigate through space
Mechanical Problem Solving in Goffin's Cockatoos—Towards Modeling Complex Behavior
Coupling First-Person Cognitive Research With Neurophilosophy and Enactivism: An Outline of Arguments
The origin and function of external representations