Surfing Information: The Challenge of Intelligent Decision-Making

IF 2.2 | Q3 (Computer Science, Cybernetics) | International Journal of Intelligent Computing and Cybernetics | Pub Date: 2023-01-01 | DOI: 10.34133/icomputing.0041
Chenyang Wu, Zongzhang Zhang
{"title":"Surfing Information: The Challenge of Intelligent Decision-Making","authors":"Chenyang Wu, Zongzhang Zhang","doi":"10.34133/icomputing.0041","DOIUrl":null,"url":null,"abstract":"Reinforcement learning (RL) is indispensable for building intelligent decision-making agents. However, current RL algorithms suffer from statistical and computational inefficiencies that render them useless in most real-world applications. We argue that high-value information in the real world is essential for intelligent decision-making; however, it is not addressed by most RL formalisms. Through a closer investigation of high-value information, it becomes evident that, to exploit high-value information, there is a need to formalize intelligent decision-making as bounded-optimal lifelong RL. Thus, the challenge of achieving intelligent decision-making is summarized as effectively surfing information, specifically regarding handling the non-IID (independent and identically distributed) information stream while operating with limited resources. This study discusses the design of an intelligent decision-making agent and examines its primary challenges, which are (a) online learning for non-IID data streams, (b) efficient reasoning with limited resources, and (c) the exploration–exploitation dilemma. We review relevant problems and research in the field of RL literature and conclude that current RL methods are insufficient to address these challenges. We propose that an agent capable of overcoming these challenges could effectively surf the information overload in the real world and achieve sample- and compute-efficient intelligent decision-making.","PeriodicalId":45291,"journal":{"name":"International Journal of Intelligent Computing and Cybernetics","volume":null,"pages":null},"PeriodicalIF":2.2000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Intelligent Computing and Cybernetics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.34133/icomputing.0041","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 0

Abstract

Reinforcement learning (RL) is indispensable for building intelligent decision-making agents. However, current RL algorithms suffer from statistical and computational inefficiencies that render them useless in most real-world applications. We argue that high-value information in the real world is essential for intelligent decision-making; however, it is not addressed by most RL formalisms. A closer investigation of high-value information makes it evident that, to exploit it, intelligent decision-making needs to be formalized as bounded-optimal lifelong RL. Thus, the challenge of achieving intelligent decision-making is summarized as effectively surfing information: handling a non-IID (independent and identically distributed) information stream while operating with limited resources. This study discusses the design of an intelligent decision-making agent and examines its primary challenges, which are (a) online learning for non-IID data streams, (b) efficient reasoning with limited resources, and (c) the exploration–exploitation dilemma. We review relevant problems and research in the RL literature and conclude that current RL methods are insufficient to address these challenges. We propose that an agent capable of overcoming these challenges could effectively surf the information overload of the real world and achieve sample- and compute-efficient intelligent decision-making.
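To make the third challenge listed in the abstract, the exploration–exploitation dilemma, concrete, the following is a minimal illustrative sketch that is not taken from the paper: an epsilon-greedy agent on a stationary multi-armed bandit. The function name, the Gaussian reward model, and the epsilon and step parameters are hypothetical choices for illustration; the paper's setting additionally involves non-IID information streams and resource bounds, which this toy example deliberately omits.

```python
# Illustrative sketch only (not from the paper): the exploration-exploitation
# dilemma in its simplest form, an epsilon-greedy agent on a stationary
# multi-armed bandit. Names and parameters are hypothetical.
import random

def epsilon_greedy_bandit(true_means, steps=1000, epsilon=0.1, seed=0):
    """Run epsilon-greedy on a Gaussian bandit and return the total reward."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # how often each arm was pulled
    estimates = [0.0] * n_arms     # running mean reward per arm
    total_reward = 0.0
    for _ in range(steps):
        # Explore with probability epsilon, otherwise exploit the best estimate.
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[arm], 1.0)
        counts[arm] += 1
        # Incremental mean update: constant memory and compute per step.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total_reward += reward
    return total_reward

if __name__ == "__main__":
    print(epsilon_greedy_bandit([0.1, 0.5, 0.9]))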