Do Large Language Models Learn to Human-Like Learn?

Jesse Roberts
{"title":"Do Large Language Models Learn to Human-Like Learn?","authors":"Jesse Roberts","doi":"10.1609/aaaiss.v3i1.31287","DOIUrl":null,"url":null,"abstract":"Human-like learning refers to the learning done in the lifetime of the individual. However, the architecture of the human brain has been developed over millennia and represents a long process of evolutionary learning which could be viewed as a form of pre-training. Large language models (LLMs), after pre-training on large amounts of data, exhibit a form of learning referred to as in-context learning (ICL). Consistent with human-like learning, LLMs are able to use ICL to perform novel tasks with few examples and to interpret the examples through the lens of their prior experience. I examine the constraints which typify human-like learning and propose that LLMs may learn to exhibit human-like learning simply by training on human generated text.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"1 10","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the AAAI Symposium Series","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/aaaiss.v3i1.31287","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Human-like learning refers to the learning done in the lifetime of the individual. However, the architecture of the human brain has been developed over millennia and represents a long process of evolutionary learning which could be viewed as a form of pre-training. Large language models (LLMs), after pre-training on large amounts of data, exhibit a form of learning referred to as in-context learning (ICL). Consistent with human-like learning, LLMs are able to use ICL to perform novel tasks with few examples and to interpret the examples through the lens of their prior experience. I examine the constraints which typify human-like learning and propose that LLMs may learn to exhibit human-like learning simply by training on human generated text.
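To make the in-context learning setup the abstract describes concrete, the sketch below (illustrative, not from the paper) builds a minimal few-shot prompt for a toy antonym task. The prompt format, the `build_icl_prompt` helper, and the query word are all assumptions for illustration; no particular LLM API is assumed or called.

```python
# A minimal sketch (not from the paper) of few-shot in-context learning:
# the model receives a handful of input/output examples in its prompt and
# must infer the task -- here, a toy antonym mapping -- with no weight
# updates. Only the prompt is constructed; no model is invoked.

def build_icl_prompt(examples, query):
    """Format few-shot examples and a final query as one prompt string."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [("hot", "cold"), ("tall", "short"), ("fast", "slow")]
prompt = build_icl_prompt(examples, "light")
print(prompt)
```

A pre-trained LLM completing this prompt would typically answer "heavy" or "dark", resolving the ambiguity of "light" through its prior experience, which is the interpretive behavior the abstract attributes to ICL.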