A Comparison of Large Language Model and Human Performance on Random Number Generation Tasks

Rachel M. Harrison

arXiv:2408.09656 · arXiv - QuanBio - Neurons and Cognition · Published 2024-08-19 · Citations: 0

Abstract

Random Number Generation Tasks (RNGTs) are used in psychology for examining how humans generate sequences devoid of predictable patterns. By adapting an existing human RNGT for an LLM-compatible environment, this preliminary study tests whether ChatGPT-3.5, a large language model (LLM) trained on human-generated text, exhibits human-like cognitive biases when generating random number sequences. Initial findings indicate that ChatGPT-3.5 more effectively avoids repetitive and sequential patterns compared to humans, with notably lower repeat frequencies and adjacent number frequencies. Continued research into different models, parameters, and prompting methodologies will deepen our understanding of how LLMs can more closely mimic human random generation behaviors, while also broadening their applications in cognitive and behavioral science research.
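The two headline measures in the abstract, repeat frequency and adjacent number frequency, can be computed directly from any generated sequence. The sketch below illustrates one plausible way to do this in Python; the function names, the exact metric definitions, and the 1-10 number range are assumptions for illustration, not the authors' published code or methodology.

```python
def repeat_frequency(seq):
    """Proportion of consecutive pairs where a number immediately repeats the previous one."""
    if len(seq) < 2:
        return 0.0
    repeats = sum(1 for a, b in zip(seq, seq[1:]) if a == b)
    return repeats / (len(seq) - 1)


def adjacent_number_frequency(seq):
    """Proportion of consecutive pairs that differ by exactly 1 (ascending or descending)."""
    if len(seq) < 2:
        return 0.0
    adjacent = sum(1 for a, b in zip(seq, seq[1:]) if abs(a - b) == 1)
    return adjacent / (len(seq) - 1)


# Toy example: a short sequence in a 1-10 range (the range used in the study is an assumption here).
sequence = [3, 7, 7, 2, 9, 10, 4, 1, 6, 5]
print(f"Repeat frequency: {repeat_frequency(sequence):.2f}")
print(f"Adjacent number frequency: {adjacent_number_frequency(sequence):.2f}")
```

Lower values on both metrics indicate fewer immediate repeats and fewer stepwise runs, which is the pattern the study reports for ChatGPT-3.5 relative to human participants.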