{"title":"随机数生成任务中大型语言模型与人类表现的比较","authors":"Rachel M. Harrison","doi":"arxiv-2408.09656","DOIUrl":null,"url":null,"abstract":"Random Number Generation Tasks (RNGTs) are used in psychology for examining\nhow humans generate sequences devoid of predictable patterns. By adapting an\nexisting human RNGT for an LLM-compatible environment, this preliminary study\ntests whether ChatGPT-3.5, a large language model (LLM) trained on\nhuman-generated text, exhibits human-like cognitive biases when generating\nrandom number sequences. Initial findings indicate that ChatGPT-3.5 more\neffectively avoids repetitive and sequential patterns compared to humans, with\nnotably lower repeat frequencies and adjacent number frequencies. Continued\nresearch into different models, parameters, and prompting methodologies will\ndeepen our understanding of how LLMs can more closely mimic human random\ngeneration behaviors, while also broadening their applications in cognitive and\nbehavioral science research.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"9 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Comparison of Large Language Model and Human Performance on Random Number Generation Tasks\",\"authors\":\"Rachel M. Harrison\",\"doi\":\"arxiv-2408.09656\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Random Number Generation Tasks (RNGTs) are used in psychology for examining\\nhow humans generate sequences devoid of predictable patterns. By adapting an\\nexisting human RNGT for an LLM-compatible environment, this preliminary study\\ntests whether ChatGPT-3.5, a large language model (LLM) trained on\\nhuman-generated text, exhibits human-like cognitive biases when generating\\nrandom number sequences. Initial findings indicate that ChatGPT-3.5 more\\neffectively avoids repetitive and sequential patterns compared to humans, with\\nnotably lower repeat frequencies and adjacent number frequencies. Continued\\nresearch into different models, parameters, and prompting methodologies will\\ndeepen our understanding of how LLMs can more closely mimic human random\\ngeneration behaviors, while also broadening their applications in cognitive and\\nbehavioral science research.\",\"PeriodicalId\":501517,\"journal\":{\"name\":\"arXiv - QuanBio - Neurons and Cognition\",\"volume\":\"9 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - QuanBio - Neurons and Cognition\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.09656\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Neurons and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.09656","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Comparison of Large Language Model and Human Performance on Random Number Generation Tasks
Random Number Generation Tasks (RNGTs) are used in psychology to examine how humans generate sequences devoid of predictable patterns. By adapting an existing human RNGT for an LLM-compatible environment, this preliminary study tests whether ChatGPT-3.5, a large language model (LLM) trained on human-generated text, exhibits human-like cognitive biases when generating random number sequences. Initial findings indicate that ChatGPT-3.5 avoids repetitive and sequential patterns more effectively than humans, showing notably lower repeat frequencies and adjacent number frequencies. Continued research into different models, parameters, and prompting methodologies will deepen our understanding of how LLMs can more closely mimic human random generation behaviors, while also broadening their applications in cognitive and behavioral science research.
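
For illustration, the sketch below shows one plausible way to compute the two sequence metrics the abstract mentions, repeat frequency and adjacent number frequency. The abstract does not give the paper's exact definitions, number range, or code, so the function names, the sample sequence, and the pairwise formulas here are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): plausible definitions of
# "repeat frequency" and "adjacent number frequency" for a generated sequence.
# The exact metric definitions and the number range are assumptions.
from typing import Sequence


def repeat_frequency(seq: Sequence[int]) -> float:
    """Proportion of consecutive pairs in which the same number is repeated."""
    pairs = list(zip(seq, seq[1:]))
    if not pairs:
        return 0.0
    return sum(a == b for a, b in pairs) / len(pairs)


def adjacent_number_frequency(seq: Sequence[int]) -> float:
    """Proportion of consecutive pairs that differ by exactly 1 (e.g. 4 followed by 5)."""
    pairs = list(zip(seq, seq[1:]))
    if not pairs:
        return 0.0
    return sum(abs(a - b) == 1 for a, b in pairs) / len(pairs)


if __name__ == "__main__":
    # A hypothetical sequence such as a model or a human participant might produce.
    sample = [7, 2, 9, 9, 4, 5, 1, 8, 3, 6]
    print(f"repeat frequency:          {repeat_frequency(sample):.2f}")
    print(f"adjacent number frequency: {adjacent_number_frequency(sample):.2f}")
```

Under these assumed definitions, a lower value on either metric indicates fewer of the stereotyped patterns (immediate repeats, runs like 4-5-6) that human random-generation studies typically flag as departures from randomness.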