From manual to machine: assessing the efficacy of large language models in content analysis
Andrew Pilny, Kelly McAninch, Amanda Slone, Kelsey Moore
Communication Research Reports, vol. 61, no. 1
Published: 2024-03-12 (Journal Article)
DOI: 10.1080/08824096.2024.2327547
Impact Factor: 1.9, JCR Q2 (Communication)
Abstract: This study compares the performance of Large Language Models (LLMs) and human coders in predicting relational uncertainty from textual data. Employing various LLMs (gpt-4.0-turbo, gpt-3.5-turbo, Cl...
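Comparisons of this kind between LLM and human coders are typically quantified with a chance-corrected agreement statistic such as Cohen's kappa. The sketch below is illustrative only and is not the study's actual analysis or data: the `human` and `llm` label sequences are hypothetical binary codes (1 = message expresses high relational uncertainty).

```python
# Illustrative sketch: Cohen's kappa, a standard chance-corrected
# agreement statistic in content analysis, applied to hypothetical
# codes from a human coder and an LLM coder (not the study's data).
from collections import Counter


def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of items both coders labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)


# Hypothetical binary codes (1 = high relational uncertainty) for 10 messages.
human = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
llm = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(human, llm), 2))  # → 0.6
```

Here the coders agree on 8 of 10 items (observed = 0.8) while chance agreement from the balanced marginals is 0.5, giving kappa = 0.6, conventionally read as moderate-to-substantial agreement.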