Title: Scaling Laws for Economic Productivity: Experimental Evidence in LLM-Assisted Translation
Author: Ali Merali
Journal: arXiv - ECON - General Economics
Published: 2024-09-04
DOI: https://doi.org/arxiv-2409.02391
Citations: 0
Abstract
This paper derives "scaling laws" -- empirical relationships between the amount of training compute used for a Large Language Model (LLM) and its performance -- for economic outcomes. In a preregistered experiment, 300 professional translators completed 1,800 tasks with access to one of thirteen LLMs of differing training compute (or a control). Our results show that model scaling substantially raises productivity: for every 10x increase in model compute, translators completed tasks 12.3% faster, received 0.18 s.d. higher grades, and earned 16.1% more per minute (including bonus payments). Further, the gains from model scaling are much larger for lower-skilled workers, who see a 4x greater improvement in task completion speed. These results imply that further frontier model scaling -- currently estimated at a 4x increase per year -- may have significant economic implications.
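The abstract's headline numbers can be combined into a rough back-of-envelope extrapolation. The sketch below is illustrative only and not from the paper: it assumes the per-10x-compute effects compound multiplicatively in log-compute and remain stable beyond the experimental range, neither of which the abstract claims.

```python
import math

def annual_gain(per_decade_pct: float, compute_growth_per_year: float) -> float:
    """Extrapolate a per-10x-compute effect (in percent) to an annual rate,
    assuming the effect compounds multiplicatively in log10(compute).
    This compounding assumption is illustrative, not established by the paper."""
    decades_per_year = math.log10(compute_growth_per_year)  # 4x/yr ~= 0.602 decades
    return ((1 + per_decade_pct / 100) ** decades_per_year - 1) * 100

# Estimates quoted in the abstract: 12.3% faster task completion and
# 16.1% higher earnings per minute for each 10x increase in model compute,
# with frontier compute currently growing roughly 4x per year.
speed_gain = annual_gain(12.3, 4.0)     # roughly 7% faster per year
earnings_gain = annual_gain(16.1, 4.0)  # roughly 9% higher earnings/min per year
```

Under these assumptions, a 4x-per-year compute trajectory corresponds to only about 0.6 "decades" of compute annually, so the implied yearly productivity effects are smaller than the per-10x figures but still substantial.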