Sarah A Fisher
Ethics and Information Technology, 26(4): 67. Epub 2024-10-04.
DOI: 10.1007/s10676-024-09802-5
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11452423/pdf/
Large language models and their big bullshit potential.
Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.
Journal introduction:
Ethics and Information Technology is a peer-reviewed journal dedicated to advancing the dialogue between moral philosophy and the field of information and communication technology (ICT). The journal aims to foster and promote reflection and analysis intended to make a constructive contribution to answering the ethical, social, and political questions associated with the adoption, use, and development of ICT. Also within the scope of the journal are conceptual analysis and discussion of ethical ICT issues arising in the context of technology assessment, cultural studies, public policy analysis and public administration, cognitive science, social and anthropological studies of technology, mass communication, and legal studies.