Title: Risk communication and large language models
Authors: Daniel Sledge, Herschel F. Thomas
Journal: Risk, Hazards & Crisis in Public Policy (Q3, Public Administration; Impact Factor 1.9)
DOI: 10.1002/rhc3.12303
Publication date: 2024-05-03
Publication type: Journal Article
Abstract:
The widespread embrace of Large Language Models (LLMs) integrated with chatbot interfaces, such as ChatGPT, represents a potentially critical moment in the development of risk communication and management. In this article, we consider the implications of the current wave of LLM-based chat programs for risk communication. We examine ChatGPT-generated responses to 24 different hazard situations. We compare these responses to guidelines published for public consumption on the US Department of Homeland Security's Ready.gov website. We find that, although ChatGPT did not generate false or misleading responses, its responses were typically less than optimal in their similarity to federal government guidance. While delivered in an authoritative tone, these responses at times omitted important information and contained points of emphasis that differed substantially from those on Ready.gov. Moving forward, it is critical that researchers and public officials both seek to harness the power of LLMs to inform the public and acknowledge the challenges posed by a potential shift in information flows away from public officials and experts and towards individuals.
Journal introduction:
Scholarship on risk, hazards, and crises (emergencies, disasters, or public policy and organizational crises) has developed into mature and distinct fields of inquiry. Risk, Hazards & Crisis in Public Policy (RHCPP) addresses the governance implications of the important questions raised for these fields. The relationships between risk, hazards, and crisis raise fundamental questions with broad social science and policy implications. During unstable situations of acute or chronic danger and substantial uncertainty (i.e., a crisis), important and deeply rooted societal institutions, norms, and values come into play. The purpose of RHCPP is to provide a forum for research and commentary that examines how societies understand and address risk, hazards, and crises, how public policies do and should address these concerns, and to what effect. The journal is explicitly designed to encourage a broad range of perspectives by integrating work from a variety of disciplines. It looks at social science theory and policy design across the spectrum of risks and crises, including natural and technological hazards, public health crises, terrorism, and societal and environmental disasters. Papers analyze the ways societies deal with both unpredictable and predictable events as public policy questions, covering topics such as crisis governance, loss and liability, emergency response, agenda setting, and the social and cultural contexts in which hazards, risks, and crises are perceived and defined. Risk, Hazards & Crisis in Public Policy invites dialogue and is open to new approaches. We seek scholarly work that combines academic quality with practical relevance, and we especially welcome manuscripts on the governance of risk and crises.