Addressing the notion of trust around ChatGPT in the high-stakes use case of insurance

Juliane Ressel, Michaele Völler, Finbarr Murphy, Martin Mullins
Technology in Society (Q1, Social Issues). Journal Article, published 2024-06-20. DOI: 10.1016/j.techsoc.2024.102644
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S0160791X24001921/pdfft?md5=e3ca72087c09bc695b525f008d640018&pid=1-s2.0-S0160791X24001921-main.pdf
The public discourse concerning the level of (dis)trust in ChatGPT and other applications based on large language models (LLMs) is loaded with generic, dread-risk terminology, while the heterogeneity of the relevant theoretical concepts and empirical measurements of trust further impedes in-depth analysis. A more nuanced understanding of the factors driving trust judgments is therefore essential to avoid unwarranted trust. In this commentary paper, we propose that examining the notion of trust in consumer-facing LLM-based systems across the insurance industry can lend greater specificity to this debate. The concept and role of trust are germane to this particular setting because the product is highly intangible and the transactions involve elevated levels of risk, complexity, and information asymmetry. Moreover, widespread use of LLMs in this sector is to be expected, given the vast array of text documents involved, particularly general policy conditions and claims protocols. Insurance as a practice is highly relevant to the welfare of citizens and has numerous spillover effects on wider public policy areas. We therefore argue that a domain-specific approach to good AI governance is essential to avoid negative externalities around financial inclusion. Indeed, as a constitutive element of trust, vulnerability is particularly challenging within this high-stakes set of transactions, and the adoption of LLMs adds to the socio-ethical risks. In light of this, our commentary provides a valuable baseline to support regulators and policymakers in unravelling the profound socioeconomic consequences that may arise from adopting consumer-facing LLMs in insurance.
Journal introduction:
Technology in Society is a global journal dedicated to fostering discourse at the crossroads of technological change and the social, economic, business, and philosophical transformation of our world. The journal aims to provide scholarly contributions that empower decision-makers to thoughtfully and intentionally navigate the decisions shaping this dynamic landscape. A common thread across these fields is the role of technology in society, influencing economic, political, and cultural dynamics. Scholarly work in Technology in Society delves into the social forces shaping technological decisions and the societal choices regarding technology use. This encompasses scholarly and theoretical approaches (history and philosophy of science and technology, technology forecasting, economic growth, and policy, ethics), applied approaches (business innovation, technology management, legal and engineering), and developmental perspectives (technology transfer, technology assessment, and economic development). Detailed information about the journal's aims and scope on specific topics can be found in Technology in Society Briefings, accessible via our Special Issues and Article Collections.