{"title":"对人工智能聊天机器人的信任:系统回顾","authors":"Sheryl Wei Ting Ng , Renwen Zhang","doi":"10.1016/j.tele.2025.102240","DOIUrl":null,"url":null,"abstract":"<div><div>Advancements in artificial intelligence (AI) have enabled increasingly natural and human-like interactions with conversational agents (chatbots). However, the processes and outcomes of trust in AI chatbots remain underexplored. This study provides a systematic review of how trust in AI chatbots is defined, operationalised, and studied, synthesizing factors influencing trust development and its outcomes. An analysis of 40 articles revealed notable variations and inconsistencies in trust conceptualisations and operationalisations. Predictors of trust are categorized into five groups: user, machine, interaction, social, and context-related factors. Trust in AI chatbots leads to diverse outcomes that span affective, relational, behavioural, cognitive, and psychological domains. The review underscores the need for longitudinal studies to better understand the dynamics and boundary conditions of trust development. These findings offer valuable insights for advancing human–machine communication (HMC) research and informing the design of trustworthy AI systems.</div></div>","PeriodicalId":48257,"journal":{"name":"Telematics and Informatics","volume":"97 ","pages":"Article 102240"},"PeriodicalIF":7.6000,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Trust in AI chatbots: A systematic review\",\"authors\":\"Sheryl Wei Ting Ng , Renwen Zhang\",\"doi\":\"10.1016/j.tele.2025.102240\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Advancements in artificial intelligence (AI) have enabled increasingly natural and human-like interactions with conversational agents (chatbots). However, the processes and outcomes of trust in AI chatbots remain underexplored. This study provides a systematic review of how trust in AI chatbots is defined, operationalised, and studied, synthesizing factors influencing trust development and its outcomes. An analysis of 40 articles revealed notable variations and inconsistencies in trust conceptualisations and operationalisations. Predictors of trust are categorized into five groups: user, machine, interaction, social, and context-related factors. Trust in AI chatbots leads to diverse outcomes that span affective, relational, behavioural, cognitive, and psychological domains. The review underscores the need for longitudinal studies to better understand the dynamics and boundary conditions of trust development. 
These findings offer valuable insights for advancing human–machine communication (HMC) research and informing the design of trustworthy AI systems.</div></div>\",\"PeriodicalId\":48257,\"journal\":{\"name\":\"Telematics and Informatics\",\"volume\":\"97 \",\"pages\":\"Article 102240\"},\"PeriodicalIF\":7.6000,\"publicationDate\":\"2025-02-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Telematics and Informatics\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0736585325000024\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Telematics and Informatics","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0736585325000024","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
Advancements in artificial intelligence (AI) have enabled increasingly natural and human-like interactions with conversational agents (chatbots). However, the processes and outcomes of trust in AI chatbots remain underexplored. This study provides a systematic review of how trust in AI chatbots is defined, operationalised, and studied, synthesizing factors influencing trust development and its outcomes. An analysis of 40 articles revealed notable variations and inconsistencies in trust conceptualisations and operationalisations. Predictors of trust are categorized into five groups: user, machine, interaction, social, and context-related factors. Trust in AI chatbots leads to diverse outcomes that span affective, relational, behavioural, cognitive, and psychological domains. The review underscores the need for longitudinal studies to better understand the dynamics and boundary conditions of trust development. These findings offer valuable insights for advancing human–machine communication (HMC) research and informing the design of trustworthy AI systems.
Journal introduction:
Telematics and Informatics is an interdisciplinary journal that publishes cutting-edge theoretical and methodological research exploring the social, economic, geographic, political, and cultural impacts of digital technologies. It covers various application areas, such as smart cities, sensors, information fusion, digital society, IoT, cyber-physical technologies, privacy, knowledge management, distributed work, emergency response, mobile communications, health informatics, social media's psychosocial effects, ICT for sustainable development, blockchain, e-commerce, and e-government.