{"title":"What drives AI-based risk information-seeking intent? Insufficiency of risk information versus (Un)certainty of AI chatbots","authors":"Soo Jung Hong","doi":"10.1016/j.chb.2024.108460","DOIUrl":null,"url":null,"abstract":"<div><div>This study explored the factors influencing the U.S. public's intent to seek risk information via AI-powered channels, such as ChatGPT. It focused on cognitive and affective pathways that lead to uncertainty about both risk information and AI chatbots in the context of climate change risk. We conducted a comparative analysis to discern the impacts of risk perceptions related to climate change and AI-caused privacy risks on public uncertainty and decision-making regarding the use of AI chatbots. Specifically, we assessed how different risk-related perceptions and emotions contribute to subsequent uncertainty perceptions and decision-making regarding AI chatbot use for climate change risk information. We enlisted 1023 U.S. citizens aged 21–65 via CloudResearch in September 2023. The results reveal that high levels of perceived risk, strong negative emotions, and information insufficiency drive information-seeking behavior through AI chatbots. Perceived privacy concerns about AI technology significantly increase AI anxiety, which is positively associated with perceived uncertainty. Both AI anxiety and perceived uncertainty negatively affect the intent to seek information via AI chatbots. Conversely, perceived trust in AI chatbots significantly increases positive emotional responses, reduces perceived uncertainty, and enhances the intent to seek information via AI chatbots. We also investigated the mediation effects within each study model tested. The findings offer theoretical and practical implications for future studies on the public's adoption of AI services for risk information seeking, influenced by both risk-related and technology-based contexts.</div></div>","PeriodicalId":48471,"journal":{"name":"Computers in Human Behavior","volume":"162 ","pages":"Article 108460"},"PeriodicalIF":9.0000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0747563224003285","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0
Abstract
This study explored the factors influencing the U.S. public's intent to seek risk information via AI-powered channels, such as ChatGPT. It focused on the cognitive and affective pathways that lead to uncertainty about both risk information and AI chatbots in the context of climate change risk. We conducted a comparative analysis to discern the impacts of risk perceptions related to climate change and to AI-caused privacy risks on public uncertainty and decision-making regarding the use of AI chatbots. Specifically, we assessed how different risk-related perceptions and emotions contribute to subsequent uncertainty perceptions and decision-making regarding AI chatbot use for climate change risk information. We recruited 1,023 U.S. citizens aged 21–65 via CloudResearch in September 2023. The results reveal that high levels of perceived risk, strong negative emotions, and information insufficiency drive information seeking through AI chatbots. Perceived privacy concerns about AI technology significantly increase AI anxiety, which is positively associated with perceived uncertainty. Both AI anxiety and perceived uncertainty negatively affect the intent to seek information via AI chatbots. Conversely, perceived trust in AI chatbots significantly increases positive emotional responses, reduces perceived uncertainty, and enhances the intent to seek information via AI chatbots. We also examined mediation effects within each of the tested models. The findings offer theoretical and practical implications for future studies on the public's adoption of AI services for risk information seeking, as influenced by both risk-related and technology-based contexts.
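The abstract reports that mediation effects were tested within each model. As a purely illustrative sketch (not the authors' actual analysis), the Python code below shows how an indirect effect along one hypothesized path, for example perceived privacy risk → AI anxiety → information-seeking intent, could be estimated with ordinary least squares and a percentile bootstrap. All variable names, coefficients, and the simulated data are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1023  # matches the reported sample size; the data here are simulated, not the study's

# Hypothetical standardized survey scores (assumed variable names)
privacy_risk = rng.normal(size=n)
ai_anxiety = 0.5 * privacy_risk + rng.normal(scale=0.8, size=n)                       # a-path
seek_intent = -0.4 * ai_anxiety + 0.1 * privacy_risk + rng.normal(scale=0.8, size=n)  # b- and c'-paths

def ols_slopes(y, X):
    """Slope coefficients of y regressed on X (intercept added internally, then dropped)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

def indirect_effect(x, m, y):
    a = ols_slopes(m, x)[0]                        # x -> m
    b = ols_slopes(y, np.column_stack([m, x]))[0]  # m -> y, controlling for x
    return a * b                                   # mediated (indirect) effect

# Percentile-bootstrap confidence interval for the indirect effect
n_boot = 5000
boot = np.empty(n_boot)
for i in range(n_boot):
    idx = rng.choice(n, size=n, replace=True)
    boot[i] = indirect_effect(privacy_risk[idx], ai_anxiety[idx], seek_intent[idx])

point = indirect_effect(privacy_risk, ai_anxiety, seek_intent)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI [{ci_low:.3f}, {ci_high:.3f}]")
```

In practice, studies of this kind usually estimate such paths jointly within a structural equation model; the sketch above only illustrates what an "indirect effect" means for a single mediated path.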
Journal description:
Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It publishes original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. The journal focuses on human interactions with computers, treating the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even if they have limited knowledge of computers.