Mengxue Ou, Han Zheng, Yueliang Zeng, Preben Hansen
{"title":"信不信由你:了解用户评估人工智能生成信息可信度的动机和策略","authors":"Mengxue Ou, Han Zheng, Yueliang Zeng, Preben Hansen","doi":"10.1177/14614448241293154","DOIUrl":null,"url":null,"abstract":"The evolution of artificial intelligence (AI) facilitates the creation of multimodal information of mixed quality, intensifying the challenges individuals face when assessing information credibility. Through in-depth interviews with users of generative AI platforms, this study investigates the underlying motivations and multidimensional approaches people use to assess the credibility of AI-generated information. Four major motivations driving users to authenticate information are identified: expectancy violation, task features, personal involvement, and pre-existing attitudes. Users evaluate AI-generated information’s credibility using both internal (e.g. relying on AI affordances, content integrity, and subjective expertise) and external approaches (e.g. iterative interaction, cross-validation, and practical testing). Theoretical and practical implications are discussed in the context of AI-generated content assessment.","PeriodicalId":19149,"journal":{"name":"New Media & Society","volume":null,"pages":null},"PeriodicalIF":4.5000,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Trust it or not: Understanding users’ motivations and strategies for assessing the credibility of AI-generated information\",\"authors\":\"Mengxue Ou, Han Zheng, Yueliang Zeng, Preben Hansen\",\"doi\":\"10.1177/14614448241293154\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The evolution of artificial intelligence (AI) facilitates the creation of multimodal information of mixed quality, intensifying the challenges individuals face when assessing information credibility. Through in-depth interviews with users of generative AI platforms, this study investigates the underlying motivations and multidimensional approaches people use to assess the credibility of AI-generated information. Four major motivations driving users to authenticate information are identified: expectancy violation, task features, personal involvement, and pre-existing attitudes. Users evaluate AI-generated information’s credibility using both internal (e.g. relying on AI affordances, content integrity, and subjective expertise) and external approaches (e.g. iterative interaction, cross-validation, and practical testing). 
Theoretical and practical implications are discussed in the context of AI-generated content assessment.\",\"PeriodicalId\":19149,\"journal\":{\"name\":\"New Media & Society\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.5000,\"publicationDate\":\"2024-11-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"New Media & Society\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1177/14614448241293154\",\"RegionNum\":1,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMMUNICATION\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"New Media & Society","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1177/14614448241293154","RegionNum":1,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
Trust it or not: Understanding users’ motivations and strategies for assessing the credibility of AI-generated information
The evolution of artificial intelligence (AI) facilitates the creation of multimodal information of mixed quality, intensifying the challenges individuals face when assessing information credibility. Through in-depth interviews with users of generative AI platforms, this study investigates the underlying motivations and multidimensional approaches people use to assess the credibility of AI-generated information. Four major motivations driving users to authenticate information are identified: expectancy violation, task features, personal involvement, and pre-existing attitudes. Users evaluate AI-generated information’s credibility using both internal (e.g. relying on AI affordances, content integrity, and subjective expertise) and external approaches (e.g. iterative interaction, cross-validation, and practical testing). Theoretical and practical implications are discussed in the context of AI-generated content assessment.
Journal description:
New Media & Society engages in critical discussions of the key issues arising from the scale and speed of new media development, drawing on a wide range of disciplinary perspectives and on both theoretical and empirical research. The journal includes contributions on: -the individual and the social, the cultural and the political dimensions of new media -the global and local dimensions of the relationship between media and social change -contemporary as well as historical developments -the implications and impacts of, as well as the determinants and obstacles to, media change the relationship between theory, policy and practice.