
Latest Publications in Computers in Human Behavior: Artificial Humans

AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effects
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100062
Marc Pinski, Alexander Benlian

The rapid advancement of artificial intelligence (AI) has brought transformative changes to various aspects of human life, leading to an exponential increase in the number of AI users. The broad access and usage of AI enable immense benefits but also give rise to significant challenges. One way for AI users to address these challenges is to develop AI literacy, referring to human proficiency in different subject areas of AI that enable purposeful, efficient, and ethical usage of AI technologies. This study aims to comprehensively understand and structure the research on AI literacy for AI users through a systematic, scoping literature review. To this end, we synthesize the literature, provide a conceptual framework, and develop a research agenda. Our review paper holistically assesses the fragmented AI literacy research landscape (68 papers) while critically examining its specificity to different user groups and its distinction from other technology literacies, revealing that research efforts are in part not well integrated. We organize our findings in an overarching conceptual framework structured along the learning methods leading to, the components constituting, and the effects stemming from AI literacy. Our research agenda – oriented along the developed conceptual framework – sheds light on the most promising research opportunities to prepare AI users for an AI-powered future of work and society.

Citations: 0
Modeling morality and spirituality in artificial chaplains
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100051
Mark Graves
{"title":"Modeling morality and spirituality in artificial chaplains","authors":"Mark Graves","doi":"10.1016/j.chbah.2024.100051","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100051","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000112/pdfft?md5=c4380ab3c86812f04171e97918fb3c5d&pid=1-s2.0-S2949882124000112-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139744221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Virtual vs. Human influencers: The battle for consumer hearts and minds
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100059
Abhishek Dondapati, Ranjit Kumar Dehury

Virtual influencers, or fictional CGI-generated social media personas, are gaining popularity. However, research lacks information on how they compare to human influencers in shaping consumer attitudes and purchase intent. This study examines whether perceived homophily and para-social relationships mediate the effect of influencer type on purchase intent and the moderating effect of perceived authenticity. A 2 × 2 between-subjects experiment manipulated influencer type (virtual vs. human) and product type (hedonic vs. utilitarian). Young adult participants viewed an Instagram profile of a lifestyle influencer. Authenticity, perceived homophily, para-social relationship, and purchase intent were measured using established scales. Perceived homophily and para-social relationships mediate the effect of influencer type on purchase intent. A significant interaction showed that perceived authenticity moderated the mediated pathway, such that the indirect effect via para-social relationship and perceived homophily was stronger for human influencers. Maintaining an authentic persona is critical for virtual influencers to sway consumer behaviours, especially for audiences less familiar with social media.
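The mediation claim above can be probed with a standard bootstrapped indirect-effect test. The sketch below is illustrative only: the column names (influencer_type, homophily, purchase_intent), the simulated data, and the effect sizes are assumptions, not the authors' materials or analysis code.

```python
# Illustrative bootstrapped indirect-effect (mediation) test; all column names,
# effect sizes, and data are simulated assumptions, not the study's materials.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200

# Hypothetical data: 0 = virtual influencer, 1 = human influencer
df = pd.DataFrame({"influencer_type": rng.integers(0, 2, n)})
df["homophily"] = 0.5 * df["influencer_type"] + rng.normal(size=n)
df["purchase_intent"] = 0.6 * df["homophily"] + rng.normal(size=n)

def indirect_effect(data: pd.DataFrame) -> float:
    # a-path: influencer type -> perceived homophily
    a = smf.ols("homophily ~ influencer_type", data=data).fit().params["influencer_type"]
    # b-path: homophily -> purchase intent, controlling for influencer type
    b = smf.ols("purchase_intent ~ homophily + influencer_type", data=data).fit().params["homophily"]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect a*b
boot = [indirect_effect(df.sample(frac=1.0, replace=True, random_state=i)) for i in range(1000)]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% bootstrap CI [{ci_low:.3f}, {ci_high:.3f}]")
```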

Citations: 0
Trust in artificial intelligence: Literature review and main path analysis
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100043
Bruno Miranda Henrique, Eugene Santos Jr.

Artificial Intelligence (AI) is present in various modern systems, but it still faces acceptance barriers in many fields. Medical diagnosis, autonomous cars, recommender systems, and robotics are examples of areas in which some humans distrust AI technology, which ultimately leads to low acceptance rates. Conversely, those same applications can attract humans who over-rely on AI, acting as recommended by the systems without questioning the risks of a wrong decision. There is therefore an optimal balance with respect to trust in AI, achieved by calibrating expectations against capabilities. In this context, the literature about factors influencing trust in AI and its calibration is scattered across research fields, with no objective summaries of the overall evolution of the theme. To close this gap, this paper contributes a literature review of the most influential papers on trust in AI, selected by quantitative methods. It also proposes a Main Path Analysis of the literature, highlighting how the theme has evolved over the years. As a result, researchers will find an overview of trust in AI based on the most important, objectively selected papers, along with tendencies and opportunities for future research.
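Main Path Analysis of a citation network is typically based on Search Path Count (SPC) edge weights. The sketch below shows the general idea on a toy citation graph; the graph, node names, and the greedy forward search are assumptions for illustration, not the authors' implementation.

```python
# Illustrative Search Path Count (SPC) main path extraction on a toy citation
# graph; the graph, node names, and greedy search are assumptions for clarity.
import networkx as nx

# Edge (a, b) means paper b cites paper a, so knowledge flows a -> b
G = nx.DiGraph([
    ("P1", "P3"), ("P2", "P3"), ("P1", "P4"), ("P3", "P4"),
    ("P3", "P5"), ("P4", "P6"), ("P5", "P6"),
])
order = list(nx.topological_sort(G))
sources = [v for v in G if G.in_degree(v) == 0]
sinks = [v for v in G if G.out_degree(v) == 0]

# n_minus[v]: number of source-to-v paths; n_plus[v]: number of v-to-sink paths
n_minus, n_plus = {}, {}
for v in order:
    n_minus[v] = 1 if v in sources else sum(n_minus[u] for u in G.predecessors(v))
for v in reversed(order):
    n_plus[v] = 1 if v in sinks else sum(n_plus[w] for w in G.successors(v))

# SPC weight of an edge = number of source-to-sink paths that traverse it
spc = {(u, v): n_minus[u] * n_plus[v] for u, v in G.edges}

# Greedy forward main path: start at the highest-SPC source edge and always
# follow the highest-SPC outgoing edge until a sink is reached
u, v = max(((a, b) for a, b in spc if a in sources), key=spc.get)
main_path = [u, v]
while G.out_degree(v) > 0:
    u, v = max(((v, w) for w in G.successors(v)), key=spc.get)
    main_path.append(v)
print("SPC weights:", spc)
print("main path:", " -> ".join(main_path))
```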

Citations: 0
A review of assessment for learning with artificial intelligence
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2023.100040
Bahar Memarian, Tenzin Doleck

Reformed Assessment For Learning (AFL) practice involves designing activities, evaluation, and feedback processes that improve student learning. While Artificial Intelligence (AI) has blossomed as a field in education, less work has examined the studies and challenges reported at the intersection of AFL and AI. We review the education literature to examine the state of work on AFL and AI. A search of Web of Science, SCOPUS, and Google Scholar yielded 35 studies for review. We share the trends in research design, AFL conceptions, and AI challenges in the reviewed studies, and offer implications of AFL and AI together with considerations for future research.

Citations: 0
Co-creating art with generative artificial intelligence: Implications for artworks and artists
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100056
Uwe Messer

Synthetic visual art is becoming a commodity due to generative artificial intelligence (AI). The trend of using AI for co-creation will not spare artists’ creative processes, and it is important to understand how the use of generative AI at different stages of the creative process affects both the evaluation of the artist and the result of the human-machine collaboration (i.e., the visual artifact). In three experiments (N = 560), this research explores how the evaluation of artworks is transformed by the revelation that the artist collaborated with AI at different stages of the creative process. The results show that co-created art is less liked and recognized, especially when AI was used in the implementation stage. While co-created art is perceived as more novel, it lacks creative authenticity, which exerts a dominant influence. The results also show that artists’ perceptions suffer from the co-creation process, and that artists who co-create are less admired because they are perceived as less authentic. Two boundary conditions are identified. The negative effect can be mitigated by disclosing the level of artist involvement in co-creation with AI (e.g., by training the algorithm on a curated set of images vs. simply prompting an off-the-shelf AI image generator). In the context of art that is perceived as commercially motivated (e.g., stock images), the effect is also diminished. This research has important implications for the literature on human-AI-collaboration, research on authenticity, and the ongoing policy debate regarding the transparency of algorithmic presence.
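Boundary-condition tests like the ones summarized above are commonly run as factorial between-subjects models with an interaction term. The following sketch simulates such a design; the factor names (stage, disclosure), cell means, and sample size are assumptions for illustration, not the study's data or analysis script.

```python
# Illustrative two-factor between-subjects test of a boundary condition; factor
# names, cell means, and sample size are simulated assumptions, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 240
stage = rng.choice(["ideation", "implementation"], size=n)
disclosure = rng.choice(["involvement_hidden", "involvement_disclosed"], size=n)

# Simulate lower liking when AI is used at the implementation stage, attenuated
# when the artist's level of involvement is disclosed
liking = (5.0
          - 0.8 * (stage == "implementation")
          + 0.5 * ((stage == "implementation") & (disclosure == "involvement_disclosed"))
          + rng.normal(scale=1.0, size=n))
df = pd.DataFrame({"stage": stage, "disclosure": disclosure, "liking": liking})

model = smf.ols("liking ~ C(stage) * C(disclosure)", data=df).fit()
print(anova_lm(model, typ=2))                                         # main effects + interaction
print(df.groupby(["stage", "disclosure"])["liking"].mean().round(2))  # cell means
```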

Citations: 0
The effect of source disclosure on evaluation of AI-generated messages
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100058
Sue Lim, Ralf Schmälzle

Advancements in artificial intelligence (AI) over the last decade demonstrate that machines can exhibit communicative behavior and influence how humans think, feel, and behave. In fact, the recent development of ChatGPT has shown that large language models (LLMs) can be leveraged to generate high-quality communication content at scale and across domains, suggesting that they will be increasingly used in practice. However, many questions remain about how knowing the source of the messages influences recipients' evaluation of and preference for AI-generated messages compared to human-generated messages. This paper investigated this topic in the context of vaping prevention messaging. In Study 1, which was pre-registered, we examined the influence of source disclosure on young adults' evaluation of AI-generated health prevention messages compared to human-generated messages. We found that source disclosure (i.e., labeling the source of a message as AI vs. human) significantly impacted the evaluation of the messages but did not significantly alter message rankings. In a follow-up study (Study 2), we examined how the influence of source disclosure may vary by the adults’ negative attitudes towards AI. We found a significant moderating effect of negative attitudes towards AI on message evaluation, but not for message selection. However, source disclosure decreased the preference for AI-generated messages for those with moderate levels (statistically significant) and high levels (directional) of negative attitudes towards AI. Overall, the results of this series of studies showed a slight bias against AI-generated messages once the source was disclosed, adding to the emerging area of study that lies at the intersection of AI and communication.
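A moderation effect of this kind is usually estimated as an interaction term in a regression model, optionally followed by a simple-slopes probe. The sketch below illustrates that pattern with simulated data; the variable names (ai_label, neg_attitude, evaluation) and coefficients are hypothetical and not drawn from the study.

```python
# Illustrative moderation (interaction) model with a simple-slopes probe; the
# variable names and coefficients are simulated assumptions, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 300
df = pd.DataFrame({
    "ai_label": rng.integers(0, 2, n),       # 1 = message source disclosed as AI
    "neg_attitude": rng.normal(size=n),      # standardized negative attitudes toward AI
})
df["evaluation"] = (4.0
                    - 0.2 * df["ai_label"]
                    - 0.3 * df["ai_label"] * df["neg_attitude"]
                    + rng.normal(scale=0.5, size=n))

# The ai_label:neg_attitude coefficient captures the moderation effect
model = smf.ols("evaluation ~ ai_label * neg_attitude", data=df).fit()
print(model.summary().tables[1])

# Simple-slopes probe: effect of the AI label at low vs. high negative attitudes
for level in (-1, 1):
    slope = model.params["ai_label"] + model.params["ai_label:neg_attitude"] * level
    print(f"effect of AI label at neg_attitude = {level:+d}: {slope:.2f}")
```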

Citations: 0
Virtual voices for real change: The efficacy of virtual humans in pro-environmental social marketing for mitigating misinformation about climate change
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100047
Won-Ki Moon, Y. Greg Song, Lucy Atkinson

Academics have focused their research on the rise of non-human entities, particularly virtual humans. To assess the effectiveness of virtual humans in influencing individual behavior through campaigns, we conducted two separate online experiments involving different participant groups: university students (N = 167) and U.S. adults (N = 320). We compared individuals’ responses to video-type pro-environmental campaigns featuring a virtual or actual human scientist as the central figure who provides testimonials about their individual efforts to prevent misinformation about climate change. The results indicate that an actual human protagonist evoked a stronger sense of identification compared to a virtual human counterpart. Nevertheless, we also observed that virtual humans can evoke empathy for the characters, leading individuals to perceive them as living entities who can have emotions. The insights gleaned from this study have the potential to shape the creation of virtual human content in various domains, including pro-social campaigns and marketing communications.

Citations: 0
Perception is reality? Understanding user perceptions of chatbot-inferred versus self-reported personality traits
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100057
Lingyao (Ivy) Yuan, Tianjun Sun, Alan R. Dennis, Michelle Zhou

Artificial Intelligence (AI) can infer one's personality from online behavior, which offers an interesting alternative to traditional, self-reported personality assessments. Recent studies comparing AI-inferred personality to personality derived from traditional assessments have found noticeable differences between the two (meta-analyses have found mean correlations of 0.3 between AI-inferred personality and personality from surveys). One important but unanswered question is how users perceive their personality derived from both methods. Which do users perceive to be more accurate, and more satisfying to use? To answer this question, we used both methods to conduct personality assessments of 595 participants and then asked users how well the two sets of results fit them, as well as their satisfaction and intention to use them. Participants reported that both results fit them equally well, even though the two methods reported different personality scores. Users were equally satisfied with both methods but were more likely to use the survey, likely because it took less time. Our findings imply that both methods measure different aspects of user personality, and both may be useful. We discuss the pros and cons of AI-inferred versus traditional, self-reported personality and indicate future research directions of AI-inferred personality assessment and the implications of their use for real-world applications.
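The convergence between AI-inferred and self-reported scores mentioned above is typically summarized as per-trait correlations. The sketch below simulates Big Five scores with roughly the 0.3 correlation cited from meta-analyses and computes Pearson r for each trait; the column names and simulated data are illustrative, not the study's dataset.

```python
# Illustrative per-trait convergence check between AI-inferred and self-reported
# Big Five scores; the simulated data and ~0.3 correlation are assumptions.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
traits = ["openness", "conscientiousness", "extraversion", "agreeableness", "neuroticism"]
n = 595  # sample size reported in the abstract

self_report = pd.DataFrame(rng.normal(size=(n, len(traits))), columns=traits)
# Simulate AI-inferred scores that correlate roughly 0.3 with the self-reports
ai_inferred = 0.3 * self_report + np.sqrt(1 - 0.3 ** 2) * rng.normal(size=(n, len(traits)))

for trait in traits:
    r, p = stats.pearsonr(self_report[trait], ai_inferred[trait])
    print(f"{trait:<18} r = {r:.2f} (p = {p:.3g})")
```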

Citations: 0
How does perception of artificial intelligence-user interaction (PAIUI) impact organizational attractiveness among external users? An empirical study testing the mediating variables
Pub Date : 2024-01-01 DOI: 10.1016/j.chbah.2024.100048
Raghda Abulsaoud Ahmed Younis, Mohammed Rabiee Salama, Mervat Mohammed Sayed Rashwan

The purpose of this paper is to explain how perception of AI-user interaction (PAIUI) impacts organizational attractiveness among external AI users, and to test the mediating variables that explain this relation. The sample includes both customers and job applicants who have previously interacted with AI tools. A questionnaire was developed, tested, and distributed among the AI users. The results of 194 valid questionnaires revealed that perception of AI directly impacts organizational attractiveness only for job applicants. Moreover, the findings suggest that both fairness and anxiety mediate the relationship between perceptions of AI and attractiveness. On the other hand, the results showed that perception of AI does not directly impact corporate attractiveness among customers, although it can affect dimensions of communication quality. This paper is one of the first to investigate the impacts of AI across different stakeholders, and one of the few that explain the AI-attractiveness relationship.

Citations: 0