
Latest publications in Computers in Human Behavior: Artificial Humans

Social media influencer vs. virtual influencer: The mediating role of source credibility and authenticity in advertising effectiveness within AI influencer marketing
Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100100
Donggyu Kim, Zituo Wang
This study examines the differences between social media influencers and virtual influencers in influencer marketing, focusing on their impact on marketing effectiveness. Using a between-subjects experimental design, the research explores how human influencers (HIs), human-like virtual influencers (HVIs), and anime-like virtual influencers (AVIs) affect perceptions of authenticity, source credibility, and overall marketing effectiveness. The study evaluates these influencer types across both for-profit and not-for-profit messaging contexts to determine how message intent influences audience reactions. The findings reveal that HVIs can be as effective as human influencers, especially in not-for-profit messaging, where their authenticity and source credibility are higher. However, when the messaging shifts to for-profit motives, the advantage of HVIs diminishes, aligning more closely with AVIs, which consistently show lower effectiveness. The study highlights the critical role that both authenticity and source credibility play in mediating the relationship between the type of influencer and advertising effectiveness.
Citations: 0
Integrating sound effects and background music in Robotic storytelling – A series of online studies across different story genres
Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100085
Sophia C. Steinhaeusser, Birgit Lugrin

Social robots as storytellers combine the advantages of human storytellers – such as embodiment, gestures, and gaze – with those of audio books – a large repertoire of voices, sound effects, and background music. However, research on adding non-speech sounds to robotic storytelling is still in its infancy. The current series of four online studies investigates the influence of sound effects and background music in robotic storytelling on recipients’ storytelling experience and enjoyment, robot perception, and emotion induction across different story genres, i.e. horror, detective, romantic, and humorous stories. Results indicate increased enjoyment for romantic stories and a trend toward decreased fatigue for all genres when sound effects and background music are added to the robotic storytelling. Of the four genres examined, horror stories seem to benefit the most from the addition of non-speech sounds. Future research should provide guidelines for the selection of music and sound effects to improve the realization of robotic storytelling accompanied by non-speech sound. In conclusion, our ongoing research suggests that the integration of sound effects and background music holds promise for enhancing robotic storytelling, and our genre comparison provides initial guidance on when to use them.

Citations: 0
When own interest stands against the “greater good” – Decision randomization in ethical dilemmas of autonomous systems that involve their user’s self-interest
Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100097
Anja Bodenschatz
Autonomous systems (ASs) decide upon ethical dilemmas, and their artificial intelligence as well as their situational settings become more and more complex. To study common-sense morality concerning ASs, abstracted dilemmas involving autonomous vehicle (AV) accidents are a common tool. A special case of ethical dilemmas arises when the AS’s users themselves are affected. Many people want AVs to adhere to utilitarian programming (e.g., to save the larger group) or egalitarian programming (i.e., to treat every person equally). However, they want their own AV to protect them instead of the “greater good”. That people reject utilitarian programming as an AS’s user while supporting the idea from an impartial perspective has been termed the “social dilemma of AVs”. Meanwhile, preferences for another technical capability, which would implement egalitarian programming, have not been elicited for dilemmas involving self-interest: decision randomization. This paper investigates normative and descriptive preferences for a self-protective, self-sacrificial, or randomized choice by an AS in a dilemma where people are the sole passenger of an AV and their survival stands against the survival of several others. Results suggest that randomization may mitigate the “social dilemma of AVs” by bridging between a societally accepted programming and the urge of ASs’ users for self-protection.
Citations: 0
Integrating generative AI in data science programming: Group differences in hint requests
Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100089
Tenzin Doleck, Pedram Agand, Dylan Pirrotta

Generative AI applications have increasingly gained visibility in recent educational literature. Yet less is known about how access to generative tools, such as ChatGPT, influences help-seeking during complex problem-solving. In this paper, we aim to advance the understanding of learners' use of a support strategy (hints) when solving data science programming tasks in an online AI-enabled learning environment. The study compared two conditions: students solving problems in DaTu with AI assistance (N = 45) and those without AI assistance (N = 44). Findings reveal no difference in hint-seeking behavior between the two groups, suggesting that the integration of AI assistance has minimal impact on how individuals seek help. The findings also suggest that the availability of AI assistance does not necessarily reduce learners’ reliance on support strategies (such as hints). The current study advances data science education and research by exploring the influence of AI assistance during complex data science problem-solving. We discuss implications and identify paths for future research.

Citations: 0
AI as decision aid or delegated agent: The effects of trust dimensions on the adoption of AI digital agents
Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100094
Aman Pathak, Veena Bansal
AI digital agents may act as decision aids or as delegated agents. A decision-aid agent helps a user make decisions, whereas a delegated agent makes decisions on behalf of the consumer. The study determines the factors affecting the intention to adopt AI digital agents as decision aids and as delegated agents. The domain of study is the banking, financial services, and insurance (BFSI) sector. Due to the unique characteristics of AI digital agents, trust has been identified as an important construct in the extant literature. The study decomposed trust into social, cognitive, and affective trust. We employed PLS-SEM and fsQCA to examine the factors drawn from the literature. The findings from PLS-SEM suggest that perceived AI quality affects cognitive trust, perceived usefulness affects affective trust, and social trust affects cognitive and affective trust. The intention to adopt AI as a decision aid is influenced by affective and cognitive trust. The intention to adopt AI as a delegated agent is influenced by social, cognitive, and affective trust. FsQCA findings indicate that combining AI quality, perceived usefulness, and trust (social, cognitive, and affective) best explains the intention to adopt AI as a decision aid and as a delegated agent.
Citations: 0
Behavioral and neural evidence for the underestimated attractiveness of faces synthesized using an artificial neural network
Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100104
Satoshi Nishida
Recent advancements in artificial intelligence (AI) have not eased human anxiety about AI. If such anxiety diminishes human preference for AI-synthesized visual information, the preference should be reduced solely by the belief that the information is synthesized by AI, independently of its appearance. This study tested this hypothesis by asking experimental participants to rate the attractiveness of faces synthesized by an artificial neural network, under the false instruction that some faces were real and others were synthetic. This experimental design isolated the impact of belief on attractiveness ratings from the actual facial appearance. Brain responses were also recorded with fMRI to examine the neural basis of this belief effect. The results showed that participants rated faces significantly lower when they believed them to be synthetic, and this belief altered the responsiveness of fMRI signals to facial attractiveness in the right fusiform cortex. These findings support the notion that human preference for visual information is reduced solely due to the belief that the information is synthesized by AI, suggesting that AI and robot design should focus not only on enhancing appearance but also on alleviating human anxiety about them.
Citations: 0
How voice and helpfulness shape perceptions in human–agent teams
Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100101
Samuel Westby , Richard J. Radke , Christoph Riedl , Brook Foucault Welles
Voice assistants are increasingly prevalent, from personal devices to team environments. This study explores how voice type and contribution quality influence human–agent team performance and perceptions of anthropomorphism, animacy, intelligence, and trustworthiness. By manipulating both, we reveal mechanisms of perception and clarify ambiguity in previous work. Our results show that the human resemblance of a voice assistant’s voice negatively interacts with the helpfulness of an agent’s contribution to flip its effect on perceived anthropomorphism and perceived animacy. This means human teammates interpret the agent’s contributions differently depending on its voice. Our study found no significant effect of voice on perceived intelligence, trustworthiness, or team performance. We find differences in these measures are caused by manipulating the helpfulness of an agent. These findings suggest that function matters more than form when designing agents for high-performing human–agent teams, but controlling perceptions of anthropomorphism and animacy can be unpredictable even with high human resemblance.
Citations: 0
Are humanoid robots perceived as mindless mannequins?
Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100105
Emmanuele Tidoni , Emily S. Cross , Richard Ramsey , Michele Scandola
The shape and texture of humans and humanoid robots provide perceptual information that helps us to appropriately categorise these stimuli. However, it remains unclear which features and attributes drive the assignment into human and non-human categories. To explore this issue, we ran a series of five preregistered experiments wherein we presented stimuli that varied in their appearance (i.e., humans, humanoid robots, non-human primates, mannequins, hammers, musical instruments) and asked participants to complete a match-to-category task (Experiments 1-2-3), a priming task (Experiment 4), or to rate each category along four dimensions (i.e., similarity, liveliness, body association, action association; Experiment 5). Results indicate that categorising human bodies and humanoid robots requires integrating analyses of both their physical shape and their visual texture (i.e., to identify a humanoid robot we cannot rely on its visual shape alone). Further, our behavioural findings suggest that human bodies may be represented as a special living category separate from non-human animal entities (i.e., primates). Moreover, results also suggest that categorising humans and humanoid robots may rely on networks of information typically associated with human beings and inanimate objects, respectively (e.g., humans can play musical instruments and have a mind, while robots do not play musical instruments and do not have a human mind). Overall, the paradigms introduced here offer new avenues through which to study the perception of human and artificial agents, and how experiences with humanoid robots may change the perception of humanness along a robot–human continuum.
Cited by: 0
The great AI witch hunt: Reviewers’ perception and (Mis)conception of generative AI in research writing
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100095
Hilda Hadan, Derrick M. Wang, Reza Hadi Mogavi, Joseph Tu, Leah Zhang-Kennedy, Lennart E. Nacke
Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers consistently struggled to distinguish between human and AI-augmented writing but their judgements remained consistent. They noted the loss of a “human touch” and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluations of submissions, regardless of any personal biases towards GenAI. The quality of the research itself should remain a priority in reviews, regardless of any preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI's assistance.
{"title":"The great AI witch hunt: Reviewers’ perception and (Mis)conception of generative AI in research writing","authors":"Hilda Hadan,&nbsp;Derrick M. Wang,&nbsp;Reza Hadi Mogavi,&nbsp;Joseph Tu,&nbsp;Leah Zhang-Kennedy,&nbsp;Lennart E. Nacke","doi":"10.1016/j.chbah.2024.100095","DOIUrl":"10.1016/j.chbah.2024.100095","abstract":"<div><div>Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers consistently struggled to distinguish between human and AI-augmented writing but their judgements remained consistent. They noted the loss of a “human touch” and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluations of submissions, regardless of any personal biases towards GenAI. The quality of the research itself should remain a priority in reviews, regardless of any preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI's assistance.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100095"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142587264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Differences between human and artificial/augmented intelligence in medicine
Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100084
Scott Monteith , Tasha Glenn , John R. Geddes , Eric D. Achtyes , Peter C. Whybrow , Michael Bauer

The emphasis on artificial intelligence (AI) is rapidly increasing across many diverse aspects of society. This manuscript discusses some of the key topics related to the expansion of AI. These include a comparison of the unique cognitive capabilities of human intelligence with AI, and the potential risks of using AI in clinical medicine. The general public's attitudes towards AI are also discussed, including patient perspectives. As the promotion of AI in high-risk situations such as clinical medicine expands, the limitations, risks and benefits of AI need to be better understood.

{"title":"Differences between human and artificial/augmented intelligence in medicine","authors":"Scott Monteith ,&nbsp;Tasha Glenn ,&nbsp;John R. Geddes ,&nbsp;Eric D. Achtyes ,&nbsp;Peter C. Whybrow ,&nbsp;Michael Bauer","doi":"10.1016/j.chbah.2024.100084","DOIUrl":"10.1016/j.chbah.2024.100084","url":null,"abstract":"<div><p>The emphasis on artificial intelligence (AI) is rapidly increasing across many diverse aspects of society. This manuscript discusses some of the key topics related to the expansion of AI. These include a comparison of the unique cognitive capabilities of human intelligence with AI, and the potential risks of using AI in clinical medicine. The general public attitudes towards AI are also discussed, including patient perspectives. As the promotion of AI in high-risk situations such as clinical medicine expands, the limitations, risks and benefits of AI need to be better understood.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100084"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000446/pdfft?md5=de42c1e5a75fbb492e2bc6a082094c1f&pid=1-s2.0-S2949882124000446-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141853511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0