
Latest publications from Computers in Human Behavior: Artificial Humans

Integrating generative AI in data science programming: Group differences in hint requests
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100089
Tenzin Doleck, Pedram Agand, Dylan Pirrotta

Generative AI applications have increasingly gained visibility in recent educational literature. Yet less is known about how access to generative tools, such as ChatGPT, influences help-seeking during complex problem-solving. In this paper, we aim to advance the understanding of learners' use of a support strategy (hints) when solving data science programming tasks in an online AI-enabled learning environment. The study compared two conditions: students solving problems in DaTu with AI assistance (N = 45) and those without AI assistance (N = 44). Findings reveal no difference in hint-seeking behavior between the two groups, suggesting that the integration of AI assistance has minimal impact on how individuals seek help. The findings also suggest that the availability of AI assistance does not necessarily reduce learners’ reliance on support strategies (such as hints). The current study advances data science education and research by exploring the influence of AI assistance during complex data science problem-solving. We discuss implications and identify paths for future research.
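The abstract does not name the statistical test behind this null result, so the following is only a hedged illustration of how a between-group comparison of hint-request counts could be run: all data are simulated, and the Mann-Whitney U test is chosen here because count data are rarely normal, not because the paper specifies it.

```python
# Illustrative two-group comparison of hint-request counts
# (simulated data; not the authors' actual analysis).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hints_ai = rng.poisson(lam=3.0, size=45)     # hypothetical AI-assisted group (N = 45)
hints_no_ai = rng.poisson(lam=3.1, size=44)  # hypothetical control group (N = 44)

u_stat, p_value = stats.mannwhitneyu(hints_ai, hints_no_ai,
                                     alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
# A non-significant p-value would mirror the reported absence of a
# group difference in hint-seeking behavior.
```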

Citations: 0
AI as decision aid or delegated agent: The effects of trust dimensions on the adoption of AI digital agents
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100094
Aman Pathak, Veena Bansal
AI digital agents may act as decision aids or as delegated agents. A decision-aid agent helps a user make decisions, whereas a delegated agent makes decisions on behalf of the consumer. The study determines the factors affecting the adoption intention of AI digital agents as decision aids and delegated agents. The domain of study is the banking, financial services, and insurance (BFSI) sector. Due to the unique characteristics of AI digital agents, trust has been identified as an important construct in the extant literature. The study decomposed trust into social, cognitive, and affective trust. We employed PLS-SEM and fsQCA to examine the factors drawn from the literature. The findings from PLS-SEM suggest that perceived AI quality affects cognitive trust, perceived usefulness affects affective trust, and social trust affects cognitive and affective trust. The intention to adopt AI as a decision aid is influenced by affective and cognitive trust. The intention to adopt AI as a delegated agent is influenced by social, cognitive, and affective trust. The fsQCA findings indicate that combining AI quality, perceived usefulness, and trust (social, cognitive, and affective) best explains the intention to adopt AI as a decision aid and as a delegated agent.
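As an illustration of the fsQCA half of this analysis pipeline, the sketch below implements the direct calibration step that maps raw scores onto fuzzy-set memberships via a log-odds transform. The anchors and ratings are hypothetical; they are not taken from the paper.

```python
# Direct calibration for fsQCA: map raw scores to fuzzy-set
# membership in [0, 1], anchored at full non-membership, the
# crossover point, and full membership (simulated example).
import numpy as np

def calibrate(x, non_member, crossover, full_member):
    """Ragin-style direct calibration to fuzzy membership scores."""
    x = np.asarray(x, dtype=float)
    # Scale deviations from the crossover so the two outer anchors
    # land at log-odds of about -3 and +3 (membership ~0.05 / ~0.95).
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full_member - crossover),
        3.0 * (x - crossover) / (crossover - non_member),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

# Hypothetical 7-point trust ratings with anchors at 2 / 4 / 6.
trust_raw = np.array([1, 2, 3, 4, 5, 6, 7])
print(calibrate(trust_raw, non_member=2, crossover=4, full_member=6))
```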
Citations: 0
Behavioral and neural evidence for the underestimated attractiveness of faces synthesized using an artificial neural network
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100104
Satoshi Nishida
Recent advancements in artificial intelligence (AI) have not eased human anxiety about AI. If such anxiety diminishes human preference for AI-synthesized visual information, the preference should be reduced solely by the belief that the information is synthesized by AI, independently of its appearance. This study tested this hypothesis by asking experimental participants to rate the attractiveness of faces synthesized by an artificial neural network, under the false instruction that some faces were real and others were synthetic. This experimental design isolated the impact of belief on attractiveness ratings from the actual facial appearance. Brain responses were also recorded with fMRI to examine the neural basis of this belief effect. The results showed that participants rated faces significantly lower when they believed them to be synthetic, and this belief altered the responsiveness of fMRI signals to facial attractiveness in the right fusiform cortex. These findings support the notion that human preference for visual information is reduced solely due to the belief that the information is synthesized by AI, suggesting that AI and robot design should focus not only on enhancing appearance but also on alleviating human anxiety about them.
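Because the belief manipulation relabels the same synthetic faces as "real" or "synthetic", the behavioral effect reduces to a within-subject contrast. A minimal sketch of such a paired comparison, with a simulated sample size and effect (neither comes from the paper):

```python
# Within-subject comparison of attractiveness ratings by believed
# source label (simulated data; illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30                                   # hypothetical number of participants
rating_believed_real = rng.normal(5.0, 1.0, n)
rating_believed_synth = rating_believed_real - rng.normal(0.4, 0.3, n)

t_stat, p_value = stats.ttest_rel(rating_believed_real, rating_believed_synth)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.4f}")
# Lower ratings under the "synthetic" label would correspond to the
# reported belief effect on perceived attractiveness.
```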
Citations: 0
How voice and helpfulness shape perceptions in human–agent teams
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100101
Samuel Westby, Richard J. Radke, Christoph Riedl, Brook Foucault Welles
Voice assistants are increasingly prevalent, from personal devices to team environments. This study explores how voice type and contribution quality influence human–agent team performance and perceptions of anthropomorphism, animacy, intelligence, and trustworthiness. By manipulating both, we reveal mechanisms of perception and clarify ambiguity in previous work. Our results show that the human resemblance of a voice assistant’s voice negatively interacts with the helpfulness of an agent’s contribution to flip its effect on perceived anthropomorphism and perceived animacy. This means human teammates interpret the agent’s contributions differently depending on its voice. Our study found no significant effect of voice on perceived intelligence, trustworthiness, or team performance. We find differences in these measures are caused by manipulating the helpfulness of an agent. These findings suggest that function matters more than form when designing agents for high-performing human–agent teams, but controlling perceptions of anthropomorphism and animacy can be unpredictable even with high human resemblance.
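The reported flip is a crossover interaction between voice human-resemblance and contribution helpfulness. A hedged sketch of how a 2x2 factorial test of that interaction could be set up; the cell means, sample sizes, and ratings are all invented for illustration:

```python
# 2x2 factorial ANOVA: voice (human-like vs. synthetic) crossed with
# helpfulness (helpful vs. unhelpful) on anthropomorphism ratings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
cells = [("human", "helpful", 5.5), ("human", "unhelpful", 4.0),
         ("synthetic", "helpful", 4.5), ("synthetic", "unhelpful", 4.8)]
rows = [(voice, help_, rng.normal(mu, 1.0))
        for voice, help_, mu in cells for _ in range(25)]
df = pd.DataFrame(rows, columns=["voice", "helpfulness", "anthro"])

model = smf.ols("anthro ~ C(voice) * C(helpfulness)", data=df).fit()
print(anova_lm(model, typ=2))  # the interaction row tests the flip
```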
Citations: 0
Are humanoid robots perceived as mindless mannequins?
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100105
Emmanuele Tidoni, Emily S. Cross, Richard Ramsey, Michele Scandola
The shape and texture of humans and humanoid robots provide perceptual information that helps us appropriately categorise these stimuli. However, it remains unclear which features and attributes drive the assignment into human and non-human categories. To explore this issue, we ran a series of five preregistered experiments wherein we presented stimuli that varied in their appearance (i.e., humans, humanoid robots, non-human primates, mannequins, hammers, musical instruments) and asked participants to complete a match-to-category task (Experiments 1-3), a priming task (Experiment 4), or to rate each category along four dimensions (i.e., similarity, liveliness, body association, action association; Experiment 5). Results indicate that categorising human bodies and humanoid robots requires integrating analyses of both their physical shape and visual texture (i.e., to identify a humanoid robot we cannot rely on its visual shape alone). Further, our behavioural findings suggest that human bodies may be represented as a special living category separate from non-human animal entities (i.e., primates). Moreover, results also suggest that categorising humans and humanoid robots may rely on a network of information typically associated with human beings and inanimate objects, respectively (e.g., humans can play musical instruments and have a mind, while robots do not play musical instruments and do not have a human mind). Overall, the paradigms introduced here offer new avenues through which to study the perception of human and artificial agents, and how experiences with humanoid robots may change the perception of humanness along a robot-human continuum.
Citations: 0
The great AI witch hunt: Reviewers’ perception and (Mis)conception of generative AI in research writing
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100095
Hilda Hadan, Derrick M. Wang, Reza Hadi Mogavi, Joseph Tu, Leah Zhang-Kennedy, Lennart E. Nacke
Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers consistently struggled to distinguish between human and AI-augmented writing but their judgements remained consistent. They noted the loss of a “human touch” and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluations of submissions, regardless of any personal biases towards GenAI. The quality of the research itself should remain a priority in reviews, regardless of any preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI's assistance.
Citations: 0
Differences between human and artificial/augmented intelligence in medicine
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100084
Scott Monteith, Tasha Glenn, John R. Geddes, Eric D. Achtyes, Peter C. Whybrow, Michael Bauer

The emphasis on artificial intelligence (AI) is rapidly increasing across many diverse aspects of society. This manuscript discusses some of the key topics related to the expansion of AI. These include a comparison of the unique cognitive capabilities of human intelligence with AI, and the potential risks of using AI in clinical medicine. The general public attitudes towards AI are also discussed, including patient perspectives. As the promotion of AI in high-risk situations such as clinical medicine expands, the limitations, risks and benefits of AI need to be better understood.

Citations: 0
Understanding AI Chatbot adoption in education: PLS-SEM analysis of user behavior factors
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100098
Md Rabiul Hasan, Nahian Ismail Chowdhury, Md Hadisur Rahman, Md Asif Bin Syed, JuHyeong Ryu
The integration of Artificial Intelligence (AI) into education is a recent development, with chatbots emerging as a noteworthy addition to this transformative landscape. As online learning platforms rapidly advance, students need to adapt swiftly to excel in this dynamic environment. Consequently, understanding the acceptance of chatbots, particularly those employing Large Language Models (LLM) such as Chat Generative Pretrained Transformer (ChatGPT), Google Bard, and other interactive AI technologies, is of paramount importance. Investigating how students accept and view chatbots is essential to directing their incorporation into Industry 4.0 and enabling a smooth transition to Industry 5.0's customized and human-centered methodology. However, existing research on chatbots in education has overlooked key behavior-related aspects, such as Optimism, Innovativeness, Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and Accuracy, creating a significant literature gap. To address this gap, this study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to investigate the determinants of chatbot adoption in education among students, considering the Technology Readiness Index and Technology Acceptance Model. Utilizing a five-point Likert scale for data collection, we gathered a total of 185 responses, which were analyzed using R-Studio software. We established 12 hypotheses to achieve these objectives. The results showed that Optimism and Innovativeness are positively associated with Perceived Ease of Use and Perceived Usefulness. Conversely, Discomfort and Insecurity negatively impact Perceived Ease of Use, with only Insecurity negatively affecting Perceived Usefulness. Furthermore, Perceived Ease of Use, Perceived Usefulness, Interaction and Engagement, Accuracy, and Responsiveness all significantly contribute to the Intention to Use, whereas Transparency and Ethics have a negative impact on Intention to Use. Finally, Intention to Use mediates the relationships between Interaction, Engagement, Accuracy, Responsiveness, Transparency, Ethics, and Perception of Decision Making. These findings provide insights for future technology designers, elucidating critical user behavior factors influencing chatbot adoption and utilization in educational contexts.
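Before estimating structural paths from Likert data, construct reliability is commonly checked. The paper reports its analysis in R-Studio; purely for illustration, the sketch below computes Cronbach's alpha from scratch in Python on a simulated four-item construct (the items and response pattern are hypothetical):

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance of
# the summed scale) -- a standard reliability check for Likert items.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(185, 1))       # 185 respondents, as in the study
items = np.clip(np.round(4 + latent + rng.normal(0, 0.8, (185, 4))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")
```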
Citations: 0
Making moral decisions with artificial agents as advisors. A fNIRS study
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100096
Eve Florianne Fabre, Damien Mouratille, Vincent Bonnemains, Grazia Pia Palmiotti, Mickael Causse
Artificial Intelligence (AI) is on the verge of impacting every domain of our lives. It is increasingly being used as an advisor to assist in making decisions. The present study aimed at investigating the influence of moral arguments provided by AI-advisors (i.e., a decision-aid tool) on human moral decision-making and the associated neural correlates. Participants were presented with sacrificial moral dilemmas and had to make moral decisions either by themselves (i.e., baseline run) or with AI-advisors that provided utilitarian or deontological arguments (i.e., AI-advised run), while their brain activity was measured using an fNIRS device. Overall, AI-advisors significantly influenced participants. Longer response times and a decrease in right dorsolateral prefrontal cortex activity were observed in response to deontological arguments than to utilitarian arguments. Being provided with deontological arguments by machines appears to have led to a decreased appraisal of the affective response to the dilemmas. This resulted in a reduced level of utilitarianism, supposedly in an attempt to avoid behaving as cold-bloodedly as machines and to preserve their (self-)image. Taken together, these results suggest that motivational power can lead to a voluntary up- and down-regulation of affective processes during moral decision-making.
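The reported response-time difference between argument types is again a within-subject contrast. A minimal sketch using a Wilcoxon signed-rank test on simulated, right-skewed response times; the paper does not state which test it used:

```python
# Paired comparison of decision response times for deontological
# vs. utilitarian AI arguments (simulated, skewed RT data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 40                                        # hypothetical sample size
rt_utilitarian = rng.lognormal(mean=1.0, sigma=0.3, size=n)
rt_deontological = rt_utilitarian + rng.lognormal(mean=-1.0, sigma=0.3, size=n)

w_stat, p_value = stats.wilcoxon(rt_deontological, rt_utilitarian)
print(f"W = {w_stat:.1f}, p = {p_value:.4f}")
# Longer times for deontological arguments would match the reported pattern.
```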
Citations: 0
Aversion against machines with complex mental abilities: The role of individual differences
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100087
Andrea Grundke, Markus Appel, Jan-Philipp Stein

Theory suggests that robots with human-like mental capabilities (i.e., high agency and experience) evoke stronger aversion than robots without these capabilities. Yet, while several studies support this prediction, there is also evidence that the mental prowess of robots could be evaluated positively, at least by some individuals. To help resolve this ambivalence, we focused on rather stable individual differences that may shape users' responses to machines with different levels of (perceived) mental ability. Specifically, we explored four key variables as potential moderators: monotheistic religiosity, the tendency to anthropomorphize, prior attitudes towards robots, and the general affinity for complex technology. Two pre-registered online experiments (N1 = 391, N2 = 617) were conducted, using text vignettes to introduce participants to a robot with or without complex, human-like capabilities. Results showed that negative attitudes towards robots increased the relative aversion against machines with (vs. without) complex minds, whereas technology affinity weakened the difference between conditions. Results for monotheistic religiosity were mixed, while the tendency to anthropomorphize had no significant impact on the evoked aversion. Overall, we conclude that certain individual differences play an important role in perceptions of machines with complex minds and should be considered in future research.
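Testing whether a stable trait moderates the effect of a robot's complex mind on aversion amounts to an interaction term in a regression. A hedged sketch with variable names and effect sizes invented for illustration, not drawn from the authors' model:

```python
# Moderation as an interaction term: does trait level change the
# effect of the complex-mind condition on aversion ratings?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 400
trait = rng.normal(0, 1, n)              # e.g., negative attitudes towards robots
complex_mind = rng.integers(0, 2, n)     # 0 = simple robot, 1 = complex mind
aversion = (3 + 0.2 * complex_mind + 0.3 * trait
            + 0.5 * complex_mind * trait + rng.normal(0, 1, n))

df = pd.DataFrame({"aversion": aversion, "trait": trait,
                   "complex_mind": complex_mind})
model = smf.ols("aversion ~ complex_mind * trait", data=df).fit()
print(model.summary().tables[1])  # the interaction coefficient is the moderation
```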

Citations: 0