Integrating generative AI in data science programming: Group differences in hint requests
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100089 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100089
Tenzin Doleck, Pedram Agand, Dylan Pirrotta
Generative AI applications have increasingly gained visibility in recent educational literature. Yet less is known about how access to generative tools, such as ChatGPT, influences help-seeking during complex problem-solving. In this paper, we aim to advance the understanding of learners' use of a support strategy (hints) when solving data science programming tasks in an online AI-enabled learning environment. The study compared two conditions: students solving problems in DaTu with AI assistance (N = 45) and those without AI assistance (N = 44). Findings reveal no difference in hint-seeking behavior between the two groups, suggesting that the integration of AI assistance has minimal impact on how individuals seek help. The findings also suggest that the availability of AI assistance does not necessarily reduce learners’ reliance on support strategies (such as hints). The current study advances data science education and research by exploring the influence of AI assistance during complex data science problem-solving. We discuss implications and identify paths for future research.
{"title":"Integrating generative AI in data science programming: Group differences in hint requests","authors":"Tenzin Doleck, Pedram Agand, Dylan Pirrotta","doi":"10.1016/j.chbah.2024.100089","DOIUrl":"10.1016/j.chbah.2024.100089","url":null,"abstract":"<div><p>Generative AI applications have increasingly gained visibility in recent educational literature. Yet less is known about how access to generative tools, such as ChatGPT, influences help-seeking during complex problem-solving. In this paper, we aim to advance the understanding of learners' use of a support strategy (hints) when solving data science programming tasks in an online AI-enabled learning environment. The study compared two conditions: students solving problems in <em>DaTu</em> with AI assistance (<em>N</em> = 45) and those without AI assistance (<em>N</em> = 44). Findings reveal no difference in hint-seeking behavior between the two groups, suggesting that the integration of AI assistance has minimal impact on how individuals seek help. The findings also suggest that the availability of AI assistance does not necessarily reduce learners’ reliance on support strategies (such as hints). The current study advances data science education and research by exploring the influence of AI assistance during complex data science problem-solving. We discuss implications and identify paths for future research.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100089"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000495/pdfft?md5=d2364f734cd75435ea2c327fb376b30e&pid=1-s2.0-S2949882124000495-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142230120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI as decision aid or delegated agent: The effects of trust dimensions on the adoption of AI digital agents
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100094 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100094
Aman Pathak, Veena Bansal
AI digital agents may act as decision aids or as delegated agents. A decision-aid agent helps a user make decisions, whereas a delegated agent makes decisions on behalf of the consumer. The study examines the factors affecting the intention to adopt AI digital agents as decision aids and delegated agents. The domain of study is the banking, financial services, and insurance (BFSI) sector. Due to the unique characteristics of AI digital agents, trust has been identified as an important construct in the extant literature. The study decomposed trust into social, cognitive, and affective trust. We employed PLS-SEM and fsQCA to examine the factors drawn from the literature. The findings from PLS-SEM suggest that perceived AI quality affects cognitive trust, perceived usefulness affects affective trust, and social trust affects both cognitive and affective trust. The intention to adopt AI as a decision aid is influenced by affective and cognitive trust, while the intention to adopt AI as a delegated agent is influenced by social, cognitive, and affective trust. The fsQCA findings indicate that the combination of AI quality, perceived usefulness, and trust (social, cognitive, and affective) best explains the intention to adopt AI as a decision aid and as a delegated agent.
{"title":"AI as decision aid or delegated agent: The effects of trust dimensions on the adoption of AI digital agents","authors":"Aman Pathak, Veena Bansal","doi":"10.1016/j.chbah.2024.100094","DOIUrl":"10.1016/j.chbah.2024.100094","url":null,"abstract":"<div><div>AI digital agents may act as decision-aid or as delegated agents. A decision-aid agent helps a user make decisions, whereas a delegated agent makes decisions on behalf of the consumer. The study determines the factors affecting the adoption intention of AI digital agents as decision aids and delegated agents. The domain of study is banking, financial services, and Insurance sector (BFSI). Due to the unique characteristics of AI digital agents, trust has been identified as an important construct in the extant literature. The study decomposed trust into social, cognitive, and affective trust. We incorporated PLS-SEM and fsQCA to examine the factors drawn from the literature. The findings from PLS-SEM suggest that perceived AI quality affects cognitive trust, perceived usefulness affects affective trust, and social trust affects cognitive and affective trust. The intention to adopt AI as a decision-aid is influenced by affective and cognitive trust. The intention to adopt AI as delegated agents is influenced by social, cognitive, and affective trust. FsQCA findings indicate that combining AI quality, perceived usefulness, and trust (social, cognitive, and affective) best explains the intention to adopt AI as a decision aid and delegated agents.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100094"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142426883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Behavioral and neural evidence for the underestimated attractiveness of faces synthesized using an artificial neural network
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100104 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100104
Satoshi Nishida
Recent advancements in artificial intelligence (AI) have not eased human anxiety about AI. If such anxiety diminishes human preference for AI-synthesized visual information, the preference should be reduced solely by the belief that the information is synthesized by AI, independently of its appearance. This study tested this hypothesis by asking experimental participants to rate the attractiveness of faces synthesized by an artificial neural network, under the false instruction that some faces were real and others were synthetic. This experimental design isolated the impact of belief on attractiveness ratings from the actual facial appearance. Brain responses were also recorded with fMRI to examine the neural basis of this belief effect. The results showed that participants rated faces significantly lower when they believed them to be synthetic, and this belief altered the responsiveness of fMRI signals to facial attractiveness in the right fusiform cortex. These findings support the notion that human preference for visual information is reduced solely due to the belief that the information is synthesized by AI, suggesting that AI and robot design should focus not only on enhancing appearance but also on alleviating human anxiety about them.
{"title":"Behavioral and neural evidence for the underestimated attractiveness of faces synthesized using an artificial neural network","authors":"Satoshi Nishida","doi":"10.1016/j.chbah.2024.100104","DOIUrl":"10.1016/j.chbah.2024.100104","url":null,"abstract":"<div><div>Recent advancements in artificial intelligence (AI) have not eased human anxiety about AI. If such anxiety diminishes human preference for AI-synthesized visual information, the preference should be reduced solely by the belief that the information is synthesized by AI, independently of its appearance. This study tested this hypothesis by asking experimental participants to rate the attractiveness of faces synthesized by an artificial neural network, under the false instruction that some faces were real and others were synthetic. This experimental design isolated the impact of belief on attractiveness ratings from the actual facial appearance. Brain responses were also recorded with fMRI to examine the neural basis of this belief effect. The results showed that participants rated faces significantly lower when they believed them to be synthetic, and this belief altered the responsiveness of fMRI signals to facial attractiveness in the right fusiform cortex. These findings support the notion that human preference for visual information is reduced solely due to the belief that the information is synthesized by AI, suggesting that AI and robot design should focus not only on enhancing appearance but also on alleviating human anxiety about them.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100104"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How voice and helpfulness shape perceptions in human–agent teams
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100101 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100101
Samuel Westby , Richard J. Radke , Christoph Riedl , Brook Foucault Welles
Voice assistants are increasingly prevalent, from personal devices to team environments. This study explores how voice type and contribution quality influence human–agent team performance and perceptions of anthropomorphism, animacy, intelligence, and trustworthiness. By manipulating both, we reveal mechanisms of perception and clarify ambiguity in previous work. Our results show that the human resemblance of a voice assistant’s voice negatively interacts with the helpfulness of an agent’s contribution to flip its effect on perceived anthropomorphism and perceived animacy. This means human teammates interpret the agent’s contributions differently depending on its voice. Our study found no significant effect of voice on perceived intelligence, trustworthiness, or team performance. Instead, differences in these measures are driven by manipulating the helpfulness of the agent. These findings suggest that function matters more than form when designing agents for high-performing human–agent teams, but controlling perceptions of anthropomorphism and animacy can be unpredictable even with high human resemblance.
{"title":"How voice and helpfulness shape perceptions in human–agent teams","authors":"Samuel Westby , Richard J. Radke , Christoph Riedl , Brook Foucault Welles","doi":"10.1016/j.chbah.2024.100101","DOIUrl":"10.1016/j.chbah.2024.100101","url":null,"abstract":"<div><div>Voice assistants are increasingly prevalent, from personal devices to team environments. This study explores how voice type and contribution quality influence human–agent team performance and perceptions of anthropomorphism, animacy, intelligence, and trustworthiness. By manipulating both, we reveal mechanisms of perception and clarify ambiguity in previous work. Our results show that the human resemblance of a voice assistant’s voice negatively interacts with the helpfulness of an agent’s contribution to flip its effect on perceived anthropomorphism and perceived animacy. This means human teammates interpret the agent’s contributions differently depending on its voice. Our study found no significant effect of voice on perceived intelligence, trustworthiness, or team performance. We find differences in these measures are caused by manipulating the helpfulness of an agent. These findings suggest that function matters more than form when designing agents for high-performing human–agent teams, but controlling perceptions of anthropomorphism and animacy can be unpredictable even with high human resemblance.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100101"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Are humanoid robots perceived as mindless mannequins?
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100105 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100105
Emmanuele Tidoni , Emily S. Cross , Richard Ramsey , Michele Scandola
The shape and texture of humans and humanoid robots provide perceptual information that helps us categorise these stimuli appropriately. However, it remains unclear which features and attributes drive the assignment into human and non-human categories. To explore this issue, we ran a series of five preregistered experiments in which we presented stimuli that varied in appearance (i.e., humans, humanoid robots, non-human primates, mannequins, hammers, musical instruments) and asked participants to complete a match-to-category task (Experiments 1-2-3), a priming task (Experiment 4), or to rate each category along four dimensions (i.e., similarity, liveliness, body association, action association; Experiment 5). Results indicate that categorising human bodies and humanoid robots requires integrating analyses of both physical shape and visual texture (i.e., we cannot rely on visual shape alone to identify a humanoid robot). Further, our behavioural findings suggest that human bodies may be represented as a special living category, separate from non-human animal entities (i.e., primates). Moreover, the results also suggest that categorising humans and humanoid robots may rely on networks of information typically associated with human beings and inanimate objects, respectively (e.g., humans can play musical instruments and have a mind, whereas robots do not play musical instruments and do not have a human mind). Overall, the paradigms introduced here offer new avenues for studying the perception of human and artificial agents, and how experience with humanoid robots may change the perception of humanness along a robot–human continuum.
{"title":"Are humanoid robots perceived as mindless mannequins?","authors":"Emmanuele Tidoni , Emily S. Cross , Richard Ramsey , Michele Scandola","doi":"10.1016/j.chbah.2024.100105","DOIUrl":"10.1016/j.chbah.2024.100105","url":null,"abstract":"<div><div>The shape and texture of humans and humanoid robots provide perceptual information that help us to appropriately categorise these stimuli. However, it remains unclear which features and attributes are driving the assignment into human and non-human categories. To explore this issue, we ran a series of five preregistered experiments wherein we presented stimuli that varied in their appearance (i.e., humans, humanoid robots, non-human primates, mannequins, hammers, musical instruments) and asked participants to complete a match-to-category task (Experiments 1-2-3), a priming task (Experiment 4), or to rate each category along four dimensions (i.e., similarity, liveliness, body association, action association; Experiment 5). Results indicate that categorising human bodies and humanoid robots requires the integration of both the analyses of their physical shape and visual texture (i.e., to identify a humanoid robot we cannot only rely on its visual shape). Further, our behavioural findings suggest that human bodies may be represented as a special living category separate from non-human animal entities (i.e., primates). Moreover, results also suggest that categorising humans and humanoid robots may rely on a network of information typically associated to human being and inanimate objects respectively (e.g., humans can play musical instruments and have a mind while robots do not play musical instruments and do have not a human mind). Overall, the paradigms introduced here offer new avenues through which to study the perception of human and artificial agents, and how experiences with humanoid robots may change the perception of humanness along a robot—human continuum.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100105"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142700897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The great AI witch hunt: Reviewers’ perception and (Mis)conception of generative AI in research writing
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100095 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100095
Hilda Hadan, Derrick M. Wang, Reza Hadi Mogavi, Joseph Tu, Leah Zhang-Kennedy, Lennart E. Nacke
Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers consistently struggled to distinguish between human and AI-augmented writing but their judgements remained consistent. They noted the loss of a “human touch” and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluations of submissions, regardless of any personal biases towards GenAI. The quality of the research itself should remain a priority in reviews, regardless of any preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI's assistance.
{"title":"The great AI witch hunt: Reviewers’ perception and (Mis)conception of generative AI in research writing","authors":"Hilda Hadan, Derrick M. Wang, Reza Hadi Mogavi, Joseph Tu, Leah Zhang-Kennedy, Lennart E. Nacke","doi":"10.1016/j.chbah.2024.100095","DOIUrl":"10.1016/j.chbah.2024.100095","url":null,"abstract":"<div><div>Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers consistently struggled to distinguish between human and AI-augmented writing but their judgements remained consistent. They noted the loss of a “human touch” and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluations of submissions, regardless of any personal biases towards GenAI. The quality of the research itself should remain a priority in reviews, regardless of any preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI's assistance.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100095"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142587264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Differences between human and artificial/augmented intelligence in medicine
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100084 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100084
Scott Monteith , Tasha Glenn , John R. Geddes , Eric D. Achtyes , Peter C. Whybrow , Michael Bauer
The emphasis on artificial intelligence (AI) is rapidly increasing across many diverse aspects of society. This manuscript discusses some of the key topics related to the expansion of AI. These include a comparison of the unique cognitive capabilities of human intelligence with AI, and the potential risks of using AI in clinical medicine. General public attitudes towards AI are also discussed, including patient perspectives. As the promotion of AI in high-risk situations such as clinical medicine expands, the limitations, risks, and benefits of AI need to be better understood.
{"title":"Differences between human and artificial/augmented intelligence in medicine","authors":"Scott Monteith , Tasha Glenn , John R. Geddes , Eric D. Achtyes , Peter C. Whybrow , Michael Bauer","doi":"10.1016/j.chbah.2024.100084","DOIUrl":"10.1016/j.chbah.2024.100084","url":null,"abstract":"<div><p>The emphasis on artificial intelligence (AI) is rapidly increasing across many diverse aspects of society. This manuscript discusses some of the key topics related to the expansion of AI. These include a comparison of the unique cognitive capabilities of human intelligence with AI, and the potential risks of using AI in clinical medicine. The general public attitudes towards AI are also discussed, including patient perspectives. As the promotion of AI in high-risk situations such as clinical medicine expands, the limitations, risks and benefits of AI need to be better understood.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100084"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000446/pdfft?md5=de42c1e5a75fbb492e2bc6a082094c1f&pid=1-s2.0-S2949882124000446-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141853511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding AI Chatbot adoption in education: PLS-SEM analysis of user behavior factors
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100098 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100098
Md Rabiul Hasan , Nahian Ismail Chowdhury , Md Hadisur Rahman , Md Asif Bin Syed , JuHyeong Ryu
The integration of Artificial Intelligence (AI) into education is a recent development, with chatbots emerging as a noteworthy addition to this transformative landscape. As online learning platforms rapidly advance, students need to adapt swiftly to excel in this dynamic environment. Consequently, understanding the acceptance of chatbots, particularly those employing Large Language Models (LLMs) such as Chat Generative Pretrained Transformer (ChatGPT), Google Bard, and other interactive AI technologies, is of paramount importance. Investigating how students accept and view chatbots is essential to directing their incorporation into Industry 4.0 and enabling a smooth transition to Industry 5.0's customized and human-centered methodology. However, existing research on chatbots in education has overlooked key behavior-related aspects, such as Optimism, Innovativeness, Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and Accuracy, creating a significant literature gap. To address this gap, this study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to investigate the determinants of chatbot adoption in education among students, considering the Technology Readiness Index and Technology Acceptance Model. Utilizing a five-point Likert scale for data collection, we gathered a total of 185 responses, which were analyzed using R-Studio software. We established 12 hypotheses to achieve these objectives. The results showed that Optimism and Innovativeness are positively associated with Perceived Ease of Use and Perceived Usefulness. Conversely, Discomfort and Insecurity negatively impact Perceived Ease of Use, with only Insecurity negatively affecting Perceived Usefulness. Furthermore, Perceived Ease of Use, Perceived Usefulness, Interaction and Engagement, Accuracy, and Responsiveness all significantly contribute to the Intention to Use, whereas Transparency and Ethics have a negative impact on Intention to Use. Finally, Intention to Use mediates the relationships between Interaction, Engagement, Accuracy, Responsiveness, Transparency, Ethics, and Perception of Decision Making. These findings provide insights for future technology designers, elucidating critical user behavior factors influencing chatbot adoption and utilization in educational contexts.
{"title":"Understanding AI Chatbot adoption in education: PLS-SEM analysis of user behavior factors","authors":"Md Rabiul Hasan , Nahian Ismail Chowdhury , Md Hadisur Rahman , Md Asif Bin Syed , JuHyeong Ryu","doi":"10.1016/j.chbah.2024.100098","DOIUrl":"10.1016/j.chbah.2024.100098","url":null,"abstract":"<div><div>The integration of Artificial Intelligence (AI) into education is a recent development, with chatbots emerging as a noteworthy addition to this transformative landscape. As online learning platforms rapidly advance, students need to adapt swiftly to excel in this dynamic environment. Consequently, understanding the acceptance of chatbots, particularly those employing Large Language Models (LLM) such as Chat Generative Pretrained Transformer (ChatGPT), Google Bard, and other interactive AI technologies, is of paramount importance. Investigating how students accept and view chatbots is essential to directing their incorporation into Industry 4.0 and enabling a smooth transition to Industry 5.0's customized and human-centered methodology. However, existing research on chatbots in education has overlooked key behavior-related aspects, such as Optimism, Innovativeness, Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and Accuracy, creating a significant literature gap. To address this gap, this study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to investigate the determinant of chatbots adoption in education among students, considering the Technology Readiness Index and Technology Acceptance Model. Utilizing a five-point Likert scale for data collection, we gathered a total of 185 responses, which were analyzed using R-Studio software. We established 12 hypotheses to achieve its objectives. The results showed that Optimism and Innovativeness are positively associated with Perceived Ease of Use and Perceived Usefulness. Conversely, Discomfort and Insecurity negatively impact Perceived Ease of Use, with only Insecurity negatively affecting Perceived Usefulness. Furthermore, Perceived Ease of Use, Perceived Usefulness, Interaction and Engagement, Accuracy, and Responsiveness all significantly contribute to the Intention to Use, whereas Transparency and Ethics have a negative impact on Intention to Use. Finally, Intention to Use mediates the relationships between Interaction, Engagement, Accuracy, Responsiveness, Transparency, Ethics, and Perception of Decision Making. These findings provide insights for future technology designers, elucidating critical user behavior factors influencing chatbots adoption and utilization in educational contexts.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100098"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Making moral decisions with artificial agents as advisors. A fNIRS study
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100096 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100096
Eve Florianne Fabre , Damien Mouratille , Vincent Bonnemains , Grazia Pia Palmiotti , Mickael Causse
Artificial Intelligence (AI) is on the verge of impacting every domain of our lives. It is increasingly being used as an advisor to assist in making decisions. The present study aimed to investigate the influence of moral arguments provided by AI-advisors (i.e., decision aid tools) on human moral decision-making and the associated neural correlates. Participants were presented with sacrificial moral dilemmas and had to make moral decisions either by themselves (i.e., baseline run) or with AI-advisors that provided utilitarian or deontological arguments (i.e., AI-advised run), while their brain activity was measured using an fNIRS device. Overall, AI-advisors significantly influenced participants. Longer response times and a decrease in right dorsolateral prefrontal cortex activity were observed in response to deontological arguments than to utilitarian arguments. Being provided with deontological arguments by machines appears to have led to a decreased appraisal of the affective response to the dilemmas. This resulted in a reduced level of utilitarianism, supposedly in an attempt to avoid behaving in a less cold-blooded way than machines and preserve their (self-)image. Taken together, these results suggest that motivational power can lead to a voluntary up- and down-regulation of affective processes during moral decision-making.
{"title":"Making moral decisions with artificial agents as advisors. A fNIRS study","authors":"Eve Florianne Fabre , Damien Mouratille , Vincent Bonnemains , Grazia Pia Palmiotti , Mickael Causse","doi":"10.1016/j.chbah.2024.100096","DOIUrl":"10.1016/j.chbah.2024.100096","url":null,"abstract":"<div><div>Artificial Intelligence (AI) is on the verge of impacting every domain of our lives. It is increasingly being used as an advisor to assist in making decisions. The present study aimed at investigating the influence of moral arguments provided by AI-advisors (i.e., decision aid tool) on human moral decision-making and the associated neural correlates. Participants were presented with sacrificial moral dilemmas and had to make moral decisions either by themselves (i.e., baseline run) or with AI-advisors that provided utilitarian or deontological arguments (i.e., AI-advised run), while their brain activity was measured using an <em>f</em>NIRS device. Overall, AI-advisors significantly influenced participants. Longer response times and a decrease in right dorsolateral prefrontal cortex activity were observed in response to deontological arguments than to utilitarian arguments. Being provided with deontological arguments by machines appears to have led to a decreased appraisal of the affective response to the dilemmas. This resulted in a reduced level of utilitarianism, supposedly in an attempt to avoid behaving in a less cold-blooded way than machines and preserve their (self-)image. Taken together, these results suggest that motivational power can led to a voluntary up- and down-regulation of affective processes along moral decision-making.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100096"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142554448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aversion against machines with complex mental abilities: The role of individual differences
Pub Date: 2024-08-01 | DOI: 10.1016/j.chbah.2024.100087 | Computers in Human Behavior: Artificial Humans, 2(2), Article 100087
Andrea Grundke , Markus Appel , Jan-Philipp Stein
Theory suggests that robots with human-like mental capabilities (i.e., high agency and experience) evoke stronger aversion than robots without these capabilities. Yet, while several studies support this prediction, there is also evidence that the mental prowess of robots could be evaluated positively, at least by some individuals. To help resolve this ambivalence, we focused on rather stable individual differences that may shape users’ responses to machines with different levels of (perceived) mental ability. Specifically, we explored four key variables as potential moderators: monotheistic religiosity, the tendency to anthropomorphize, prior attitudes towards robots, and the general affinity for complex technology. Two pre-registered online experiments (N1 = 391, N2 = 617) were conducted, using text vignettes to introduce participants to a robot with or without complex, human-like capabilities. Results showed that negative attitudes towards robots increased the relative aversion against machines with (vs. without) complex minds, whereas technology affinity weakened the difference between conditions. Results for monotheistic religiosity were mixed, while the tendency to anthropomorphize had no significant impact on the evoked aversion. Overall, we conclude that certain individual differences play an important role in perceptions of machines with complex minds and should be considered in future research.
{"title":"Aversion against machines with complex mental abilities: The role of individual differences","authors":"Andrea Grundke , Markus Appel , Jan-Philipp Stein","doi":"10.1016/j.chbah.2024.100087","DOIUrl":"10.1016/j.chbah.2024.100087","url":null,"abstract":"<div><p>Theory suggests that robots with human-like mental capabilities (i.e., high agency and experience) evoke stronger aversion than robots without these capabilities. Yet, while several studies support this prediction, there is also evidence that the mental prowess of robots could be evaluated positively, at least by some individuals. To help resolving this ambivalence, we focused on rather stable individual differences that may shape users’ responses to machines with different levels of (perceived) mental ability. Specifically, we explored four key variables as potential moderators: monotheistic religiosity, the tendency to anthropomorphize, prior attitudes towards robots, and the general affinity for complex technology. Two pre-registered online experiments (<em>N</em><sub><em>1</em></sub> = 391, <em>N</em><sub><em>2</em></sub> = 617) were conducted, using text vignettes to introduce participants to a robot with or without complex, human-like capabilities. Results showed that negative attitudes towards robots increased the relative aversion against machines with (vs. without) complex minds, whereas technology affinity weakened the difference between conditions. Results for monotheistic religiosity turned out mixed, while the tendency to anthropomorphize had no significant impact on the evoked aversion. Overall, we conclude that certain individual differences play an important role in perceptions of machines with complex minds and should be considered in future research.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100087"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000471/pdfft?md5=d427d8fd14eb2a20aa2d28b06757e636&pid=1-s2.0-S2949882124000471-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141850605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}