
Latest publications from Computers in Human Behavior: Artificial Humans

Beyond the monotonic: Enhancing human-robot interaction through affective communication
Pub Date : 2025-02-18 DOI: 10.1016/j.chbah.2025.100131
Kim Klüber , Linda Onnasch
As robots increasingly become part of human environments, their ability to convey empathy and emotional expression is critical for effective interaction. While non-verbal cues, such as facial expressions and body language, have been widely researched, the role of verbal communication, especially affective speech, has received less attention, despite being essential in many human-robot interaction scenarios. This study addresses this gap through a laboratory experiment with 157 participants, investigating how a robot's affective speech influences human perceptions and behavior. To explore the effects of varying intonation and content, we manipulated the robot's speech across three conditions: monotonic-neutral, monotonic-emotional, and expressive-emotional. Key measures included attributions of experience and agency (following the Theory of Mind), perceived trustworthiness (cognitive and affective level), and forgiveness. Additionally, the Balloon Analogue Risk Task (BART) was employed to assess dependence behavior objectively, and a teaching task with intentional robot errors was used to measure behavioral forgiveness. Our findings reveal that emotionally expressive speech enhances the robot's perceived capacity for experience (i.e., the ability to feel emotions) and increases affective trustworthiness. The results further suggest that affective content of speech, rather than intonation, is the decisive factor. Consequently, in future robotic applications, the affective content of a robot's communication may play a more critical role than the emotional tone. However, we did not find significant differences in dependence behavior or forgiveness across the varying levels of affective communication. This suggests that while affective speech can influence emotional perceptions of the robot, it does not necessarily alter behavior.
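As a minimal sketch of how such a three-condition comparison can be analyzed, the snippet below runs a one-way ANOVA on simulated 7-point affective-trust ratings. The group means, spreads, and cell sizes are invented for illustration and are not the study's data.

```python
# Minimal sketch (not the authors' analysis code): compare affective-trust
# ratings across the three speech conditions with a one-way ANOVA.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical 7-point affective-trust ratings per condition (157 total)
monotonic_neutral    = rng.normal(3.8, 1.0, 52)
monotonic_emotional  = rng.normal(4.4, 1.0, 52)
expressive_emotional = rng.normal(4.5, 1.0, 53)

f_stat, p_value = stats.f_oneway(monotonic_neutral,
                                 monotonic_emotional,
                                 expressive_emotional)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```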
{"title":"Beyond the monotonic: Enhancing human-robot interaction through affective communication","authors":"Kim Klüber ,&nbsp;Linda Onnasch","doi":"10.1016/j.chbah.2025.100131","DOIUrl":"10.1016/j.chbah.2025.100131","url":null,"abstract":"<div><div>As robots increasingly become part of human environments, their ability to convey empathy and emotional expression is critical for effective interaction. While non-verbal cues, such as facial expressions and body language, have been widely researched, the role of verbal communication - especially affective speech - has received less attention, despite being essential in many human-robot interaction scenarios. This study addresses this gap through a laboratory experiment with 157 participants, investigating how a robot's affective speech influences human perceptions and behavior. To explore the effects of varying intonation and content, we manipulated the robot's speech across three conditions: monotonic-neutral, monotonic-emotional, and expressive-emotional. Key measures included attributions of experience and agency (following the Theory of Mind), perceived trustworthiness (cognitive and affective level), and forgiveness. Additionally, the Balloon Analogue Risk Task (BART) was employed to assess dependence behavior objectively, and a teaching task with intentional robot errors was used to measure behavioral forgiveness. Our findings reveal that emotionally expressive speech enhances the robot's perceived capacity for experience (i.e., the ability to feel emotions) and increases affective trustworthiness. The results further suggest that affective content of speech, rather than intonation, is the decisive factor. Consequently, in future robotic applications, the affective content of a robot's communication may play a more critical role than the emotional tone. However, we did not find significant differences in dependence behavior or forgiveness across the varying levels of affective communication. This suggests that while affective speech can influence emotional perceptions of the robot, it does not necessarily alter behavior.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100131"},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143454806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
More is more: Addition bias in large language models
Pub Date : 2025-02-18 DOI: 10.1016/j.chbah.2025.100129
Luca Santagata , Cristiano De Nobili
In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over subtractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, MathΣtral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding letters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B produced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others' writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have implications when LLMs are used on a large scale. Addition bias might increase resource use and environmental impact, leading to higher economic costs due to overconsumption and waste. This bias should be considered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.
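A minimal sketch of one way to quantify addition bias: classify each model revision as additive when the output grows, then report the proportion of additive edits. The pairs below are hypothetical; the paper's tasks (palindromes, Lego towers, summaries) each used their own task-specific scoring.

```python
# Minimal sketch: score a set of (original, revised) pairs for addition bias
# by checking whether the revision grows the token count. Hypothetical data.
def is_additive(original: str, revised: str) -> bool:
    """Treat a revision as additive when it grows the word count."""
    return len(revised.split()) > len(original.split())

# Hypothetical (original, revised) pairs collected from a model
pairs = [
    ("level", "deified level deified"),   # material added
    ("a very short note", "a note"),      # material removed
    ("racecar", "racecar madam racecar"), # material added
]

additive_share = sum(is_additive(o, r) for o, r in pairs) / len(pairs)
print(f"Additive revisions: {additive_share:.0%}")
```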
{"title":"More is more: Addition bias in large language models","authors":"Luca Santagata ,&nbsp;Cristiano De Nobili","doi":"10.1016/j.chbah.2025.100129","DOIUrl":"10.1016/j.chbah.2025.100129","url":null,"abstract":"<div><div>In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over sub-tractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, Math<em>Σ</em>tral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding let-ters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B pro-duced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others’ writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have im-plications when LLMs are used on a large scale. Addittive bias might increase resource use and environmental impact, leading to higher eco-nomic costs due to overconsumption and waste. This bias should be con-sidered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100129"},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143454807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
From robot to android to humanoid: Does self-referencing influence uncanny valley perceptions of mechanic or anthropomorphic face morphs?
Pub Date : 2025-02-14 DOI: 10.1016/j.chbah.2025.100130
William D. Weisman, Jorge Peña
To examine how the self-referencing effect influences uncanny valley perceptions, this study (N = 188) employed an 11 (morph level: 0%–100% human-likeness in 10% increments along a mechanic-to-human face morph continuum) × 2 (self-face vs. stranger-face morphs) within-subjects repeated-measures design. Contrary to expectations, self-morphs only enhanced similarity identification and resource allocation. In contrast, anthropomorphic morphs increased human perception, likability, resource allocation, mind perception of experience and agency, and similarity identification, while reducing eerie perceptions relative to mechanical morphs. Individual differences in science fiction and technology affinity influenced responses. Higher affinity participants attributed greater mind perception and showed increased acceptance of synthetic faces. These findings reinforce anthropomorphism as the primary driver of uncanny valley responses, while self-related stimuli exert a limited yet reliable influence on select social perception outcomes. The study also highlighted the role of individual differences in shaping responses to artificial faces.
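A minimal sketch of the weighting scheme behind such a continuum, assuming two aligned images represented as arrays: linear blending in 10% steps from fully mechanic to fully human. The study's stimuli were produced with face-morphing software; plain pixel blending here only illustrates the 11 weight levels.

```python
# Minimal sketch: build an 11-level morph continuum (0% to 100% human-likeness
# in 10% increments) by linearly blending two aligned images. Placeholder
# arrays stand in for the actual face photographs.
import numpy as np

robot_face = np.zeros((64, 64, 3), dtype=float)  # placeholder "mechanic" image
human_face = np.ones((64, 64, 3), dtype=float)   # placeholder "human" image

morphs = []
for step in range(11):           # human-likeness weights 0.0, 0.1, ..., 1.0
    w = step / 10
    morphs.append((1 - w) * robot_face + w * human_face)

# Mean brightness tracks the blending weight in this toy example
print([round(m.mean(), 1) for m in morphs])
```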
{"title":"From robot to android to humanoid: Does self-referencing influence uncanny valley perceptions of mechanic or anthropomorphic face morphs?","authors":"William D. Weisman,&nbsp;Jorge Peña","doi":"10.1016/j.chbah.2025.100130","DOIUrl":"10.1016/j.chbah.2025.100130","url":null,"abstract":"<div><div>To examine how the self-referencing effect influences uncanny valley perceptions, this study (N = 188) employed an 11-level mechanic-to-human face morph continuum (ranging from 0% to 100% human-likeness in 10% increments) by 2 (self-face vs. stranger-face morphs) within-subjects repeated measures design. Contrary to expectations, self-morphs only enhanced similarity identification and resource allocation. In contrast, anthropomorphic morphs increased human perception, likability, resource allocation, mind perception of experience and agency, and similarity identification, while reducing eerie perceptions relative to mechanical morphs. Individual differences in science fiction and technology affinity influenced responses. Higher affinity participants attributed greater mind perception and showed increased acceptance of synthetic faces. These findings reinforce anthropomorphism as the primary driver of uncanny valley responses, while self-related stimuli exert a limited yet reliable influence on select social perception outcomes. The study also highlighted the role of individual differences in shaping responses to artificial faces.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100130"},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Using AI chatbots (e.g., CHATGPT) in seeking health-related information online: The case of a common ailment
Pub Date : 2025-02-04 DOI: 10.1016/j.chbah.2025.100127
Pouyan Esmaeilzadeh , Mahed Maddah , Tala Mirzaei
In the age of AI, healthcare practices and patient-provider communications can be significantly transformed via AI-based tools and systems that distribute intelligence on the Internet. This study employs a quantitative approach to explore the public value perceptions of using conversational AI (e.g., CHATGPT) to find health-related information online under non-emergency conditions related to a common ailment. Using structural equation modeling on survey data collected from 231 respondents in the US, our study examines the hypotheses linking hedonic and utilitarian values, user satisfaction, willingness to reuse conversational AI, and intentions to take recommended actions. The results show that both hedonic and utilitarian values strongly influence users' satisfaction with conversational AI. The utilitarian values of ease of use, accuracy, relevance, completeness, timeliness, clarity, variety, timesaving, cost-effectiveness, and privacy concern, and the hedonic values of emotional impact and user engagement are significant predictors of satisfaction with conversational AI. Moreover, satisfaction directly influences users' continued intention to use and their willingness to adopt generated results and medical advice. Also, the mediating effect of satisfaction is crucial as it helps to explain the underlying mechanisms of the relationship between value perceptions and desired use behavior. The study emphasizes considering not only the instrumental benefits but also the enjoyment derived from interacting with conversational AI for healthcare purposes. We believe that this study offers valuable theoretical and practical implications for stakeholders interested in advancing the application of AI chatbots for health information provision. Our study provides insights into AI research by explaining the multidimensional nature of public value grounded in functional and emotional gratification. The practical contributions of this study can be useful for developers and designers of conversational AI, as they can focus on improving the design features of AI chatbots to meet users' expectations, preferences, and satisfaction and promote their adoption and continued use.
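As a simplified illustration of the mediation logic (value perceptions to satisfaction to reuse intention), the sketch below estimates an indirect effect with two OLS regressions on simulated data. The paper itself used full structural equation modeling; the coefficients and variable names here are invented.

```python
# Minimal regression sketch of a mediation path: value -> satisfaction -> reuse.
# Simulated data; the study used SEM, and bootstrap CIs are omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 231
value = rng.normal(0, 1, n)                       # composite value perception
satisfaction = 0.6 * value + rng.normal(0, 1, n)  # a-path
reuse = 0.5 * satisfaction + 0.1 * value + rng.normal(0, 1, n)  # b- and c'-paths

df = pd.DataFrame({"value": value, "satisfaction": satisfaction, "reuse": reuse})

a = smf.ols("satisfaction ~ value", df).fit().params["value"]
b = smf.ols("reuse ~ satisfaction + value", df).fit().params["satisfaction"]
print(f"indirect effect a*b = {a * b:.3f}")
```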
{"title":"Using AI chatbots (e.g., CHATGPT) in seeking health-related information online: The case of a common ailment","authors":"Pouyan Esmaeilzadeh ,&nbsp;Mahed Maddah ,&nbsp;Tala Mirzaei","doi":"10.1016/j.chbah.2025.100127","DOIUrl":"10.1016/j.chbah.2025.100127","url":null,"abstract":"<div><div>In the age of AI, healthcare practices and patient-provider communications can be significantly transformed via AI-based tools and systems that distribute Intelligence on the Internet. This study employs a quantitative approach to explore the public value perceptions of using conversational AI (e.g., CHATGPT) to find health-related information online under non-emergency conditions related to a common ailment. Using structural equation modeling on survey data collected from 231 respondents in the US, our study examines the hypotheses linking hedonic and utilitarian values, user satisfaction, willingness to reuse conversational AI, and intentions to take recommended actions. The results show that both hedonic and utilitarian values strongly influence users' satisfaction with conversational AI. The utilitarian values of ease of use, accuracy, relevance, completeness, timeliness, clarity, variety, timesaving, cost-effectiveness, and privacy concern, and the hedonic values of emotional impact and user engagement are significant predictors of satisfaction with conversational AI. Moreover, satisfaction directly influences users' continued intention to use and their willingness to adopt generated results and medical advice. Also, the mediating effect of satisfaction is crucial as it helps to understand the underlying mechanisms of the relationship between value perceptions and desired use behavior. The study emphasizes considering not only the instrumental benefits but also the enjoyment derived from interacting with conversational AI for healthcare purposes. We believe that this study offers valuable theoretical and practical implications for stakeholders interested in advancing the application of AI chatbots for health information provision. Our study provides insights into AI research by explaining the multidimensional nature of public value grounded in functional and emotional gratification. The practical contributions of this study can be useful for developers and designers of conversational AI, as they can focus on improving the design features of AI chatbots to meet users’ expectations, preferences, and satisfaction and promote their adoption and continued use.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100127"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
AI anxiety: Explication and exploration of effect on state anxiety when interacting with AI doctors
Pub Date : 2025-02-04 DOI: 10.1016/j.chbah.2025.100128
Hyun Yang , S. Shyam Sundar
People often have anxiety toward artificial intelligence (AI) due to a lack of transparency about its operation. This study explicates this anxiety by conceptualizing it as a trait and examines its effect. It hypothesizes that users with higher AI (trait) anxiety would have higher state anxiety when interacting with an AI doctor, compared to those with lower AI (trait) anxiety, in part because it is a deviation from the status quo of being treated by a human doctor. As a solution, it hypothesizes that an AI doctor's explanations for its diagnosis would relieve patients' state anxiety. Furthermore, based on the status quo bias theory and an adaptation of the theory of interactive media effects (TIME) for the study of human-AI interaction (HAII), this study hypothesizes that the affect heuristic triggered by state anxiety would mediate the causal relationship between the source cue of a doctor and user experience (UX) as well as behavioral intentions. A pre-registered 2 (human vs. AI) × 2 (explainable vs. non-explainable) experiment (N = 346) was conducted to test the hypotheses. Data revealed that AI (trait) anxiety is significantly associated with state anxiety. Additionally, data showed that an AI doctor's explanations for its diagnosis significantly reduce state anxiety in patients with high AI (trait) anxiety but increase state anxiety in those with low AI (trait) anxiety, but these effects of explanations are not significant among patients who interact with a human doctor. Theoretical and design implications of these findings and limitations of this study are discussed.
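A minimal sketch of the 2 × 2 analysis on simulated anxiety scores: a two-way ANOVA with the source × explainability interaction. The effect sizes are invented, and the preregistered study additionally modeled trait AI anxiety, which this toy omits.

```python
# Minimal sketch: two-way ANOVA (source x explainability) on simulated
# state-anxiety scores; not the preregistered analysis itself.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
cells = [(s, e) for s in ("human", "ai") for e in ("yes", "no")]
rows = [{"source": s, "explain": e,
         # invented cell means: AI raises anxiety; explanations offset it
         "anxiety": rng.normal(3.0 + 0.4 * (s == "ai")
                               - 0.3 * (s == "ai" and e == "yes"), 1.0)}
        for s, e in cells for _ in range(86)]
df = pd.DataFrame(rows)

model = smf.ols("anxiety ~ C(source) * C(explain)", df).fit()
print(anova_lm(model, typ=2))
```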
{"title":"AI anxiety: Explication and exploration of effect on state anxiety when interacting with AI doctors","authors":"Hyun Yang ,&nbsp;S. Shyam Sundar","doi":"10.1016/j.chbah.2025.100128","DOIUrl":"10.1016/j.chbah.2025.100128","url":null,"abstract":"<div><div>People often have anxiety toward artificial intelligence (AI) due to lack of transparency about its operation. This study explicates this anxiety by conceptualizing it as a trait, and examines its effect. It hypothesizes that users with higher AI (trait) anxiety would have higher state anxiety when interacting with an AI doctor, compared to those with lower AI (trait) anxiety, in part because it is a deviation from the status quo of being treated by a human doctor. As a solution, it hypothesizes that an AI doctor's explanations for its diagnosis would relieve patients' state anxiety. Furthermore, based on the status quo bias theory and an adaptation of the theory of interactive media effects (TIME) for the study of human-AI interaction (HAII), this study hypothesizes that the affect heuristic triggered by state anxiety would mediate the causal relationship between the source cue of a doctor and user experience (UX) as well as behavioral intentions. A pre-registered 2 (human vs. AI) x 2 (explainable vs. non-explainable) experiment (<em>N</em> = 346) was conducted to test the hypotheses. Data revealed that AI (trait) anxiety is significantly associated with state anxiety. Additionally, data showed that an AI doctor's explanations for its diagnosis significantly reduce state anxiety in patients with high AI (trait) anxiety but increase state anxiety in those with low AI (trait) anxiety, but these effects of explanations are not significant among patients who interact with a human doctor. Theoretical and design implications of these findings and limitations of this study are discussed.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100128"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143376434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Reevaluating personalization in AI-powered service chatbots: A study on identity matching via few-shot learning
Pub Date : 2025-02-04 DOI: 10.1016/j.chbah.2025.100126
Jan Blömker, Carmen-Maria Albrecht
This study explores the potential of AI-based few-shot learning in creating distinct service chatbot identities (i.e., based on gender and personality). Further, it examines the impact of customer-chatbot identity congruity on perceived enjoyment, usefulness, ease of use, and future chatbot usage intention. A scenario-based online experiment with a 4 (Chatbot identity: extraverted vs. introverted vs. male vs. female) × 2 (Congruity: matching vs. mismatching) between-subjects design with N = 475 participants was conducted. The results confirmed that customers could distinguish between different chatbot identities created via few-shot learning. Contrary to the initial hypothesis, gender-based personalization led to a stronger future chatbot usage intention than personalization based on personality traits. This finding challenges the assumption that an increased depth of personalization is inherently more effective. Customer-chatbot identity congruity did not significantly impact future chatbot usage intention, questioning existing beliefs about the benefits of identity matching. Perceived enjoyment and perceived usefulness mediated the relationship between chatbot identity and future chatbot usage intention, while perceived ease of use did not. High levels of perceived enjoyment and usefulness were strong predictors for the future chatbot usage intention. Thus, while few-shot learning effectively creates distinct chatbot identities, an increased depth of personalization and identity matching do not significantly influence future chatbot usage intentions. Practitioners should prioritize enhancing perceived enjoyment and usefulness in chatbot interactions to encourage future chatbot use.
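A minimal sketch of how a few-shot chatbot identity can be assembled in the common chat-completion message format. The system line and example exchanges below are placeholders, not the prompts used in the study, and the actual model call is omitted.

```python
# Minimal sketch: construct an "extraverted" service-chatbot identity from a
# system instruction plus few-shot demonstrations. Placeholder content only.
EXTRAVERT_SHOTS = [
    ("Where is my parcel?",
     "Great question! Let's track it down together right away!"),
    ("Can I return this?",
     "Absolutely, happy to help! Returns are super easy, here's how!"),
]

def build_messages(user_query: str) -> list[dict]:
    messages = [{"role": "system",
                 "content": "You are an enthusiastic, outgoing service chatbot."}]
    for question, answer in EXTRAVERT_SHOTS:  # few-shot demonstrations
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_query})
    return messages

print(build_messages("Do you ship to Canada?"))
```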
{"title":"Reevaluating personalization in AI-powered service chatbots: A study on identity matching via few-shot learning","authors":"Jan Blömker,&nbsp;Carmen-Maria Albrecht","doi":"10.1016/j.chbah.2025.100126","DOIUrl":"10.1016/j.chbah.2025.100126","url":null,"abstract":"<div><div>This study explores the potential of AI-based few-shot learning in creating distinct service chatbot identities (i.e., based on gender and personality). Further, it examines the impact of customer-chatbot identity congruity on perceived enjoyment, usefulness, ease of use, and future chatbot usage intention. A scenario-based online experiment with a 4 (Chatbot identity: extraverted vs. introverted vs. male vs. female) × 2 (Congruity: matching vs. mismatching) between-subjects design with <em>N</em> = 475 participants was conducted. The results confirmed that customers could distinguish between different chatbot identities created via few-shot learning. Contrary to the initial hypothesis, gender-based personalization led to a stronger future chatbot usage intention than personalization based on personality traits. This finding challenges the assumption that an increased depth of personalization is inherently more effective. Customer-chatbot identity congruity did not significantly impact future chatbot usage intention, questioning existing beliefs about the benefits of identity matching. Perceived enjoyment and perceived usefulness mediated the relationship between chatbot identity and future chatbot usage intention, while perceived ease of use did not. High levels of perceived enjoyment and usefulness were strong predictors for the future chatbot usage intention. Thus, while few-shot learning effectively creates distinct chatbot identities, an increased depth of personalization and identity matching do not significantly influence future chatbot usage intentions. Practitioners should prioritize enhancing perceived enjoyment and usefulness in chatbot interactions to encourage future chatbot use.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100126"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Learning through AI-clones: Enhancing self-perception and presentation performance
Pub Date : 2025-02-03 DOI: 10.1016/j.chbah.2025.100117
Qingxiao Zheng , Zhuoer Chen , Yun Huang
This study examines the impact of AI-generated digital clones with self-images (AI-clones) on enhancing perceptions and skills in online presentations. A mixed-design experiment with 44 international students compared self-recording videos (self-recording group) to AI-clone videos (AI-clone group) for online English presentation practice. AI-clone videos were generated using voice cloning, face swapping, lip-syncing, and body-language simulation, refining the repetition, filler words, and pronunciation of participants' original presentations. The results, viewed through the lens of social comparison theory, showed that AI clones functioned as positive “role models” for encouraging positive social comparisons. Regarding self-perceptions, speech qualities, and self-kindness, the self-recording group showed an increase in pronunciation satisfaction. However, the AI-clone group exhibited greater self-kindness, a wider scope of self-observation, and a meaningful transition from a corrective to an enhancive approach in self-critique. Moreover, machine-rated scores revealed immediate performance gains only within the AI-clone group. Considering individual differences, aligning interventions with participants’ regulatory focus significantly enhanced their learning experience. These findings highlight the theoretical, practical, and ethical implications of AI clones in supporting emotional and cognitive skill development.
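A minimal sketch of the within-group pre/post comparison (for example, pronunciation satisfaction before vs. after practice) as a paired t-test on simulated ratings; the study's full mixed design also crosses the self-recording and AI-clone groups.

```python
# Minimal sketch: paired t-test on simulated pre/post satisfaction ratings
# for one group of 22 participants (44 total across the two groups).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pre = rng.normal(4.0, 1.0, 22)         # hypothetical pre-practice ratings
post = pre + rng.normal(0.5, 0.8, 22)  # hypothetical post-practice ratings

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```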
{"title":"Learning through AI-clones: Enhancing self-perception and presentation performance","authors":"Qingxiao Zheng ,&nbsp;Zhuoer Chen ,&nbsp;Yun Huang","doi":"10.1016/j.chbah.2025.100117","DOIUrl":"10.1016/j.chbah.2025.100117","url":null,"abstract":"<div><div>This study examines the impact of AI-generated digital clones with self-images (AI-clones) on enhancing perceptions and skills in online presentations. A mixed-design experiment with 44 international students compared self-recording videos (self-recording group) to AI-clone videos (AI-clone group) for online English presentation practice. AI-clone videos were generated using voice cloning, face swapping, lip-syncing, and body-language simulation, refining the repetition, filler words, and pronunciation of participants' original presentations. The results, viewed through the lens of social comparison theory, showed that AI clones functioned as positive “role models” for encouraging positive social comparisons. Regarding self-perceptions, speech qualities, and self-kindness, the self-recording group showed an increase in pronunciation satisfaction. However, the AI-clone group exhibited greater self-kindness, a wider scope of self-observation, and a meaningful transition from a corrective to an enhancive approach in self-critique. Moreover, machine-rated scores revealed immediate performance gains only within the AI-clone group. Considering individual differences, aligning interventions with participants’ regulatory focus significantly enhanced their learning experience. These findings highlight the theoretical, practical, and ethical implications of AI clones in supporting emotional and cognitive skill development.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100117"},"PeriodicalIF":0.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Robots as social companions for space exploration
Pub Date : 2025-01-31 DOI: 10.1016/j.chbah.2025.100124
Matthieu J. Guitton
Space is the next frontier that humanity needs to cross to reach new developments. Yet, space exploration faces numerous challenges, especially hazards that endanger human health. While considerable effort is being made to mitigate the impact of space travel on physical health, the mental health of space travelers is also at high risk, notably due to isolation and the associated lack of meaningful social interactions. Given the social potential of artificial agents, we propose here that social robots could play the role of social partners to mitigate the impact of space travel on mental health. We explore the logic behind using robots as partners for in-space social training, and then identify the advantages of using social robots for this purpose, whether for crew members and passengers on shorter spaceflights or for potential colonists on possible future longer-term space exploration missions.
{"title":"Robots as social companions for space exploration","authors":"Matthieu J. Guitton","doi":"10.1016/j.chbah.2025.100124","DOIUrl":"10.1016/j.chbah.2025.100124","url":null,"abstract":"<div><div>Space is the next border that humanity needs to cross to reach new developments. Yet, space exploration faces numerous challenges, especially when it comes to hazard putting in danger human health. While a lot of efforts are being made to mitigate the impact of space travel on physical health, mental health of space travelers is also highly at risk, notably due to isolation and the associated lack of meaningful social interactions. Given the social potentiality of artificial agents, we propose here that social robots could play the role of social partners to mitigate the impact of space travel on mental health. We will explore the logics behind using robots as partners for in-space social training. We will then identify what are the advantages of using social robots for this purpose, either for crew members and passengers on shorter spaceflights, or for potential colons for possible future longer-term space exploration missions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100124"},"PeriodicalIF":0.0,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Contradictory attitudes toward academic AI tools: The effect of awe-proneness and corresponding self-regulation
Pub Date : 2025-01-24 DOI: 10.1016/j.chbah.2025.100123
Jiajin Tong , Yangmingxi Zhang , Yutong Li

Objective

Artificial intelligence (AI) tools are becoming increasingly popular. To better understand the connections between technology and human beings, this research examines the contradictory impacts of awe-proneness on people's attitudes toward academic AI tools and the underlying self-regulation processes. This goes beyond the small-self and self-transcendence hypotheses by clarifying and elaborating on the complex self-change that results from the successful and unsuccessful accommodations induced by awe-proneness.

Method

We conducted two studies with Chinese university students and a third study using GPT-3.5 simulations to test on a larger scale and explore age and country differences.

Results

Awe-proneness increased both satisfaction and worries about academic AI tools (Study 1, N = 252). Awe-proneness led to satisfaction via promotion and to worries via prevention (Study 2, N = 212). GPT simulation data replicated the above findings and further validated the model across age and country groups (Study 3, simulated N = 1846).

Conclusions

This research provides a new perspective for understanding the complex nature of awe-proneness and its relation to contradictory attitudes toward AI. The findings offer novel insights into the rapid application of AI from the perspective of personality psychology and may further cultivate and promote the development of awe research both in psychology and in other disciplines.
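A minimal sketch of the simulation approach in Study 3: prompt a model to answer a Likert item as a simulated participant with a given age and country, then parse the numeric answer. The prompt wording, persona fields, and parsing below are placeholders, not the study's materials; it assumes the `openai` Python package and an API key are available.

```python
# Minimal sketch: use an LLM as a simulated survey participant. Placeholder
# prompt and naive parsing; real use needs validation and error handling.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simulate_response(age: int, country: str, item: str) -> int:
    prompt = (f"You are a {age}-year-old student from {country}. "
              f"Answer on a 1-7 scale with a single digit only.\nItem: {item}")
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the study used GPT-3.5 simulations
        messages=[{"role": "user", "content": prompt}],
    )
    return int(reply.choices[0].message.content.strip()[0])

print(simulate_response(21, "China", "I often feel a sense of awe in daily life."))
```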
{"title":"Contradictory attitudes toward academic AI tools: The effect of awe-proneness and corresponding self-regulation","authors":"Jiajin Tong ,&nbsp;Yangmingxi Zhang ,&nbsp;Yutong Li","doi":"10.1016/j.chbah.2025.100123","DOIUrl":"10.1016/j.chbah.2025.100123","url":null,"abstract":"<div><h3>Objective</h3><div>Artificial intelligence (AI for short) tools become increasingly popular. To better understand the connections between technology and human beings, this research examines the contradictory impacts of awe-proneness on people's attitudes toward academic AI tools and underlying self-regulation processes, which goes beyond the small-self or self-transcendent hypotheses by further clarifying and elaborating on the complex self-change as a consequence of successful and unsuccessful accommodations induced by awe-proneness.</div></div><div><h3>Method</h3><div>We conducted two studies with Chinese university students and a third study using GPT-3.5 simulations to test on a larger scale and explore age and country differences.</div></div><div><h3>Results</h3><div>Awe-proneness increased both satisfaction and worries about academic AI tools (Study 1, <em>N</em> = 252). Awe-proneness led to satisfaction via promotion and to worries via prevention (Study 2, <em>N</em> = 212). GPT simulation data replicated the above findings and further validated the model across age and country groups (Study 3, simulated <em>N</em> = 1846).</div></div><div><h3>Conclusions</h3><div>This research provides a new perspective to understand the complex nature of awe-proneness and its relation to contradictory AI attitudes. The findings offer novel insights into the rapid application of AI from the perspective of personality psychology. It would further cultivate and promote awe research development both in psychology and in other disciplines.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100123"},"PeriodicalIF":0.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Performance rather than reputation affects humans’ trust towards an artificial agent
Pub Date : 2025-01-22 DOI: 10.1016/j.chbah.2025.100122
Fritz Becker , Celine Ina Spannagl , Jürgen Buder , Markus Huff
To succeed in teamwork with artificial agents, humans have to calibrate their trust towards agents based on information they receive about an agent before interaction (reputation information) as well as on experiences they have during interaction (agent performance). This study (N = 253) focused on the influence of a virtual agent's reputation (high/low) and actual observed performance (high/low) on a human user's behavioral trust (delegation behavior) and self-reported trust (questionnaires) in a cooperative Tetris game. The main findings suggested that agent reputation influences self-reported trust prior to interaction. However, the effect of reputation was immediately overridden by the agent's performance during the interaction. The agent's performance during the interactive task influenced delegation behavior, as well as self-reported trust measured post-interaction. Pre- to post-change in self-reported trust was significantly larger when reputation and performance were incongruent. We concluded that reputation might have had a smaller than expected influence on behavior in the presence of a novel tool that afforded exploration. Our research contributes to understanding trust and delegation dynamics, which is crucial for the design and adequate use of artificial agent team partners in a world of digital transformation.
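A toy model (not from the paper) of how observed performance can swamp a reputation-based prior: trust starts at the prior and moves toward each observed outcome with a fixed learning rate, so after a handful of trials high- and low-reputation starting points converge.

```python
# Toy trust-calibration model: reputation sets the prior, performance updates it.
def update_trust(prior: float, outcomes: list[int], lr: float = 0.3) -> float:
    trust = prior
    for success in outcomes:             # 1 = agent performed well, 0 = poorly
        trust += lr * (success - trust)  # move trust toward the observed outcome
    return trust

high_rep, low_rep = 0.8, 0.2
good_run = [1, 1, 1, 1, 1, 1]            # consistently high performance

print(round(update_trust(high_rep, good_run), 2))  # ~0.98
print(round(update_trust(low_rep, good_run), 2))   # ~0.91: priors nearly converge
```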
{"title":"Performance rather than reputation affects humans’ trust towards an artificial agent","authors":"Fritz Becker ,&nbsp;Celine Ina Spannagl ,&nbsp;Jürgen Buder ,&nbsp;Markus Huff","doi":"10.1016/j.chbah.2025.100122","DOIUrl":"10.1016/j.chbah.2025.100122","url":null,"abstract":"<div><div>To succeed in teamwork with artificial agents, humans have to calibrate their trust towards agents based on information they receive about an agent before interaction (reputation information) as well as on experiences they have during interaction (agent performance). This study (N = 253) focused on the influence of a virtual agent's reputation (high/low) and actual observed performance (high/low) on a human user's behavioral trust (delegation behavior) and self-reported trust (questionnaires) in a cooperative Tetris game. The main findings suggested that agent reputation influences self-reported trust prior to interaction. However, the effect of reputation immediately got overridden by performance of the agent during the interaction. The agent's performance during the interactive task influenced delegation behavior, as well as self-reported trust measured post-interaction. Pre-to post-change in self-reported trust was significantly larger when reputation and performance were incongruent. We concluded that reputation might have had a smaller than expected influence on behavior in the presence of a novel tool that afforded exploration. Our research contributes to understanding trust and delegation dynamics, which is crucial for the design and adequate use of artificial agent team partners in a world of digital transformation.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100122"},"PeriodicalIF":0.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0