
Latest publications from Computers in Human Behavior: Artificial Humans

Understanding successful human–AI teaming: The role of goal alignment and AI autonomy for social perception of LLM-based chatbots
Pub Date : 2025-12-19 DOI: 10.1016/j.chbah.2025.100246
Christiane Attig, Luisa Winzer, Tim Schrills, Mourad Zoubir, Maged Mortaga, Patricia Wollstadt, Christiane Wiebel-Herboth, Thomas Franke
LLM-based chatbots such as ChatGPT support collaborative, complex tasks by leveraging natural language processing to provide skills, knowledge, or resources beyond the user’s immediate capabilities. Joint activity theory suggests that effective human–AI collaboration, however, requires more than responding to verbatim prompts – it depends on aligning with the user’s underlying goal. Since prompts may not always explicitly state the goal, an effective LLM should analyze the input to approximate the intended objective before autonomously tailoring its response to align with the user’s goal. To test these assumptions, we examined the effects of LLM-based chatbots’ autonomy and goal alignment on multiple social perception metrics as key criteria for successful human–AI teaming (i.e., perceived cooperation, warmth, competence, traceability, usefulness, and trustworthiness). We conducted a scenario-based online experiment where participants (N = 182, within-subjects design) were instructed to collaborate with four different versions of an LLM-based chatbot. The overall goal of the study scenario was to detect and correct erroneous information in short encyclopedic articles, representing a prototypical knowledge work task. Four custom-instructed chatbots were provided in random order: three chatbots varying in goal alignment and AI autonomy and one chatbot serving as a control condition not fulfilling user prompts. Repeated-measures ANOVAs demonstrate that a chatbot which is able to excel in goal alignment by autonomously going beyond verbatim user prompts is perceived as superior compared to a chatbot that adheres rigidly to user prompts without adapting to implicit objectives and chatbots that fail to meet the explicit or implicit user goal. These results support the notion that AI autonomy is only perceived as beneficial as long as user goals are not undermined by the chatbot, emphasizing the importance of balancing user and AI autonomy in human-centered design of AI systems.
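For illustration, here is a minimal sketch of the kind of repeated-measures ANOVA the authors report, written in Python with statsmodels. The file name and the columns participant_id, condition, and perceived_cooperation are placeholders for this sketch, not the study's actual materials.

```python
# Minimal repeated-measures ANOVA sketch (hypothetical data layout):
# long format, one row per participant x chatbot condition.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("ratings_long.csv")  # placeholder file name

# The study ran one ANOVA per social-perception metric; shown here
# for a single hypothetical rating column.
result = AnovaRM(
    data=df,
    depvar="perceived_cooperation",  # placeholder rating column
    subject="participant_id",        # within-subjects design, N = 182
    within=["condition"],            # 4 chatbots incl. the control
).fit()
print(result)
```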
Citations: 0
Multimodal robotic storytelling integrating sound effects and background music
Pub Date : 2025-12-16 DOI: 10.1016/j.chbah.2025.100248
Sophia C. Steinhaeusser, Sophia Maier, Birgit Lugrin
Music can induce emotions and is often used to enhance emotional experiences of storytelling media, while sound effects can convey information on a story’s environmental setting. While these non-speech sounds are well-integrated into traditional media, their use in newer forms such as robotic storytelling is still developing. To address this gap, we developed guidelines for emotion-inducing music based on theoretical knowledge from music theory, psychology, and media studies, and validated them in an online perception study. Subsequently, a laboratory prestudy compared the effects of the music’s source during robotic storytelling, finding no significant differences between the robotic storyteller and an external loudspeaker. Building on these results, our main study compared storytelling with added background music, sound effects, a combination of both, and a control condition without non-speech sounds. Results showed that while subjective evaluations of presentation liking and qualitative feedback did not significantly differ, background music alone yielded the best outcomes on standardized measures, enhancing transportation, cognitive absorption, emotion induction, and objectively assessed attention-related affects. These findings support incorporating emotion-inducing background music into robotic storytelling to enhance its immersive and emotional effects.
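As a rough illustration of this kind of four-condition comparison, the sketch below runs a one-way omnibus test with scipy; the group arrays are invented placeholder scores, not the study's data, and the real analysis may have used a different design.

```python
# Hypothetical transportation scores per storytelling condition
# (invented values for illustration only).
import numpy as np
from scipy.stats import f_oneway

music = np.array([5.8, 6.1, 5.5, 6.4, 5.9])
sound_effects = np.array([5.1, 4.9, 5.4, 5.2, 5.0])
combined = np.array([5.3, 5.6, 5.0, 5.5, 5.4])
control = np.array([4.8, 5.0, 4.6, 5.1, 4.7])

f, p = f_oneway(music, sound_effects, combined, control)
print(f"F = {f:.2f}, p = {p:.4f}")  # omnibus test across the four conditions
```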
Citations: 0
The acceptability of artificial intelligence to support university students’ mental health: the role of Asian cultural values and social support
Pub Date : 2025-12-11 DOI: 10.1016/j.chbah.2025.100247
Mazel Lye, Rachael Martin, Sally Richmond
The global prevalence of mental disorders is rising, and university students are at particular risk because the heightened psychological distress of this life stage coincides with the typical age of onset of many mental disorders. Artificial intelligence (AI) has been proposed as a novel treatment approach for mental disorders, yet little is known about its acceptability when applied to mental health interventions from the perspective of university students. This study aimed to explore the role of social support and Asian cultural values, two factors linked more broadly with mental health help-seeking, in the acceptability of AI-based mental health interventions among university students. A sample of 135 Australian university students (M_age = 25.26 years, SD_age = 7.72 years) of diverse ethnicity (e.g., 43.7 % White/European, 29.6 % Asian, and 11.9 % Aboriginal and/or Torres Strait Islander) completed questionnaires on social support, Asian cultural values, and acceptability. A multiple linear regression indicated that Asian cultural values were positively associated with acceptability of AI-based mental health interventions; no support was found for an association with social support. Age and ethnicity were included in the regression model as covariates to adjust for their potential influence on acceptability. These findings provide insight into the relationship between the acceptability of AI-based mental health interventions, social support, and Asian cultural values, and demonstrate the need to understand student perspectives before implementing such interventions.
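A hedged sketch of the reported regression follows: acceptability regressed on Asian cultural values and social support, with age and ethnicity entered as covariates. The file and column names are assumptions for illustration, not the authors' variables.

```python
# Multiple linear regression with covariates (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # placeholder file name

# C(ethnicity) treats ethnicity as a categorical covariate.
model = smf.ols(
    "acceptability ~ asian_values + social_support + age + C(ethnicity)",
    data=df,
).fit()
print(model.summary())  # coefficients, p-values, R-squared
```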
Citations: 0
Perception of human-robot relationships: Professional, communicative, impersonal and emotionless
Pub Date : 2025-12-01 DOI: 10.1016/j.chbah.2025.100226
Monika Menz, Markus Huff
Humans extend social perceptions not only to other people but also to non-human entities such as pets, toys, and robots. This study investigates how individuals differentiate between various relationship partners—a familiar person, a professional person, a pet, a cuddly toy, and a social robot—across three interaction contexts: caregiving, conversation, and leisure. Using the Repertory Grid Method, 103 participants generated 811 construct pairs, which were categorized into seven psychological dimensions: Verbal Communication, Assistance and Competences, Liveness and Humanity, Emotional and Empathic Ability, Autonomy and Voluntariness, Trust and Closeness, and Physical Activity and Responsiveness. Cluster analyses revealed that in Verbal Communication and Assistance and Competences, robots were perceived similarly to human partners, indicating functional comparability. In contrast, for Liveness and Humanity and Emotional and Empathic Ability, humans clustered with pets—distinct from robots and cuddly toys—highlighting robots’ lack of perceived emotional richness and animacy. Interestingly, in Autonomy and Voluntariness and Trust and Closeness, robots were grouped with professional humans, while familiar persons, pets, and cuddly toys formed a separate cluster, suggesting that robots are seen as formal, emotionally distant partners. These findings indicate that while robots may match human partners in communicative and task-oriented domains, they are not regarded as emotionally intimate or fully animate beings. Instead, they occupy a hybrid role—competent yet impersonal—situated between tools and social agents. The study contributes to a more nuanced understanding of human-robot relationships by identifying the psychological dimensions that shape perceptions of sociality, animacy, and relational closeness with non-human partners.
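To make the clustering step concrete, here is a minimal sketch of hierarchical clustering of relationship partners by their mean ratings on the seven dimensions, using scipy. The rating matrix is invented for illustration; only the partner labels and dimension count come from the abstract.

```python
# Hierarchical (Ward) clustering of partners by mean dimension ratings.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

partners = ["familiar person", "professional", "pet", "cuddly toy", "robot"]
# Rows: partners; columns: made-up mean scores on the 7 dimensions.
ratings = np.array([
    [6.1, 5.2, 6.5, 6.3, 5.9, 6.4, 5.8],
    [6.0, 6.1, 5.8, 4.9, 5.7, 4.8, 5.5],
    [2.1, 3.0, 6.2, 5.9, 5.5, 6.1, 6.0],
    [1.5, 1.8, 2.4, 2.9, 1.9, 5.2, 1.7],
    [5.8, 5.9, 2.2, 2.5, 3.1, 4.6, 3.0],
])

Z = linkage(ratings, method="ward")              # agglomerative clustering
labels = fcluster(Z, t=3, criterion="maxclust")  # cut tree into 3 clusters
print(dict(zip(partners, labels)))
```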
Citations: 0
Assessing intercultural sensitivity in large language models: A comparative study of GPT-3.5 and GPT-4 across eight languages
Pub Date : 2025-12-01 DOI: 10.1016/j.chbah.2025.100241
Yiwen Jin, Lies Sercu, Feng Guo
As large language models (LLMs) such as ChatGPT are increasingly used across cultures and languages, concerns have arisen about their ability to respond in culturally sensitive ways. This study evaluated the intercultural sensitivity of GPT-3.5 and GPT-4 using the Intercultural Sensitivity Scale (ISS) translated into eight languages. Each model completed ten randomized iterations of the 24-item ISS per language, and the results were analyzed using descriptive statistics and three-way ANOVA. GPT-4 achieved significantly higher intercultural sensitivity scores than GPT-3.5 across all dimensions, with “respect for cultural differences” scoring highest and “interaction confidence” lowest. Significant interactions were found between model version and language, and between model version and ISS dimensions, indicating that GPT-4's improvements vary by linguistic context. Nonetheless, the interaction between language and dimensions did not yield significant results. Future research should focus on increasing the amount of training data for the less spoken languages, as well as adding rich emotional and cultural background data to improve the model's understanding of cultural norms and nuances.
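A minimal sketch of a three-way factorial ANOVA of this shape (model version x language x ISS dimension) is shown below; the file and column names are assumptions, not the authors' materials.

```python
# Three-way ANOVA sketch: score ~ model x language x dimension.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("iss_scores.csv")  # placeholder: one row per item score

model = smf.ols(
    "score ~ C(model_version) * C(language) * C(dimension)",
    data=df,
).fit()
print(anova_lm(model, typ=2))  # main effects and all interaction terms
```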
Citations: 0
Aesthetic Integrity Index (AII) for human–AI hybrid epistemology: Reconfiguring the Beholder’s Share through Xie He’s Six Principles
Pub Date : 2025-12-01 DOI: 10.1016/j.chbah.2025.100242
Rong Chang
{"title":"Aesthetic Integrity Index (AII) for human–AI hybrid epistemology: Reconfiguring the Beholder’s Share through Xie He’s Six Principles","authors":"Rong Chang","doi":"10.1016/j.chbah.2025.100242","DOIUrl":"10.1016/j.chbah.2025.100242","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100242"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Quantitative fairness—A framework for the design of equitable cybernetic societies
Pub Date : 2025-11-19 DOI: 10.1016/j.chbah.2025.100236
Kevin Riehl, Anastasios Kouvelas, Michail A. Makridis
Advancements in computer science, artificial intelligence, and control systems have catalyzed the emergence of cybernetic societies, where algorithms play a pivotal role in decision-making processes shaping nearly every aspect of human life. Automated decision-making for resource allocation has expanded into industry, government processes, critical infrastructures, and even determines the very fabric of social interactions and communication. While these systems promise greater efficiency and reduced corruption, misspecified cybernetic mechanisms harbor the threat for reinforcing inequities, discrimination, and even dystopian or totalitarian structures. Fairness thus becomes a crucial component in the design of cybernetic systems, to promote cooperation between selfish individuals, to achieve better outcomes at the system level, to confront public resistance, to gain trust and acceptance for rules and institutions, to perforate self-reinforcing cycles of poverty through social mobility, to incentivize motivation, contribution and satisfaction of people through inclusion, to increase social-cohesion in groups, and ultimately to improve life quality. Quantitative descriptions of fairness are crucial to reflect equity into algorithms, but only few works in the fairness literature offer such measures; the existing quantitative measures in the literature are either too application-specific, suffer from undesirable characteristics, or are not ideology-agnostic. This study proposes a quantitative, transactional, and distributive fairness framework based on an interdisciplinary foundation that supports the systematic design of socially-feasible decision-making systems. Moreover, it emphasizes the importance of fairness and transparency when designing algorithms for equitable, cybernetic societies, and establishes a connection between fairness literature and resource allocating systems.
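As a simple illustration of what a quantitative fairness measure over an allocation looks like (not the authors' framework), the sketch below computes two standard ones, Jain's fairness index and the Gini coefficient, for a made-up resource split.

```python
# Two standard quantitative fairness measures for a resource allocation.
import numpy as np

def jain_index(x: np.ndarray) -> float:
    """Jain's fairness index: 1.0 = perfectly equal, 1/n = maximally unequal."""
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

def gini(x: np.ndarray) -> float:
    """Gini coefficient: 0 = perfect equality, approaching 1 = extreme inequality."""
    diffs = np.abs(x[:, None] - x[None, :]).sum()  # all pairwise differences
    return diffs / (2 * len(x) ** 2 * x.mean())

allocation = np.array([10.0, 10.0, 10.0, 70.0])  # a skewed split (made-up)
print(jain_index(allocation), gini(allocation))   # ~0.48 and ~0.45
```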
Citations: 0
Why human mistakes hurt more? Emotional responses in human-AI errors
Pub Date : 2025-11-19 DOI: 10.1016/j.chbah.2025.100238
Ying Qin, Wanhui Zhou, Bu Zhong
Understanding user responses to AI versus human errors is crucial, as they shape trust, acceptance, and interaction outcomes. This study investigates the emotional dynamics of human-AI interactions by examining how agent identity (human vs. AI) and error severity (low vs. high) influence negative emotional reactions. Using a 2 × 2 factorial design (N = 250), the findings reveal that human agents consistently elicit stronger negative emotions than AI agents, regardless of error severity. Moreover, perceived experience moderates this relationship under specific conditions: individuals who view AI less experienced than humans exhibit stronger negative emotions toward human errors, while this effect diminishes when AI is perceived as having higher experience. However, perceived agency does not significantly influence emotional responses. These findings highlight the critical role of agent identity and perceived experience in shaping emotional reactions to errors, adding insights into the dynamics of human-AI interactions. This research shows that developing effective AI systems needs to manage user emotional responses and trust, in which perceived experience and competency play pivotal roles in adoption. The findings can guide the design of AI systems that adjust user expectations and emotional responses in accordance with the AI's perceived level of experience.
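A hedged sketch of the moderation test implied above follows: does perceived experience moderate the effect of agent identity (human vs. AI) on negative emotion in the 2 x 2 design? Variable names are assumptions for illustration.

```python
# Moderated regression sketch for the 2 x 2 design (hypothetical columns).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # placeholder: one row per participant

# The C(agent):perceived_experience term carries the moderation effect;
# severity is kept in the model as the second manipulated factor.
model = smf.ols(
    "negative_emotion ~ C(agent) * perceived_experience + C(severity)",
    data=df,
).fit()
print(model.summary())
```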
Citations: 0
Mapping user gratifications in the age of LLM-based chatbots: An affordance perspective
Pub Date : 2025-11-19 DOI: 10.1016/j.chbah.2025.100240
Eun Go, Taeyoung Kim
Despite the widespread use of large language model (LLM)-based chatbots, little is known about what specific gratifications users obtain from the unique affordances of these systems and how these affordance-driven gratifications shape user evaluations. To address this gap, the present study maps the gratification structure of LLM chatbot use and examines whether users’ primary purpose of chatbot use (information-, conversation-, or task-oriented) influences the gratifications they derive. A survey of 249 LLM chatbot users revealed nine distinct gratifications aligned with four affordance types: modality, agency, interactivity, and navigability. Purpose of use meaningfully shaped which gratifications were most salient. For example, conversational use heightened Immersive Realism and Fun, whereas information- and task-oriented use elevated Adaptive Responsiveness. In turn, these affordance-driven gratifications predicted key outcomes, including perceived expertise, perceived friendliness, satisfaction, attitudes, and behavioral intentions to continued use. Across outcomes, Adaptive Responsiveness consistently emerged as the strongest predictor, underscoring the pivotal role of contingent, high-quality dialogue in LLM-based human–AI interaction. These findings extend uses and gratifications theory and offer design implications for developing more engaging, responsive, and purpose-tailored chatbot experiences.
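For illustration, the sketch below regresses one outcome (satisfaction) on a few gratification scores to compare predictor weights, mirroring the kind of prediction analysis reported. The three predictors shown are placeholders standing in for the nine gratifications; names and data file are assumptions.

```python
# Regressing an outcome on gratification scores (hypothetical columns).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gratifications.csv")  # placeholder file name

model = smf.ols(
    "satisfaction ~ adaptive_responsiveness + immersive_realism + fun",
    data=df,
).fit()
print(model.params.sort_values(ascending=False))  # compare predictor weights
```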
Citations: 0
Avatar or human, who is experiencing it? Impact of social interaction in virtual gaming worlds on personal space
Pub Date : 2025-11-17 DOI: 10.1016/j.chbah.2025.100237
Ruoyu Niu, Mengzhu Huang, Rixin Tang
Virtual gaming worlds support rich social interaction in which players use avatars to collaborate, compete, and communicate across distance. Motivated by the increasing reliance on mediated social contact, this research examined whether virtual shared space and avatar properties shape personal space regulation in ways that parallel face-to-face encounters. Three experiments tested how virtual shared space, avatar agency, and avatar anthropomorphism influence interpersonal distance. Across studies, virtual comfort distance and psychological distance were used as complementary indicators of changes in personal space, and physical comfort distance was additionally assessed in a subset of conditions with a physically present human partner. Experiment 1 showed that, when interacting with a human-driven partner in the laboratory, occupying a shared virtual space reliably reduced comfort distance and increased psychological closeness compared with interacting in separate virtual spaces, even after controlling for physical shared space. Experiment 2 replicated the virtual shared space effect with computer-driven partners in an online virtual gaming world setting, indicating that reduced interpersonal distance does not depend on human agency alone. Experiment 3 revealed that anthropomorphic avatars increased comfort toward computer-driven partners, whereas avatar form had little impact when the partner was known to be human. Together, the findings indicate that virtual shared space, perceived agency, and avatar appearance jointly shape personal space regulation in digital environments and offer actionable guidance for designing avatars and virtual spaces that foster approach-oriented, prosocial interaction.
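A minimal sketch of the within-subjects comparison in Experiment 1 follows: comfort distance in shared vs. separate virtual space, tested with a paired-samples t-test. The arrays are invented illustrative values, not the study's data.

```python
# Paired-samples t-test sketch for shared vs. separate virtual space.
import numpy as np
from scipy import stats

shared = np.array([55.0, 62.0, 48.0, 70.0, 58.0])    # cm, hypothetical
separate = np.array([68.0, 75.0, 60.0, 82.0, 66.0])  # cm, hypothetical

t, p = stats.ttest_rel(shared, separate)
print(f"t = {t:.2f}, p = {p:.4f}")  # shared space -> shorter comfort distance
```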
Citations: 0