
Latest publications from Computers in Human Behavior: Artificial Humans

Mapping AI learning readiness self-efficacy worldwide: Scale validation and cross-continental patterns
Pub Date : 2026-01-21 DOI: 10.1016/j.chbah.2026.100251
Atte Oksanen , Teijo Osma , Moona Heiskari , Anica Cvetkovic , Eerik Soares Ruokosuo , Mayu Koike , Patrícia Arriaga , Iina Savolainen
In today's world, knowing how to use artificial intelligence (AI) technologies is becoming an essential skill. While methods for measuring the perceived efficacy of AI use are emerging, brief measures of users' self-evaluated learning and self-efficacy regarding AI use are still lacking. This study aimed to validate the five-item AI Learning Readiness Self-Efficacy (AILRSE) scale and examine cross-national differences between 12 countries on six continents. We used large-scale, adult population samples from Australia, Brazil, Finland, France, Germany, Ireland, Italy, Japan, Poland, Portugal, South Africa, and the United States collected in 2024–2025 (N = 20,173), enabling both cross-sectional and longitudinal analysis. Scale validation involved confirmatory factor analysis and measurement invariance testing across countries and over time. The results supported a one-factor structure with high internal consistency and scalar invariance across countries as well as strict invariance in Finnish cross-sectional and longitudinal data. AI positivity emerged as the strongest predictor of AILRSE-5 scores across all models, followed by younger age and more frequent use of text-to-text AI tools (e.g., ChatGPT, Copilot). Education and gender effects were small and context dependent. The findings indicate that AILRSE-5 is a brief, reliable, and valid tool for assessing self-efficacy in AI learning readiness. Its invariance across diverse national contexts supports its applicability in cross-cultural research, while its longitudinal invariance suggests stability over time. Furthermore, our results provide rare cross-national evidence on the individual factors shaping AI learning readiness self-efficacy. The study advances understanding of how people adapt to the rapidly evolving AI landscape.
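For readers who want to see what the internal-consistency part of such a validation looks like in practice, below is a minimal sketch (not the authors' analysis code) of Cronbach's alpha for a five-item scale; the column names and simulated responses are hypothetical, and the full validation described above would additionally require confirmatory factor analysis and measurement invariance testing in an SEM package such as lavaan or semopy.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents, columns = items)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses to five AILRSE-style items on a 1-7 scale, driven by one latent factor.
rng = np.random.default_rng(0)
latent = rng.normal(4.5, 1.0, size=500)
responses = pd.DataFrame(
    {f"ailrse_{i}": np.clip(latent + rng.normal(0, 0.7, 500), 1, 7) for i in range(1, 6)}
)
print(round(cronbach_alpha(responses), 3))  # high alpha, since the items share one factor
```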
Citations: 0
The reasoning-like capabilities of large language models across different languages: Insights from representational similarity analysis
Pub Date : 2026-01-20 DOI: 10.1016/j.chbah.2026.100250
Chris M. Stolle , Rongjun Yu , Yi Huang
Recent research shows that Large Language Models (LLMs) demonstrate human-comparable performance on various cognitive tasks, suggesting reasoning-like capabilities. However, the language dependency of these capabilities and the contribution of their neural network states remain underexplored. This study investigates how different prompts and languages influence the reasoning performance of LLMs compared to humans, while exploring the internal cognitive-like processes of LLMs through representational similarity analysis (RSA). Using scenario-based and mathematical Cognitive Reflection Test (CRT) questions across four languages, we evaluated the reasoning capabilities of LLM Qwen 2.5 (including Gemma 2.9 and Llama 3.1 replications). Results showed that language significantly impacts performance in scenario-based CRT that requires nuanced semantic processing. However, RSA of the inner state activations revealed that the LLM processed identical questions similarly across languages, suggesting that the model encodes semantics in a language-independent latent space. Additionally, the LLM's performance improved when it verbalised its reasoning, and this verbalisation increased similarity in activations. Layer-wise analyses revealed a U-shaped similarity pattern across early to late layers in Qwen and Gemma but not Llama. Furthermore, scenario-based and equivalent mathematical CRT versions elicited similar activation patterns for the paired questions, even after controlling for input and output confounds, pointing to format-agnostic reasoning mechanisms. These results highlight that while LLMs exhibit language-invariant semantic representations and format-agnostic reasoning, their performance remains sensitive to linguistic nuances and self-generated verbalisations, offering insights into both the strengths and limitations of their cognitive-like processing.
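As a rough illustration of the representational similarity analysis named above, here is a minimal sketch (with made-up activation matrices standing in for the model's hidden states, 30 items by 4096 units) that builds correlation-distance RDMs and compares them via a Spearman correlation over their upper triangles.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def rdm(activations: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: correlation distance between
    the hidden-state vectors of every pair of items (rows = items)."""
    return squareform(pdist(activations, metric="correlation"))

def rsa(rdm_a: np.ndarray, rdm_b: np.ndarray) -> float:
    """Second-order similarity: Spearman rho between the RDMs' upper triangles."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

# Hypothetical hidden states for the same 30 CRT items presented in two languages.
rng = np.random.default_rng(0)
english = rng.normal(size=(30, 4096))
german = english + rng.normal(scale=0.5, size=(30, 4096))  # similar representational geometry
print(round(rsa(rdm(english), rdm(german)), 3))
```

A high rho in this kind of comparison is what the abstract describes as language-independent encoding of the items in the model's latent space.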
Citations: 0
Understanding successful human–AI teaming: The role of goal alignment and AI autonomy for social perception of LLM-based chatbots
Pub Date : 2025-12-19 DOI: 10.1016/j.chbah.2025.100246
Christiane Attig , Luisa Winzer , Tim Schrills , Mourad Zoubir , Maged Mortaga , Patricia Wollstadt , Christiane Wiebel-Herboth , Thomas Franke
LLM-based chatbots such as ChatGPT support collaborative, complex tasks by leveraging natural language processing to provide skills, knowledge, or resources beyond the user’s immediate capabilities. Joint activity theory suggests that effective human–AI collaboration, however, requires more than responding to verbatim prompts – it depends on aligning with the user’s underlying goal. Since prompts may not always explicitly state the goal, an effective LLM should analyze the input to approximate the intended objective before autonomously tailoring its response to align with the user’s goal. To test these assumptions, we examined the effects of LLM-based chatbots’ autonomy and goal alignment on multiple social perception metrics as key criteria for successful human–AI teaming (i.e., perceived cooperation, warmth, competence, traceability, usefulness, and trustworthiness). We conducted a scenario-based online experiment where participants (N = 182, within-subjects design) were instructed to collaborate with four different versions of an LLM-based chatbot. The overall goal of the study scenario was to detect and correct erroneous information in short encyclopedic articles, representing a prototypical knowledge work task. Four custom-instructed chatbots were provided in random order: three chatbots varying in goal alignment and AI autonomy and one chatbot serving as a control condition not fulfilling user prompts. Repeated-measures ANOVAs demonstrate that a chatbot which is able to excel in goal alignment by autonomously going beyond verbatim user prompts is perceived as superior compared to a chatbot that adheres rigidly to user prompts without adapting to implicit objectives and chatbots that fail to meet the explicit or implicit user goal. These results support the notion that AI autonomy is only perceived as beneficial as long as user goals are not undermined by the chatbot, emphasizing the importance of balancing user and AI autonomy in human-centered design of AI systems.
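A minimal sketch of the kind of repeated-measures ANOVA mentioned above, fitted with statsmodels' AnovaRM; the condition labels, sample size, and ratings here are invented for illustration and do not come from the study.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical within-subjects data: one cooperation rating per participant per chatbot version.
rng = np.random.default_rng(1)
conditions = ["aligned_autonomous", "verbatim_only", "goal_missed", "control"]
rows = []
for pid in range(40):
    base = rng.normal(4.0, 0.5)
    for rank, cond in enumerate(conditions):
        rows.append({"participant": pid, "condition": cond,
                     "cooperation": base + (3 - rank) * 0.3 + rng.normal(0, 0.4)})
df = pd.DataFrame(rows)

result = AnovaRM(data=df, depvar="cooperation", subject="participant",
                 within=["condition"]).fit()
print(result)
```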
Citations: 0
Multimodal robotic storytelling integrating sound effects and background music
Pub Date : 2025-12-16 DOI: 10.1016/j.chbah.2025.100248
Sophia C. Steinhaeusser, Sophia Maier, Birgit Lugrin
Music can induce emotions and is often used to enhance emotional experiences of storytelling media, while sound effects can convey information on a story’s environmental setting. While these non-speech sounds are well-integrated into traditional media, their use in newer forms such as robotic storytelling is still developing. To address this gap, we developed guidelines for emotion-inducing music based on theoretical knowledge from music theory, psychology, and media studies, and validated them in an online perception study. Subsequently, a laboratory prestudy compared the effects of the music’s source during robotic storytelling, finding no significant differences between the robotic storyteller and an external loudspeaker. Building on these results, our main study compared storytelling with added background music, sound effects, a combination of both, and a control condition without non-speech sounds. Results showed that while subjective evaluations of presentation liking and qualitative feedback did not significantly differ, background music alone yielded the best outcomes on standardized measures, enhancing transportation, cognitive absorption, emotion induction, and objectively assessed attention-related affects. These findings support incorporating emotion-inducing background music into robotic storytelling to enhance its immersive and emotional effects.
Citations: 0
The acceptability of artificial intelligence to support university students’ mental health: the role of Asian cultural values and social support
Pub Date : 2025-12-11 DOI: 10.1016/j.chbah.2025.100247
Mazel Lye, Rachael Martin, Sally Richmond
The global prevalence of mental disorders is rising, with university students at-risk due to the increased psychological distress associated with this period simultaneously occurring with the age of onset of many mental disorders. Artificial intelligence (AI) has been proposed as a novel treatment approach for mental disorders, yet little is known about its acceptability when applied to mental health interventions from the perspective of university students. This study aimed to explore the role of social support and Asian cultural values, two factors linked more broadly with mental health help-seeking, in the acceptability of AI-based mental health interventions among university students. A sample of 135 Australian university students (Mage = 25.26 years, SDage = 7.72 years) of diverse ethnicity (e.g. 43.7 % White/European, 29.6 % Asian and 11.9 % Aboriginal and/or Torres Strait Islander) completed questionnaires for social support, Asian cultural values, and acceptability. A multiple linear regression indicated that Asian cultural values were positively associated with acceptability of AI-based mental health interventions; no support was found for the association with social support. Age and ethnicity were included in the regression model as covariates to adjust for their potential influence on acceptability. These findings provide insight into the relationship between the acceptability of AI-based mental health interventions, social support and Asian cultural values, and demonstrate the need to understand student perspectives before implementing such interventions.
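The regression described above can be sketched as follows; the variable names, coding, and simulated values are hypothetical stand-ins, with age and ethnicity entered as covariates as stated in the abstract.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data mirroring the design: acceptability regressed on Asian cultural
# values and social support, adjusting for age and ethnicity (all values simulated).
rng = np.random.default_rng(2)
n = 135
df = pd.DataFrame({
    "asian_values": rng.normal(3.0, 0.6, n),
    "social_support": rng.normal(3.8, 0.7, n),
    "age": rng.normal(25.3, 7.7, n),
    "ethnicity": rng.choice(["White/European", "Asian", "Aboriginal/TSI", "Other"], n),
})
df["acceptability"] = 2.0 + 0.4 * df["asian_values"] + rng.normal(0, 0.6, n)

model = smf.ols("acceptability ~ asian_values + social_support + age + C(ethnicity)",
                data=df).fit()
print(model.summary())
```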
Citations: 0
Perception of human-robot relationships: Professional, communicative, impersonal and emotionless
Pub Date : 2025-12-01 DOI: 10.1016/j.chbah.2025.100226
Monika Menz , Markus Huff
Humans extend social perceptions not only to other people but also to non-human entities such as pets, toys, and robots. This study investigates how individuals differentiate between various relationship partners—a familiar person, a professional person, a pet, a cuddly toy, and a social robot—across three interaction contexts: caregiving, conversation, and leisure. Using the Repertory Grid Method, 103 participants generated 811 construct pairs, which were categorized into seven psychological dimensions: Verbal Communication, Assistance and Competences, Liveness and Humanity, Emotional and Empathic Ability, Autonomy and Voluntariness, Trust and Closeness, and Physical Activity and Responsiveness. Cluster analyses revealed that in Verbal Communication and Assistance and Competences, robots were perceived similarly to human partners, indicating functional comparability. In contrast, for Liveness and Humanity and Emotional and Empathic Ability, humans clustered with pets—distinct from robots and cuddly toys—highlighting robots’ lack of perceived emotional richness and animacy. Interestingly, in Autonomy and Voluntariness and Trust and Closeness, robots were grouped with professional humans, while familiar persons, pets, and cuddly toys formed a separate cluster, suggesting that robots are seen as formal, emotionally distant partners. These findings indicate that while robots may match human partners in communicative and task-oriented domains, they are not regarded as emotionally intimate or fully animate beings. Instead, they occupy a hybrid role—competent yet impersonal—situated between tools and social agents. The study contributes to a more nuanced understanding of human-robot relationships by identifying the psychological dimensions that shape perceptions of sociality, animacy, and relational closeness with non-human partners.
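To make the clustering step concrete, here is a minimal sketch using Ward hierarchical clustering over partner-by-dimension profiles; the rating values below are invented placeholders, not the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

partners = ["familiar person", "professional person", "pet", "cuddly toy", "social robot"]
# Hypothetical mean ratings on the seven dimensions listed above (rows follow `partners`).
profiles = np.array([
    [4.5, 3.8, 4.7, 4.6, 4.2, 4.8, 4.1],
    [4.3, 4.4, 3.9, 3.2, 3.8, 3.5, 3.6],
    [1.5, 2.0, 4.5, 4.2, 4.0, 4.6, 4.3],
    [1.0, 1.2, 2.0, 2.5, 1.5, 3.8, 1.4],
    [4.1, 4.2, 1.8, 1.6, 3.5, 3.2, 2.8],
])

Z = linkage(pdist(profiles), method="ward")       # agglomerative clustering on Euclidean distances
labels = fcluster(Z, t=3, criterion="maxclust")   # cut the dendrogram into three clusters
print(dict(zip(partners, labels)))
```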
Citations: 0
Assessing intercultural sensitivity in large language models: A comparative study of GPT-3.5 and GPT-4 across eight languages
Pub Date : 2025-12-01 DOI: 10.1016/j.chbah.2025.100241
Yiwen Jin , Lies Sercu , Feng Guo
As large language models (LLMs) such as ChatGPT are increasingly used across cultures and languages, concerns have arisen about their ability to respond in culturally sensitive ways. This study evaluated the intercultural sensitivity of GPT-3.5 and GPT-4 using the Intercultural Sensitivity Scale (ISS) translated into eight languages. Each model completed ten randomized iterations of the 24-item ISS per language, and the results were analyzed using descriptive statistics and three-way ANOVA. GPT-4 achieved significantly higher intercultural sensitivity scores than GPT-3.5 across all dimensions, with “respect for cultural differences” scoring highest and “interaction confidence” lowest. Significant interactions were found between model version and language, and between model version and ISS dimensions, indicating that GPT-4's improvements vary by linguistic context. Nonetheless, the interaction between language and dimensions did not yield significant results. Future research should focus on increasing the amount of training data for the less spoken languages, as well as adding rich emotional and cultural background data to improve the model's understanding of cultural norms and nuances.
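A minimal sketch of the three-way ANOVA structure (model version x language x ISS dimension) fitted with statsmodels; the languages shown, dimension labels, and scores are illustrative placeholders rather than the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical dimension-level ISS scores, ten iterations per model-language cell.
rng = np.random.default_rng(3)
dims = ["interaction_engagement", "respect_for_differences", "interaction_confidence",
        "interaction_enjoyment", "interaction_attentiveness"]
rows = []
for model_version in ["GPT-3.5", "GPT-4"]:
    for lang in ["en", "zh", "es", "fr"]:   # stand-ins for the eight study languages
        for dim in dims:
            for _ in range(10):             # ten randomized iterations per language
                boost = 0.4 if model_version == "GPT-4" else 0.0
                rows.append({"model_version": model_version, "language": lang,
                             "dimension": dim, "score": rng.normal(3.8 + boost, 0.3)})
df = pd.DataFrame(rows)

fit = smf.ols("score ~ C(model_version) * C(language) * C(dimension)", data=df).fit()
print(anova_lm(fit, typ=2))
```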
Citations: 0
Aesthetic Integrity Index (AII) for human–AI hybrid epistemology: Reconfiguring the Beholder’s Share through Xie He’s Six Principles
Pub Date : 2025-12-01 DOI: 10.1016/j.chbah.2025.100242
Rong Chang
{"title":"Aesthetic Integrity Index (AII) for human–AI hybrid epistemology: Reconfiguring the Beholder’s Share through Xie He’s Six Principles","authors":"Rong Chang","doi":"10.1016/j.chbah.2025.100242","DOIUrl":"10.1016/j.chbah.2025.100242","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100242"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Quantitative fairness—A framework for the design of equitable cybernetic societies
Pub Date : 2025-11-19 DOI: 10.1016/j.chbah.2025.100236
Kevin Riehl, Anastasios Kouvelas, Michail A. Makridis
Advancements in computer science, artificial intelligence, and control systems have catalyzed the emergence of cybernetic societies, where algorithms play a pivotal role in decision-making processes shaping nearly every aspect of human life. Automated decision-making for resource allocation has expanded into industry, government processes, critical infrastructures, and even determines the very fabric of social interactions and communication. While these systems promise greater efficiency and reduced corruption, misspecified cybernetic mechanisms harbor the threat for reinforcing inequities, discrimination, and even dystopian or totalitarian structures. Fairness thus becomes a crucial component in the design of cybernetic systems, to promote cooperation between selfish individuals, to achieve better outcomes at the system level, to confront public resistance, to gain trust and acceptance for rules and institutions, to perforate self-reinforcing cycles of poverty through social mobility, to incentivize motivation, contribution and satisfaction of people through inclusion, to increase social-cohesion in groups, and ultimately to improve life quality. Quantitative descriptions of fairness are crucial to reflect equity into algorithms, but only few works in the fairness literature offer such measures; the existing quantitative measures in the literature are either too application-specific, suffer from undesirable characteristics, or are not ideology-agnostic. This study proposes a quantitative, transactional, and distributive fairness framework based on an interdisciplinary foundation that supports the systematic design of socially-feasible decision-making systems. Moreover, it emphasizes the importance of fairness and transparency when designing algorithms for equitable, cybernetic societies, and establishes a connection between fairness literature and resource allocating systems.
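The abstract argues for quantitative fairness measures for resource allocation; as a point of reference only (these are standard allocation indices, not the specific framework proposed in the paper), here is a short sketch of two widely used metrics.

```python
import numpy as np

def jain_index(allocation) -> float:
    """Jain's fairness index: 1.0 for a perfectly equal allocation,
    falling towards 1/n when a single agent receives everything."""
    x = np.asarray(allocation, dtype=float)
    return x.sum() ** 2 / (len(x) * (x ** 2).sum())

def gini(allocation) -> float:
    """Gini coefficient: 0.0 for perfect equality, approaching 1.0 for maximal inequality."""
    x = np.sort(np.asarray(allocation, dtype=float))
    n = len(x)
    shares = np.cumsum(x) / x.sum()
    return (n + 1 - 2 * shares.sum()) / n

print(jain_index([1, 1, 1, 1]), gini([1, 1, 1, 1]))    # 1.0 0.0  (equal split)
print(jain_index([10, 0, 0, 0]), gini([10, 0, 0, 0]))  # 0.25 0.75 (one agent takes all)
```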
Citations: 0
Why human mistakes hurt more? Emotional responses in human-AI errors
Pub Date : 2025-11-19 DOI: 10.1016/j.chbah.2025.100238
Ying Qin, Wanhui Zhou, Bu Zhong
Understanding user responses to AI versus human errors is crucial, as they shape trust, acceptance, and interaction outcomes. This study investigates the emotional dynamics of human-AI interactions by examining how agent identity (human vs. AI) and error severity (low vs. high) influence negative emotional reactions. Using a 2 × 2 factorial design (N = 250), the findings reveal that human agents consistently elicit stronger negative emotions than AI agents, regardless of error severity. Moreover, perceived experience moderates this relationship under specific conditions: individuals who view AI less experienced than humans exhibit stronger negative emotions toward human errors, while this effect diminishes when AI is perceived as having higher experience. However, perceived agency does not significantly influence emotional responses. These findings highlight the critical role of agent identity and perceived experience in shaping emotional reactions to errors, adding insights into the dynamics of human-AI interactions. This research shows that developing effective AI systems needs to manage user emotional responses and trust, in which perceived experience and competency play pivotal roles in adoption. The findings can guide the design of AI systems that adjust user expectations and emotional responses in accordance with the AI's perceived level of experience.
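A minimal sketch of how the 2 x 2 design and the moderation question could be analysed with an OLS model containing interaction terms; the column names, effect sizes, and data are simulated for illustration and are not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical between-subjects data: agent identity (human vs. AI) x error severity (low vs. high).
rng = np.random.default_rng(4)
n = 250
df = pd.DataFrame({
    "agent": rng.choice(["human", "AI"], n),
    "severity": rng.choice(["low", "high"], n),
    "perceived_experience": rng.normal(3.5, 0.8, n),
})
df["negative_emotion"] = (
    3.0
    + 0.5 * (df["agent"] == "human")       # stronger reactions to human agents, as reported
    + 0.3 * (df["severity"] == "high")
    + rng.normal(0, 0.5, n)
)

# Factorial effects plus perceived experience as a candidate moderator of the agent effect.
fit = smf.ols(
    "negative_emotion ~ C(agent) * C(severity) + C(agent) * perceived_experience",
    data=df,
).fit()
print(anova_lm(fit, typ=2))
```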
Citations: 0