
Latest publications in Computers in Human Behavior: Artificial Humans

Mapping user gratifications in the age of LLM-based chatbots: An affordance perspective
Pub Date : 2025-11-19 DOI: 10.1016/j.chbah.2025.100240
Eun Go, Taeyoung Kim
Despite the widespread use of large language model (LLM)-based chatbots, little is known about what specific gratifications users obtain from the unique affordances of these systems and how these affordance-driven gratifications shape user evaluations. To address this gap, the present study maps the gratification structure of LLM chatbot use and examines whether users’ primary purpose of chatbot use (information-, conversation-, or task-oriented) influences the gratifications they derive. A survey of 249 LLM chatbot users revealed nine distinct gratifications aligned with four affordance types: modality, agency, interactivity, and navigability. Purpose of use meaningfully shaped which gratifications were most salient. For example, conversational use heightened Immersive Realism and Fun, whereas information- and task-oriented use elevated Adaptive Responsiveness. In turn, these affordance-driven gratifications predicted key outcomes, including perceived expertise, perceived friendliness, satisfaction, attitudes, and behavioral intentions to continue use. Across outcomes, Adaptive Responsiveness consistently emerged as the strongest predictor, underscoring the pivotal role of contingent, high-quality dialogue in LLM-based human–AI interaction. These findings extend uses and gratifications theory and offer design implications for developing more engaging, responsive, and purpose-tailored chatbot experiences.
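As a concrete illustration of the kind of predictive analysis the abstract describes, the sketch below regresses a user-evaluation outcome on gratification scores. It is a minimal sketch under invented assumptions: the variable names, the simulated data, and the choice of ordinary least squares are ours, not the authors'.

```python
# Minimal sketch: regressing a user-evaluation outcome on affordance-driven
# gratifications, in the spirit of the analysis described in the abstract.
# All variable names and the simulated data are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 249  # sample size reported in the abstract

# Hypothetical gratification scores (1-7 Likert-style means).
df = pd.DataFrame({
    "adaptive_responsiveness": rng.uniform(1, 7, n),
    "immersive_realism": rng.uniform(1, 7, n),
    "fun": rng.uniform(1, 7, n),
})
# Hypothetical outcome: satisfaction driven mostly by responsiveness.
df["satisfaction"] = (
    0.6 * df["adaptive_responsiveness"]
    + 0.2 * df["immersive_realism"]
    + 0.1 * df["fun"]
    + rng.normal(0, 1, n)
)

X = sm.add_constant(df[["adaptive_responsiveness", "immersive_realism", "fun"]])
fit = sm.OLS(df["satisfaction"], X).fit()
print(fit.summary())  # coefficient sizes indicate relative predictive strength
```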
Citations: 0
Avatar or human, who is experiencing it? Impact of social interaction in virtual gaming worlds on personal space
Pub Date : 2025-11-17 DOI: 10.1016/j.chbah.2025.100237
Ruoyu Niu, Mengzhu Huang, Rixin Tang
Virtual gaming worlds support rich social interaction in which players use avatars to collaborate, compete, and communicate across distance. Motivated by the increasing reliance on mediated social contact, this research examined whether virtual shared space and avatar properties shape personal space regulation in ways that parallel face-to-face encounters. Three experiments tested how virtual shared space, avatar agency, and avatar anthropomorphism influence interpersonal distance. Across studies, virtual comfort distance and psychological distance were used as complementary indicators of changes in personal space, and physical comfort distance was additionally assessed in a subset of conditions with a physically present human partner. Experiment 1 showed that, when interacting with a human-driven partner in the laboratory, occupying a shared virtual space reliably reduced comfort distance and increased psychological closeness compared with interacting in separate virtual spaces, even after controlling for physical shared space. Experiment 2 replicated the virtual shared space effect with computer-driven partners in an online virtual gaming world setting, indicating that reduced interpersonal distance does not depend on human agency alone. Experiment 3 revealed that anthropomorphic avatars increased comfort toward computer-driven partners, whereas avatar form had little impact when the partner was known to be human. Together, the findings indicate that virtual shared space, perceived agency, and avatar appearance jointly shape personal space regulation in digital environments and offer actionable guidance for designing avatars and virtual spaces that foster approach-oriented, prosocial interaction.
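The shared- versus separate-space contrast on comfort distance in Experiment 1 can be illustrated with a simple within-subject comparison. The sketch below is ours, not the authors' analysis: the data are simulated and the paired t-test is an assumption chosen only to make the contrast concrete.

```python
# Minimal sketch of the shared- vs. separate-space contrast on comfort
# distance (Experiment 1 in the abstract). Data are simulated; the paired
# t-test is an assumption about the analysis, used only for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 40  # hypothetical participant count
separate = rng.normal(1.2, 0.3, n)             # comfort distance (m), separate spaces
shared = separate - rng.normal(0.15, 0.1, n)   # assumed shorter distance in shared space

t, p = stats.ttest_rel(separate, shared)
print(f"paired t = {t:.2f}, p = {p:.4f}")  # shared space -> smaller comfort distance
```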
Citations: 0
A scoping review of nonverbal mimicry in human-virtual human interaction
Pub Date : 2025-11-17 DOI: 10.1016/j.chbah.2025.100230
Kyana H.J. van Eijndhoven, Ethel Pruss, Pieter Spronck
A growing body of research has examined the role of nonverbal mimicry, the spontaneous imitation of others' physical behavior during social interactions, in human-virtual human interaction. The increasing deployment of virtual humans, and continuing advances in the technologies vital to virtual human development, underscore the need to review studies incorporating such state-of-the-art technologies. To this end, we conducted a scoping review of empirical work studying nonverbal mimicry in human-virtual human interaction. This review focused on outlining (1) the contexts in which such interactions occurred, (2) implementations of nonverbal mimicry, (3) individual and situational factors that can lead one to mimic more (facilitators) or less (inhibitors), and (4) individual and social consequences. By creating this comprehensive outline, we were able to capture the current state of nonverbal mimicry research and identify methodological, evidence, and empirical research gaps that may serve as future guidelines to drive the field of virtual human research forward.
Citations: 0
Less human, less positive? How AI involvement in leadership shapes employees’ affective well-being across different supervisor decisions
Pub Date : 2025-11-17 DOI: 10.1016/j.chbah.2025.100239
Emily Lochner, René Schmoll, Stephan Kaiser
As artificial intelligence (AI) becomes increasingly integrated into organizational leadership, it is critical to understand how algorithmic decision-making affects employee well-being. This study investigates how varying levels of AI involvement in leadership – ranging from fully human to hybrid (human-AI collaboration) to fully automated – influence employees' emotional responses at work. It also examines whether the emotional impact of leader type depends on the outcome of a managerial decision (positive vs. negative). To investigate these questions, we conducted a vignette-based online experiment using a 3x2 between-subjects design. Participants (N = 153 workers) were randomly assigned to one of six short, standardized leadership scenarios that varied by leader type (human, hybrid, or AI) and decision outcome (positive or negative). The vignettes described a realistic workplace situation in which a leader communicates a decision about a project's continuation. Subsequently, emotional responses were measured using validated affective scales.
The results showed that higher AI involvement led to lower positive affect, particularly following favorable decisions, while negative affect remained largely unaffected. These results suggest that, while AI leadership is not emotionally harmful, it also fails to generate positive engagement. Positive affect was strongest when positive decisions were delivered by a human leader and weakest when delivered by an AI.
These findings contribute to leadership and human-AI interaction research by highlighting an emotional asymmetry in AI-led leadership. Practically speaking, these results imply that while AI offers efficiency, it lacks the interpersonal resonance necessary for emotionally meaningful interactions. Therefore, organizations should consider maintaining human involvement in contexts where recognition, trust, or relational sensitivity are important.
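The 3x2 between-subjects design described above maps onto a two-way ANOVA with leader type and decision outcome as factors. The sketch below is illustrative only: the cell means, noise level, and per-cell sample size are invented, and only the design structure mirrors the abstract.

```python
# Minimal sketch of a 3x2 between-subjects analysis (leader type x decision
# outcome) on positive affect. Cell means and noise are invented; only the
# design mirrors the abstract.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
rows = []
for leader in ("human", "hybrid", "ai"):
    for outcome in ("positive", "negative"):
        # Assumed pattern: positive affect falls with AI involvement,
        # especially after favorable decisions.
        base = {"human": 5.0, "hybrid": 4.4, "ai": 3.8}[leader]
        mean = base if outcome == "positive" else 3.2
        for score in rng.normal(mean, 0.8, 25):  # 25 per cell, ~150 total
            rows.append({"leader": leader, "outcome": outcome, "affect": score})

df = pd.DataFrame(rows)
model = ols("affect ~ C(leader) * C(outcome)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and their interaction
```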
Citations: 0
Can large language models exhibit cognitive and affective empathy as humans?
Pub Date : 2025-11-13 DOI: 10.1016/j.chbah.2025.100233
Tengfei Yu, Siyu Pan, Caoyun Fan, Siyang Luo, Yaohui Jin, Binglei Zhao
Empathy, a key component of human social interaction, has become a core concern in human-computer interaction. This study examines whether current large language models (LLMs) can exhibit empathy in both cognitive and affective dimensions as humans do. In our study, we used standardized questionnaires to assess LLMs' empathy ability, and a novel paradigm was developed for LLM evaluation. Four main experiments on LLMs' empathy abilities were reported, using the Interpersonal Reactivity Index (IRI) and the Basic Empathy Scale (BES) on GPT-4 and Llama3, respectively. Two levels of evaluation were conducted, investigating whether the structural validity of the questionnaires in LLMs was aligned with humans and comparing the LLMs' empathy abilities with humans'. We found that GPT-4 showed an empathy dimension structure identical to humans' while exhibiting significantly lower empathy abilities than humans. Moreover, a systematic difference in empathy ability was evident in Llama3, which failed to exhibit the same empathy dimensions as humans. These findings indicate that although GPT-4 preserved the structure of human empathy (cognitive and affective), current LLMs cannot simulate empathy as humans do, as indexed by their questionnaire responses. This highlights the urgent need to further improve LLMs' empathy abilities for more user-friendly human-LLM interactions. In addition, the pipeline used to generate diverse LLM-simulated participants is also discussed.
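Administering a standardized questionnaire to an LLM, as described above, can be sketched as a prompt loop that elicits one Likert rating per item and parses it. In the sketch below, query_llm is a hypothetical stub standing in for a real chat API, and the two items are invented placeholders, not actual IRI or BES content.

```python
# Minimal sketch of administering Likert-scale questionnaire items to an LLM.
# `query_llm` is a hypothetical placeholder for a real chat-completion call;
# the items are invented placeholders, not actual IRI or BES items.
import re
from statistics import mean

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat API; replace with a real client."""
    return "3"  # stubbed reply so the sketch runs end to end

ITEMS = [  # placeholder wording only
    "I often imagine how other people feel in their situation.",
    "Seeing someone upset tends to make me upset as well.",
]

def administer(items: list[str]) -> float:
    scores = []
    for item in items:
        prompt = (
            "Rate your agreement with the statement on a 1-5 scale, "
            "answering with a single digit only.\nStatement: " + item
        )
        reply = query_llm(prompt)
        match = re.search(r"[1-5]", reply)  # parse the first digit found
        if match:
            scores.append(int(match.group()))
    return mean(scores)  # scale score, to compare against human norms

print(administer(ITEMS))
```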
Citations: 0
“What is the latest news, Avatar Pavel?” - AI assistants in transformation processes of metaverse
Pub Date : 2025-11-01 DOI: 10.1016/j.chbah.2025.100225
Vaclav Moravec, Beata Gavurova, Martin Rigelsky
The main goal of the study was to examine and evaluate the relationships between public attitudes towards AI avatars and respondents' socio-demographic characteristics, fields of media consumption, and ideological attitudes, in order to reveal as-yet-unexplored adoption perspectives of AI avatars and their strong economic and social potential in the metaverse. Data were collected from a sample of 1250 respondents aged 18 and over between 2 April 2025 and 9 April 2025. The research used an AI avatar experimentally developed by the start-up company The MAMA AI.
The outcomes of the descriptive analysis confirmed that the AI news avatar Pavel was perceived neutrally to slightly positively, but as impersonal, with respondents demonstrating a low willingness to accept him as a guide across the media. Respondents also evaluated the use of AI assistants most favorably in technical-service fields, but significantly more negatively in sensitive domains such as psychology or politics. The differences between respondent groups were most noticeable in the perception of the AI avatar as more or less human and intimate, especially between men and women. Conversely, media habits played a much larger role. The study confirmed the importance of investigating specific adoption factors related to media consumption, media habits, and ideological attitudes alongside socio-demographic factors, allowing us to understand the new adoption potential of AI avatars and the possibilities for its expansion.
Citations: 0
Nonlinear transformation of probabilities by large language models
Pub Date : 2025-10-31 DOI: 10.1016/j.chbah.2025.100227
Arend Hintze, Charu Bisht, Jory Schossau, Ralph Hertwig
Large Language Models (LLMs) such as ChatGPT and Claude demonstrate impressive abilities to generate meaningful text and mimic human-like responses. While they undoubtedly can boost human performance, there is also the risk that uninstructed users rely on them for direct advice without critical distance. A case in point is advice on economic choice. Choice tasks often involve probabilistic outcomes. In these tasks, human choice has been shown to diverge systematically from rational, that is, linear, weighting of probabilities: it reveals an inverse S-shaped weighting pattern in description-based choice (i.e., overweighting of small probabilities and underweighting of large ones) and an S-shaped weighting pattern in experience-based choice. We investigate how LLMs' choices transform probabilities in simple economic tasks involving a sure outcome and a simple lottery with two probabilistic outcomes. LLMs' choices most often do not yield an inverse S-shaped probability weighting pattern; instead, they display distinct nonlinearities in probabilities. Some models exhibited risk-seeking behavior, others a strong recency bias, and the more accurate models underweighted small and overweighted large probabilities, resembling the weighting patterns of decisions from experience rather than from description. These findings raise concerns about the quality of the advice users would receive on economic choice from LLMs, highlighting the necessity of using LLMs critically in decision-making contexts.
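The inverse S-shaped pattern referenced above is commonly modeled with the Tversky-Kahneman (1992) probability weighting function, w(p) = p^γ / (p^γ + (1 − p)^γ)^(1/γ). The sketch below evaluates it to show overweighting of small and underweighting of large probabilities; using this particular functional form, and γ = 0.61, is an assumption drawn from the wider decision-making literature, not from the paper itself.

```python
# Tversky-Kahneman (1992) probability weighting function:
#   w(p) = p^g / (p^g + (1 - p)^g)^(1/g)
# For g < 1 it is inverse S-shaped: small probabilities are overweighted,
# large ones underweighted. g = 0.61 is a common estimate from the
# literature, used here only as an illustrative assumption.

def w(p: float, gamma: float = 0.61) -> float:
    num = p ** gamma
    den = (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)
    return num / den

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.2f} -> w(p) = {w(p):.3f}")
# p = 0.01 -> w(p) ~ 0.055 (overweighted); p = 0.90 -> w(p) ~ 0.712 (underweighted)
```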
Citations: 0
Determinants of self-reported and behavioral trust in an AI advisor within a cooperative problem-solving game
Pub Date : 2025-10-30 DOI: 10.1016/j.chbah.2025.100235
Simon Schreibelmayr, Martina Mara
The widespread adoption of artificially intelligent advisory systems in everyday decision-making situations draws attention to the topic of user trust. Based on psychological theories of trust formation, several key determinants of Trust in Automation (TiA) have been proposed, though systematic empirical validation remains limited. To test them under highly controlled conditions, we implemented an immersive Virtual Reality trust game in which 165 participants solved riddles together with a voice-based AI assistant, evaluated it along multiple theoretically derived dimensions, and indicated how much they would rely on its advice. Largely consistent with the TiA model by Körber (2019), we found perceived system competence, understandability, assumed intentions of developers, and participants’ individual trust propensity to significantly predict user trust in the AI advisor, with the first having the largest influence. Additionally, familiarity moderated the relation between perceived system competence and trust. This model, derived from subjective trust measures (self-report scales), was then re-evaluated using behavioral reliance (i.e., the number of accepted in-game AI recommendations) as the outcome variable. Theoretical, empirical, and practical implications of the results are discussed.
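The reported moderation, familiarity changing the strength of the competence-trust link, corresponds to an interaction term in a regression model. The sketch below is illustrative only: the data are simulated and the coefficients invented; it simply shows how such a moderation test is typically set up.

```python
# Minimal sketch of testing moderation: does familiarity change the slope of
# perceived competence on trust? Simulated data; coefficients are invented.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(3)
n = 165  # sample size reported in the abstract
df = pd.DataFrame({
    "competence": rng.uniform(1, 7, n),
    "familiarity": rng.uniform(1, 7, n),
})
# Assumed pattern: the competence-trust slope grows with familiarity.
df["trust"] = (
    0.4 * df["competence"]
    + 0.1 * df["familiarity"]
    + 0.08 * df["competence"] * df["familiarity"]
    + rng.normal(0, 1, n)
)

fit = ols("trust ~ competence * familiarity", data=df).fit()
print(fit.params)  # the competence:familiarity term carries the moderation
```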
Citations: 0
Culturally responsive AI chatbots: From framework to field evidence
Pub Date : 2025-10-28 DOI: 10.1016/j.chbah.2025.100224
Vik Naidoo, Karman Kaur Chadha
As AI systems become part of everyday life around the world, their failure to recognise and respond to cultural differences can erode trust, reduce engagement, and undermine legitimacy. This paper introduces the Culturally Responsive Artificial Intelligence (Chatbot) Framework (CRAIF-C), a practical, modular approach to building AI chatbots that understand and respect cultural diversity. CRAIF-C is novel in that it operationalises cultural responsiveness across the entire AI lifecycle, combining domain-specific technical methods with validated measurement tools and multi-context empirical testing. It addresses persistent limitations of earlier approaches, such as Value-Sensitive Design or Participatory AI, which often remain conceptual, sector-bound, or late-stage interventions. CRAIF-C works across four key domains: Enculturation, Adaptive Interaction, Explainability & Transparency, and Governance & Accountability. The framework's effectiveness is demonstrated through four complementary studies, which consistently show that AI chatbot systems using CRAIF-C achieve meaningful gains in cultural fit, natural communication, clear explanations, user trust, and sustained engagement. By incorporating cultural sensitivity into the core of AI chatbot design, CRAIF-C provides a roadmap for creating technology that is both technically capable, socially aware, ethically robust, and globally adaptable.
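One way to make a modular, lifecycle-spanning framework like CRAIF-C concrete is as a per-domain audit checklist. The sketch below is our illustrative reading of the four domains, not an implementation from the paper; the criterion strings are invented examples.

```python
# Illustrative sketch only: representing the four CRAIF-C domains as a
# checklist a chatbot project could audit itself against. The criterion
# strings are invented examples, not taken from the framework itself.
from dataclasses import dataclass, field

@dataclass
class Domain:
    name: str
    criteria: list[str] = field(default_factory=list)
    satisfied: set[str] = field(default_factory=set)

    def coverage(self) -> float:
        return len(self.satisfied) / len(self.criteria) if self.criteria else 0.0

craif_c = [
    Domain("Enculturation", ["culturally grounded training data audited"]),
    Domain("Adaptive Interaction", ["register adapts to user locale"]),
    Domain("Explainability & Transparency", ["responses disclose AI identity"]),
    Domain("Governance & Accountability", ["escalation path for cultural harm"]),
]
craif_c[0].satisfied.add("culturally grounded training data audited")

for d in craif_c:
    print(f"{d.name}: {d.coverage():.0%} of checks satisfied")
```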
Citations: 0
Development and validation of the generative AI engagement scale
Pub Date : 2025-10-27 DOI: 10.1016/j.chbah.2025.100221
Da-Wei Zhang, Jia Yue Tan, Yu Yang Chew, Lisha Hew, Jia Yee Choo
As generative AI becomes more integrated into everyday life, understanding the behavioral impact of generative AI usage becomes increasingly important. However, research lacks validated tools for capturing both the frequency and quality of generative AI use. This study presents the Generative AI Engagement Scale (GAIES), a multidimensional instrument that was developed following best practices in scale construction and validation. GAIES consists of two subscales: the Use Frequency scale, which measures how often users interact with generative AI for self-interested and task-oriented purposes, and the Interaction Style scale, which assesses how users interact with generative AI through Questioningness, Expressiveness, and Preciseness. This study included 414 participants. Several psychometric evaluations were involved, including classical test theory, exploratory and confirmatory factor analyses, and item response theory. The subscales showed strong internal consistency, a clear factor structure, and a good fit. Besides validating GAIES, we demonstrated its practical utility through two case studies. An analysis of a structural equation model revealed that predictors from the Unified Theory of Acceptance and Use of Technology explained Self-interest- and Task-oriented usage differentially, indicating the predictability of the scale. Further, latent profile analysis revealed four distinct user subgroups, demonstrating the usefulness of the scale in identifying meaningful patterns of engagement. These findings establish GAIES as a psychometrically and theoretically sound method of measuring generative AI engagement. A key contribution of GAIES is its ability to go beyond generic usage metrics and offer a foundation for future research into the behavioral implications of generative AI usage.
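Internal consistency, one of the psychometric checks mentioned above, is conventionally quantified with Cronbach's alpha. The sketch below implements the standard formula and applies it to simulated item responses; the six items and their response process are invented for illustration.

```python
# Cronbach's alpha for a set of Likert items (standard formula):
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
# The simulated responses below are illustrative only.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(4)
latent = rng.normal(0, 1, (414, 1))    # shared trait; n matches the abstract
noise = rng.normal(0, 0.8, (414, 6))   # six hypothetical items
responses = np.clip(np.round(4 + latent + noise), 1, 7)
print(f"alpha = {cronbach_alpha(responses):.2f}")  # high value -> consistent items
```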
Citations: 0