
Latest publications in Computers in Human Behavior: Artificial Humans

Homogenizing effect of large language models (LLMs) on creative diversity: An empirical comparison of human and ChatGPT writing
Pub Date: 2025-12-01 Epub Date: 2025-09-15 DOI: 10.1016/j.chbah.2025.100207
Kibum Moon, Adam E. Green, Kostadin Kushlev
Generative AI systems, especially Large Language Models (LLMs) such as ChatGPT, have recently emerged as significant contributors to creative processes. While LLMs can produce creative content that might be as good as or even better than human-created content, their widespread use risks reducing creative diversity across groups of people. In the present research, we aimed to quantify this homogenizing effect of LLMs on creative diversity, not only at the individual level but also at the collective level. Across three preregistered studies, we analyzed 2,200 college admissions essays. Using a novel measure—the diversity growth rate—we showed that each additional human-written essay contributed more new ideas than did each additional GPT-4 essay. Notably, this difference became more pronounced as more essays were included in the analysis and persisted despite efforts to enhance AI-generated content through both prompt and parameter modifications. Overall, our findings suggest that, despite their potential to enhance individual creativity, the widespread use of LLMs could diminish the collective diversity of creative ideas.
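The diversity growth rate is only named here, not defined; as a rough illustration of the intuition, suppose each essay has been coded into a set of idea labels (the coding scheme, toy data, and function names below are hypothetical, not the paper's). The pooled count of distinct ideas then grows with each added essay, and the average number of new ideas per essay can be compared between human-written and GPT-4 pools:

```python
def cumulative_unique_ideas(essays):
    """Running count of distinct idea labels as essays are added one by one."""
    seen, counts = set(), []
    for ideas in essays:
        seen.update(ideas)
        counts.append(len(seen))
    return counts

def mean_growth_rate(essays):
    """Average number of new ideas contributed per additional essay."""
    return cumulative_unique_ideas(essays)[-1] / len(essays)

# Toy pools: the human essays spread across more distinct ideas than the model's.
human_essays = [{"travel", "loss"}, {"identity"}, {"music", "family"}]
gpt_essays = [{"travel", "loss"}, {"travel"}, {"loss", "identity"}]

print(mean_growth_rate(human_essays))  # 5 ideas / 3 essays ≈ 1.67
print(mean_growth_rate(gpt_essays))    # 3 ideas / 3 essays = 1.0
```

On these toy pools the human essays keep introducing ideas the pool has not yet seen, while the model essays mostly revisit earlier ones, which is the pattern the abstract reports at scale.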
{"title":"Homogenizing effect of large language models (LLMs) on creative diversity: An empirical comparison of human and ChatGPT writing","authors":"Kibum Moon,&nbsp;Adam E. Green,&nbsp;Kostadin Kushlev","doi":"10.1016/j.chbah.2025.100207","DOIUrl":"10.1016/j.chbah.2025.100207","url":null,"abstract":"<div><div>Generative AI systems, especially Large Language Models (LLMs) such as ChatGPT, have recently emerged as significant contributors to creative processes. While LLMs can produce creative content that might be as good as or even better than human-created content, their widespread use risks reducing creative diversity across groups of people. In the present research, we aimed to quantify this homogenizing effect of LLMs on creative diversity, not only at the individual level but also at the collective level. Across three preregistered studies, we analyzed 2,200 college admissions essays. Using a novel measure—the diversity growth rate—we showed that each additional human-written essay contributed more new ideas than did each additional GPT-4 essay. Notably, this difference became more pronounced as more essays were included in the analysis and persisted despite efforts to enhance AI-generated content through both prompt and parameter modifications. 
Overall, our findings suggest that, despite their potential to enhance individual creativity, the widespread use of LLMs could diminish the collective diversity of creative ideas.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100207"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Whose agent are you? Relational norms shape expectation from algorithmic and human advisors in social decisions
Pub Date: 2025-12-01 Epub Date: 2025-10-10 DOI: 10.1016/j.chbah.2025.100218
Lior Gazit, Ofer Arazy, Uri Hertz
As technology companies develop AI agents designed to function as friends, therapists, and personal advisors, a fundamental question arises: can algorithms fulfill these intimate social roles? Relational Models Theory (RMT) suggests that relationships shape normative expectations in social decisions. Our research examines the perceived relationship between human/algorithmic advisors and advisee. Across two experiments (N = 492), participants reported their expectations from advisors that recommended splitting money between the advisee and an unknown other. Participants expected algorithmic advisors to exhibit higher consistency and higher sensitivity to others' payoffs, even when this resulted in smaller gains for the advisee, reflecting expectations of institutional fairness rather than personal favoritism. In contrast, participants anticipated that human advisors would prioritize their own welfare, consistent with personal relational norms. Seeking to validate that relational norms indeed drive expectations, in a follow-up experiment, we framed advisors as either "Institutional" or "Personal". Participants expected both human and algorithmic advisors to show higher sensitivity to others' payoffs and greater consistency when framed as Institutional, in line with RMT. However, regardless of framing, participants expected algorithmic advisors to exhibit higher sensitivity to others’ payoffs and greater consistency than the expectations from human advisors. Our findings extend Human-AI interaction literature by showing that people apply different normative standards to algorithmic versus human advisors. Results suggest that while relational framing can influence perceptions, attempts to position AI as replacements for humans must account for the persistent tendency to view algorithms through an institutional lens.
{"title":"Whose agent are you? Relational norms shape expectation from algorithmic and human advisors in social decisions","authors":"Lior Gazit ,&nbsp;Ofer Arazy ,&nbsp;Uri Hertz","doi":"10.1016/j.chbah.2025.100218","DOIUrl":"10.1016/j.chbah.2025.100218","url":null,"abstract":"<div><div>As technology companies develop AI agents designed to function as friends, therapists, and personal advisors, a fundamental question arises: can algorithms fulfill these intimate social roles? Relational Models Theory (RMT) suggests that relationships shape normative expectations in social decisions. Our research examines the perceived relationship between human/algorithmic advisors and advisee. Across two experiments (N = 492), participants reported their expectations from advisors that recommended splitting money between the advisee and an unknown other. Participants expected algorithmic advisors to exhibit higher consistency and higher sensitivity to others' payoffs, even when this resulted in smaller gains for the advisee, reflecting expectations of institutional fairness rather than personal favoritism. In contrast, participants anticipated that human advisors would prioritize their own welfare, consistent with personal relational norms. Seeking to validate that relational norms indeed drive expectations, in a follow-up experiment, we framed advisors as either \"Institutional\" or \"Personal\". Participants expected both human and algorithmic advisors to show higher sensitivity to others' payoffs and greater consistency when framed as Institutional, in line with RMT. However, regardless of framing, participants expected algorithmic advisors to exhibit higher sensitivity to others’ payoffs and greater consistency than the expectations from human advisors. Our findings extend Human-AI interaction literature by showing that people apply different normative standards to algorithmic versus human advisors. 
Results suggest that while relational framing can influence perceptions, attempts to position AI as replacements for humans must account for the persistent tendency to view algorithms through an institutional lens.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100218"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145320647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Why human mistakes hurt more? Emotional responses in human-AI errors
Pub Date: 2025-12-01 Epub Date: 2025-11-19 DOI: 10.1016/j.chbah.2025.100238
Ying Qin, Wanhui Zhou, Bu Zhong
Understanding user responses to AI versus human errors is crucial, as they shape trust, acceptance, and interaction outcomes. This study investigates the emotional dynamics of human-AI interactions by examining how agent identity (human vs. AI) and error severity (low vs. high) influence negative emotional reactions. Using a 2 × 2 factorial design (N = 250), the findings reveal that human agents consistently elicit stronger negative emotions than AI agents, regardless of error severity. Moreover, perceived experience moderates this relationship under specific conditions: individuals who view AI as less experienced than humans exhibit stronger negative emotions toward human errors, while this effect diminishes when AI is perceived as having higher experience. However, perceived agency does not significantly influence emotional responses. These findings highlight the critical role of agent identity and perceived experience in shaping emotional reactions to errors, adding insights into the dynamics of human-AI interactions. This research shows that developing effective AI systems requires managing user emotional responses and trust, in which perceived experience and competency play pivotal roles in adoption. The findings can guide the design of AI systems that adjust user expectations and emotional responses in accordance with the AI's perceived level of experience.
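The 2 × 2 design crosses agent identity with error severity, so the key quantities are cell means: the agent main effect averages the human-minus-AI gap over both severity levels, and the interaction contrast asks whether that gap widens as severity increases. A minimal sketch of those two contrasts, with invented scores rather than the study's data:

```python
def cell_mean(scores):
    return sum(scores) / len(scores)

def agent_main_effect(cells):
    """Human minus AI negative-emotion means, averaged over error severity."""
    human = (cell_mean(cells[("human", "low")]) + cell_mean(cells[("human", "high")])) / 2
    ai = (cell_mean(cells[("ai", "low")]) + cell_mean(cells[("ai", "high")])) / 2
    return human - ai

def interaction_contrast(cells):
    """How much the human-vs-AI gap grows from low- to high-severity errors."""
    gap_high = cell_mean(cells[("human", "high")]) - cell_mean(cells[("ai", "high")])
    gap_low = cell_mean(cells[("human", "low")]) - cell_mean(cells[("ai", "low")])
    return gap_high - gap_low

# Hypothetical negative-emotion ratings per condition cell.
cells = {
    ("human", "low"): [4, 5, 5],
    ("human", "high"): [6, 7, 6],
    ("ai", "low"): [3, 3, 4],
    ("ai", "high"): [5, 4, 5],
}
print(agent_main_effect(cells))  # positive: humans elicit stronger negative emotions
```

In the reported pattern the main effect is positive at both severity levels while the interaction contrast stays near zero, matching "regardless of error severity".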
{"title":"Why human mistakes hurt more? Emotional responses in human-AI errors","authors":"Ying Qin,&nbsp;Wanhui Zhou,&nbsp;Bu Zhong","doi":"10.1016/j.chbah.2025.100238","DOIUrl":"10.1016/j.chbah.2025.100238","url":null,"abstract":"<div><div>Understanding user responses to AI versus human errors is crucial, as they shape trust, acceptance, and interaction outcomes. This study investigates the emotional dynamics of human-AI interactions by examining how agent identity (human vs. AI) and error severity (low vs. high) influence negative emotional reactions. Using a 2 × 2 factorial design (<em>N</em> = 250), the findings reveal that human agents consistently elicit stronger negative emotions than AI agents, regardless of error severity. Moreover, perceived experience moderates this relationship under specific conditions: individuals who view AI less experienced than humans exhibit stronger negative emotions toward human errors, while this effect diminishes when AI is perceived as having higher experience. However, perceived agency does not significantly influence emotional responses. These findings highlight the critical role of agent identity and perceived experience in shaping emotional reactions to errors, adding insights into the dynamics of human-AI interactions. This research shows that developing effective AI systems needs to manage user emotional responses and trust, in which perceived experience and competency play pivotal roles in adoption. 
The findings can guide the design of AI systems that adjust user expectations and emotional responses in accordance with the AI's perceived level of experience.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100238"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The early wave of ChatGPT research: A review and future agenda
Pub Date: 2025-12-01 Epub Date: 2025-10-04 DOI: 10.1016/j.chbah.2025.100213
Peter André Busch, Geir Inge Hausvik, Jeppe Agger Nielsen
Researchers and practitioners are increasingly engaged in discussions about the hopes and fears of artificial intelligence (AI). In this article, we critically examine the early scholarly response to one prominent form of generative and conversational AI: ChatGPT. The launch of ChatGPT has sparked a surge in research, resulting in a fast-growing but fragmented body of literature. Against this backdrop, we undertook a systematic literature review of 192 empirical articles about ChatGPT to examine, synthesize, and evaluate the foci and gaps in this early wave of research to capture the dominating and immediate scholarly reactions to ChatGPT's release. Our analytical focus covered the following main aspects: perspectives on the purpose, usage, attitudes, and impacts of ChatGPT, as well as the theories and methods scholars apply in studying ChatGPT. Most studies in our sample focus on performance tests of ChatGPT, highlighting its strengths in remembering, understanding, and analyzing content, while revealing limitations in its capacity to generate novel ideas and its tendency to hallucinate. Although the initial wave of ChatGPT research has generated valuable first insights, much of this early research remains atheoretical, descriptive, and narrowly scoped, with limited attention to broader social, ethical, and institutional implications. These patterns reflect both the rapid publication pace and the early stage of scholarly engagement with this emerging technology. In response, we propose a conceptual model that maps key focus areas of ChatGPT research and suggest ways of strengthening ChatGPT research by proposing a research agenda aimed at advancing more theoretically informed, contextually grounded, and socially responsive studies of generative and conversational AI.
{"title":"The early wave of ChatGPT research: A review and future agenda","authors":"Peter André Busch ,&nbsp;Geir Inge Hausvik ,&nbsp;Jeppe Agger Nielsen","doi":"10.1016/j.chbah.2025.100213","DOIUrl":"10.1016/j.chbah.2025.100213","url":null,"abstract":"<div><div>Researchers and practitioners are increasingly engaged in discussions about the hopes and fears of artificial intelligence (AI). In this article, we critically examine the early scholarly response to one prominent form of generative and conversational AI: ChatGPT. The launch of ChatGPT has sparked a surge in research, resulting in a fast-growing but fragmented body of literature. Against this backdrop, we undertook a systematic literature review of 192 empirical articles about ChatGPT to examine, synthesize, and evaluate the foci and gaps in this early wave of research to capture the dominating and immediate scholarly reactions to ChatGPT's release. Our analytical focus covered the following main aspects: perspectives on the purpose, usage, attitudes, and impacts of ChatGPT, as well as the theories and methods scholars apply in studying ChatGPT. Most studies in our sample focus on performance tests of ChatGPT, highlighting its strengths in remembering, understanding, and analyzing content, while revealing limitations in its capacity to generate novel ideas and its hallucination habit. Although the initial wave of ChatGPT research has generated valuable first insights, much of this early research remains a-theoretical, descriptive, and narrowly scoped, with limited attention to broader social, ethical, and institutional implications. These patterns reflect both the rapid publication pace and the early stage of scholarly engagement with this emerging technology. 
In response, we propose a conceptual model that maps key focus areas of ChatGPT research and suggest ways of strengthening ChatGPT research by proposing a research agenda aimed at advancing more theoretically informed, contextually grounded, and socially responsive studies of generative and conversational AI.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100213"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The threat of synthetic harmony: The effects of AI vs. human origin beliefs on listeners' cognitive, emotional, and physiological responses to music
Pub Date: 2025-12-01 Epub Date: 2025-09-05 DOI: 10.1016/j.chbah.2025.100205
Rohan L. Dunham, Gerben A. van Kleef, Eftychia Stamkou
People generally evaluate music less favourably if they believe it is created by artificial intelligence (AI) rather than humans. But the psychological mechanisms underlying this tendency remain unclear. Prior research has relied entirely on self-reports that are vulnerable to bias. This leaves open the question as to whether negative reactions are a reflection of motivated reasoning – a controlled, cognitive process in which people justify their scepticism about AI's creative capacity – or whether they stem from deeper, embodied feelings of threat to human creative uniqueness manifested physiologically. We address this question across two lab-in-field studies, measuring participants' self-reported and physiological responses to the same piece of music framed either as having AI or human origins. Study 1 (N = 50) revealed that individuals in the AI condition appreciated music less, reported less intense emotions, and experienced decreased parasympathetic nervous system activity as compared to those in the human condition. Study 2 (N = 372) showed that these effects were more pronounced among individuals who more strongly endorsed the belief that creativity is uniquely human, and that this could largely be explained by the perceived threat posed by AI. Together, these findings suggest that unfavourable responses to AI-generated music are not driven solely by controlled cognitive justifications but also by automatic, embodied threat reactions in response to creative AI. They suggest that strategies addressing perceived threats posed by AI may be key to fostering more harmonious human-AI collaboration and acceptance.
{"title":"The threat of synthetic harmony: The effects of AI vs. human origin beliefs on listeners' cognitive, emotional, and physiological responses to music","authors":"Rohan L. Dunham,&nbsp;Gerben A. van Kleef,&nbsp;Eftychia Stamkou","doi":"10.1016/j.chbah.2025.100205","DOIUrl":"10.1016/j.chbah.2025.100205","url":null,"abstract":"<div><div>People generally evaluate music less favourably if they believe it is created by artificial intelligence (AI) rather than humans. But the psychological mechanisms underlying this tendency remain unclear. Prior research has relied entirely on self-reports that are vulnerable to bias. This leaves open the question as to whether negative reactions are a reflection of motivated reasoning – a controlled, cognitive process in which people justify their scepticism about AI's creative capacity – or whether they stem from deeper, embodied feelings of threat to human creative uniqueness manifested physiologically. We address this question across two lab-in-field studies, measuring participants' self-reported and physiological responses to the same piece of music framed either as having AI or human origins. Study 1 (<em>N</em> = 50) revealed that individuals in the AI condition appreciated music less, reported less intense emotions, and experienced decreased parasympathetic nervous system activity as compared to those in the human condition. Study 2 (<em>N</em> = 372) showed that these effects were more pronounced among individuals who more strongly endorsed the belief that creativity is uniquely human, and that this could largely be explained by the perceived threat posed by AI. Together, these findings suggest that unfavourable responses to AI-generated music are not driven solely by controlled cognitive justifications but also by automatic, embodied threat reactions in response to creative AI. 
They suggest that strategies addressing perceived threats posed by AI may be key to fostering more harmonious human-AI collaboration and acceptance.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100205"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145020444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Determinants of self-reported and behavioral trust in an AI advisor within a cooperative problem-solving game
Pub Date: 2025-12-01 Epub Date: 2025-10-30 DOI: 10.1016/j.chbah.2025.100235
Simon Schreibelmayr, Martina Mara
The widespread adoption of artificially intelligent advisory systems in everyday decision-making situations draws attention to the topic of user trust. Based on psychological theories of trust formation, several key determinants of Trust in Automation (TiA) have been proposed, though systematic empirical validation remains limited. To test them under highly controlled conditions, we implemented an immersive Virtual Reality trust game in which 165 participants solved riddles together with a voice-based AI assistant, evaluated it along multiple theoretically derived dimensions, and indicated how much they would rely on its advice. Largely consistent with the TiA model by Körber (2019), we found perceived system competence, understandability, assumed intentions of developers, and participants' individual trust propensity to significantly predict user trust in the AI advisor, with perceived competence having the largest influence. Additionally, familiarity moderated the relation between perceived system competence and trust. This model, derived from subjective trust measures (self-report scales), was then re-evaluated using behavioral reliance (i.e., the number of accepted in-game AI recommendations) as the outcome variable. Theoretical, empirical, and practical implications of the results are discussed.
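Since the model was re-evaluated with behavioral reliance as the outcome, a natural first check is whether the self-report trust scale tracks the count of accepted in-game recommendations at all, e.g. via a Pearson correlation. A stdlib-free sketch (the sample data below are invented for illustration, not the study's):

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical sample: 1-5 self-reported trust vs. accepted AI recommendations.
trust_scores = [2.0, 3.5, 4.0, 4.5, 1.5, 3.0]
accepted_advice = [1, 4, 5, 6, 0, 2]

print(round(pearson_r(trust_scores, accepted_advice), 2))  # → 0.99 on this toy sample
```

A strong correlation would suggest the self-report scales and the behavioral measure capture overlapping variance; the study's actual re-evaluation fits the full set of TiA determinants against reliance rather than a single bivariate check.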
{"title":"Determinants of self-reported and behavioral trust in an AI advisor within a cooperative problem-solving game","authors":"Simon Schreibelmayr,&nbsp;Martina Mara","doi":"10.1016/j.chbah.2025.100235","DOIUrl":"10.1016/j.chbah.2025.100235","url":null,"abstract":"<div><div>The widespread adoption of artificially intelligent advisory systems in everyday decision-making situations draws attention to the topic of user trust. Based on psychological theories of trust formation, several key determinants of Trust in Automation (TiA) have been proposed, though systematic empirical validation remains limited. To test them under highly controlled conditions, we implemented an immersive Virtual Reality trust game in which 165 participants solved riddles together with a voice-based AI assistant, evaluated it along multiple theoretically derived dimensions, and indicated how much they would rely on its advice. Largely consistent with the TiA model by Körber (2019), we found perceived system competence, understandability, assumed intentions of developers, and participants’ individual trust propensity to significantly predict user trust in the AI advisor, with the first having the largest influence. Additionally, familiarity moderated the relation between perceived system competence and trust. This model, derived from subjective trust measures (self-report scales), was then re-evaluated using behavioral reliance (i.e., the number of accepted in-game AI recommendations) as the outcome variable. 
Theoretical, empirical, and practical implications of the results are discussed.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100235"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Navigating the human-AI divide: Boundary work in the age of generative AI
Pub Date: 2025-12-01 Epub Date: 2025-10-04 DOI: 10.1016/j.chbah.2025.100214
Young Ji Kim, Ceciley Xinyi Zhang, Chengyu Fang
Generative artificial intelligence (GenAI), such as ChatGPT, has recently attracted vast public attention for its remarkable ability to produce sophisticated, human-like content. As these technologies increasingly blur the boundaries between artificial and human intelligence, understanding how users perceive and manage this boundary becomes essential. Drawing on the concept of boundary work, this paper examines how GenAI users discursively and practically navigate the ontological boundaries between human intelligence and GenAI. Through a qualitative analysis of nine focus groups involving 45 college students from diverse academic backgrounds, this study identifies three types of human-GenAI boundaries: complementary, competitive, and co-evolving. Complementary boundaries highlight GenAI's supportive and instrumental role and competitive boundaries emphasize human superiority and concerns over GenAI's threats, while co-evolving boundaries acknowledge dynamic interplay and reflective collaboration between humans and GenAI. The paper contributes theoretically by demonstrating that human-machine boundaries are dynamic, multifaceted, and actively negotiated. Practically, it offers insights into user strategies and implications for responsible adoption of GenAI technologies in educational and organizational contexts.
{"title":"Navigating the human-AI divide: Boundary work in the age of generative AI","authors":"Young Ji Kim ,&nbsp;Ceciley Xinyi Zhang ,&nbsp;Chengyu Fang","doi":"10.1016/j.chbah.2025.100214","DOIUrl":"10.1016/j.chbah.2025.100214","url":null,"abstract":"<div><div>Generative artificial intelligence (GenAI), such as ChatGPT, has recently attracted vast public attention for its remarkable ability to produce sophisticated, human-like content. As these technologies increasingly blur the boundaries between artificial and human intelligence, understanding how users perceive and manage this boundary becomes essential. Drawing on the concept of boundary work, this paper examines how GenAI users discursively and practically navigate the ontological boundaries between human intelligence and GenAI. Through a qualitative analysis of nine focus groups involving 45 college students from diverse academic backgrounds, this study identifies three types of human-GenAI boundaries: <em>complementary, competitive, and co-evolving</em>. Complementary boundaries highlight GenAI's supportive and instrumental role and competitive boundaries emphasize human superiority and concerns over GenAI's threats, while co-evolving boundaries acknowledge dynamic interplay and reflective collaboration between humans and GenAI. The paper contributes theoretically by demonstrating that human-machine boundaries are dynamic, multifaceted, and actively negotiated. 
Practically, it offers insights into user strategies and implications for responsible adoption of GenAI technologies in educational and organizational contexts.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100214"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing intercultural sensitivity in large language models: A comparative study of GPT-3.5 and GPT-4 across eight languages
Pub Date : 2025-12-01 Epub Date: 2025-11-17 DOI: 10.1016/j.chbah.2025.100241
Yiwen Jin, Lies Sercu, Feng Guo
As large language models (LLMs) such as ChatGPT are increasingly used across cultures and languages, concerns have arisen about their ability to respond in culturally sensitive ways. This study evaluated the intercultural sensitivity of GPT-3.5 and GPT-4 using the Intercultural Sensitivity Scale (ISS) translated into eight languages. Each model completed ten randomized iterations of the 24-item ISS per language, and the results were analyzed using descriptive statistics and three-way ANOVA. GPT-4 achieved significantly higher intercultural sensitivity scores than GPT-3.5 across all dimensions, with “respect for cultural differences” scoring highest and “interaction confidence” lowest. Significant interactions were found between model version and language, and between model version and ISS dimensions, indicating that GPT-4's improvements vary by linguistic context. Nonetheless, the interaction between language and dimensions did not yield significant results. Future research should focus on increasing the amount of training data for the less spoken languages, as well as adding rich emotional and cultural background data to improve the model's understanding of cultural norms and nuances.
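The administration procedure described above — ten randomized iterations of the 24-item ISS per model-language condition, each iteration scored and then averaged — can be sketched in a few lines. This is a minimal sketch, not the study's code: the `model_rate` callable standing in for an LLM call, the placeholder item IDs, and the constant-rater example are all illustrative assumptions.

```python
import random
from statistics import mean

# Placeholder item IDs; the real ISS has 24 fixed items in each language.
ISS_ITEMS = [f"item_{i:02d}" for i in range(1, 25)]

def administer(model_rate, items, n_iter=10, seed=0):
    """Run n_iter randomized administrations of the scale and return the
    mean score per administration. model_rate(item) stands in for a call
    that elicits a 1-5 agreement rating from the model for one item."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n_iter):
        order = items[:]
        rng.shuffle(order)  # randomize item order on each iteration
        scores.append(mean(model_rate(item) for item in order))
    return scores

# Fabricated constant rater, for illustration only.
scores = administer(lambda item: 4, ISS_ITEMS)
print(len(scores), mean(scores))  # 10 per-iteration scores and their mean
```

Per-condition means produced this way (one per model-language pair) would then feed the descriptive statistics and the three-way ANOVA the abstract reports.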
{"title":"Assessing intercultural sensitivity in large language models: A comparative study of GPT-3.5 and GPT-4 across eight languages","authors":"Yiwen Jin ,&nbsp;Lies Sercu ,&nbsp;Feng Guo","doi":"10.1016/j.chbah.2025.100241","DOIUrl":"10.1016/j.chbah.2025.100241","url":null,"abstract":"<div><div>As large language models (LLMs) such as ChatGPT are increasingly used across cultures and languages, concerns have arisen about their ability to respond in culturally sensitive ways. This study evaluated the intercultural sensitivity of GPT-3.5 and GPT-4 using the Intercultural Sensitivity Scale (ISS) translated into eight languages. Each model completed ten randomized iterations of the 24-item ISS per language, and the results were analyzed using descriptive statistics and three-way ANOVA. GPT-4 achieved significantly higher intercultural sensitivity scores than GPT-3.5 across all dimensions, with “respect for cultural differences” scoring highest and “interaction confidence” lowest. Significant interactions were found between model version and language, and between model version and ISS dimensions, indicating that GPT-4's improvements vary by linguistic context. Nonetheless, the interaction between language and dimensions did not yield significant results. 
Future research should focus on increasing the amount of training data for the less spoken languages, as well as adding rich emotional and cultural background data to improve the model's understanding of cultural norms and nuances.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100241"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145618013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Digitally created body positivity: The effects of virtual influencers with different body types on viewer perceptions
Pub Date : 2025-12-01 Epub Date: 2025-10-27 DOI: 10.1016/j.chbah.2025.100231
Jiyeon Yeo, Jan-Philipp Stein
Exposure to idealized body imagery on social media has been linked to lower body satisfaction/appreciation, negative mood effects, and mental health risks. Serving as a potential counterforce to these severe issues, body-positive content creators advocate for broader conceptualizations of beauty, more inclusivity, and self-acceptance among social media users. Amidst this on-going discourse, hyper-realistic virtual influencers (VIs) have emerged as novel social agents—some reinforcing traditional beauty ideals and others promoting more diversity. Experiment 1 (N = 337) examined how VIs with different body types (larger-sized versus thin-ideal) influence women’s state body appreciation and perceptions of ideal body shapes. Experiment 2 (N = 462) further investigated whether VIs elicit user responses in a way comparable to human influencers, considering ontological distinctions and perceived self-similarity. Across both experiments, neither body type nor influencer type significantly influenced women’s body appreciation or body-related ideals. Whereas several proposed moderating variables did not result in significant findings, perceptions of self-similarity were ultimately found to play a meaningful role: Human influencers were perceived as more self-similar, and this perception was positively linked to body appreciation. Taken together, our mixed findings indicate that VIs may exert a weaker impact on young women’s body perceptions than expected—at least in the short term. As such, future research might benefit from focusing more on potential long-term effects.
{"title":"Digitally created body positivity: The effects of virtual influencers with different body types on viewer perceptions","authors":"Jiyeon Yeo,&nbsp;Jan-Philipp Stein","doi":"10.1016/j.chbah.2025.100231","DOIUrl":"10.1016/j.chbah.2025.100231","url":null,"abstract":"<div><div>Exposure to idealized body imagery on social media has been linked to lower body satisfaction/appreciation, negative mood effects, and mental health risks. Serving as a potential counterforce to these severe issues, body-positive content creators advocate for broader conceptualizations of beauty, more inclusivity, and self-acceptance among social media users. Amidst this on-going discourse, hyper-realistic virtual influencers (VIs) have emerged as novel social agents—some reinforcing traditional beauty ideals and others promoting more diversity. Experiment 1 (<em>N</em> = 337) examined how VIs with different body types (larger-sized versus thin-ideal) influence women’s state body appreciation and perceptions of ideal body shapes. Experiment 2 (<em>N</em> = 462) further investigated whether VIs elicit user responses in a way comparable to human influencers, considering ontological distinctions and perceived self-similarity. Across both experiments, neither body type nor influencer type significantly influenced women’s body appreciation or body-related ideals. Whereas several proposed moderating variables did not result in significant findings, perceptions of self-similarity were ultimately found to play a meaningful role: Human influencers were perceived as more self-similar, and this perception was positively linked to body appreciation. Taken together, our mixed findings indicate that VIs may exert a weaker impact on young women’s body perceptions than expected—at least in the short term. 
As such, future research might benefit from focusing more on potential long-term effects.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100231"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145465776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluating the agreement between human preferences, GPT-4V and Gemini Pro Vision assessments: Can AI recognize what people might like?
Pub Date : 2025-12-01 Epub Date: 2025-10-27 DOI: 10.1016/j.chbah.2025.100234
Dino Krupić, Domagoj Matijević, Nenad Šuvak, Jurica Maltar, Domagoj Ševerdija
This study aims to introduce a methodology for assessing the agreement between AI and human ratings, specifically focusing on visual large language models (LLMs). This paper presents empirical findings on the alignment between ratings generated by GPT-4 Vision (GPT-4V) and Gemini Pro Vision with human subjective evaluations of environmental visuals. Using photographs of restaurant interior design and food, the study estimates the degree of agreement with human preferences. The intraclass correlation reveals that GPT-4V, unlike Gemini Pro Vision, achieves moderate agreement with participants’ general restaurant preferences. Similar results are observed for rating food photos. Additionally, there is good agreement in categorizing restaurants into low-cost, mid-range and exclusive categories based on interior quality. Finally, differences in ratings were observed at the subsample level based on age, gender, and socioeconomic status across the human sample and LLMs. The results of repeated-measures ANOVAs indicate varying degrees of alignment between humans and LLMs across different sociodemographic characteristics. Overall, GPT-4V currently demonstrates limited ability to provide meaningful ratings of visual stimuli compared to human ratings and performs better in this task compared to Gemini Pro Vision.
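Intraclass correlations come in several variants (one-way vs. two-way, single vs. average measures), and the abstract does not specify which form was used. As a rough illustration under that caveat, a one-way random-effects ICC(1) — one common variant, not necessarily the study's — can be computed for a set of targets each rated by the same number of raters:

```python
from statistics import mean

def icc1(ratings):
    """One-way random-effects ICC(1).

    ratings is a list of per-target rating lists, each of the same
    length k (raters per target). Returns (MSB - MSW) / (MSB + (k-1)*MSW),
    where MSB/MSW are the between- and within-target mean squares.
    """
    n = len(ratings)        # number of targets
    k = len(ratings[0])     # raters per target
    grand = mean(x for row in ratings for x in row)
    row_means = [mean(row) for row in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(ratings, row_means)
              for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect agreement between two raters over three targets:
print(icc1([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

With ratings of the restaurant or food photos as targets and the human panel plus a model as raters, a value near 1 would indicate strong agreement and a value near 0 none — the kind of moderate agreement the abstract reports for GPT-4V falls in between.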
{"title":"Evaluating the agreement between human preferences, GPT-4V and Gemini Pro Vision assessments: Can AI recognize what people might like?","authors":"Dino Krupić ,&nbsp;Domagoj Matijević ,&nbsp;Nenad Šuvak ,&nbsp;Jurica Maltar ,&nbsp;Domagoj Ševerdija","doi":"10.1016/j.chbah.2025.100234","DOIUrl":"10.1016/j.chbah.2025.100234","url":null,"abstract":"<div><div>This study aims to introduce a methodology for assessing the agreement between AI and human ratings, specifically focusing on visual large language models (LLMs). This paper presents empirical findings on the alignment between ratings generated by GPT-4 Vision (GPT-4V) and Gemini Pro Vision with human subjective evaluations of environmental visuals. Using photographs of restaurant interior design and food, the study estimates the degree of agreement with human preferences. The intraclass correlation reveals that GPT-4V, unlike Gemini Pro Vision, achieves moderate agreement with participants’ general restaurant preferences. Similar results are observed for rating food photos. Additionally, there is good agreement in categorizing restaurants into low-cost, mid-range and exclusive categories based on interior quality. Finally, differences in ratings were observed at the subsample level based on age, gender, and socioeconomic status across the human sample and LLMs. The results of repeated-measures ANOVAs indicate varying degrees of alignment between humans and LLMs across different sociodemographic characteristics. 
Overall, GPT-4V currently demonstrates limited ability to provide meaningful ratings of visual stimuli compared to human ratings and performs better in this task compared to Gemini Pro Vision.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100234"},"PeriodicalIF":0.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145465775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0