
Latest publications in Computers in Human Behavior: Artificial Humans

Nonlinear transformation of probabilities by large language models
Pub Date : 2025-10-31 DOI: 10.1016/j.chbah.2025.100227
Arend Hintze , Charu Bisht , Jory Schossau , Ralph Hertwig
Large Language Models (LLMs) such as ChatGPT and Claude demonstrate impressive abilities to generate meaningful text and mimic human-like responses. While they can undoubtedly boost human performance, there is also a risk that uninstructed users will rely on them for direct advice without critical distance. A case in point is advice on economic choice. Choice tasks often involve probabilistic outcomes, and in such tasks human choice has been shown to diverge systematically from rational, that is, linear, weighting of probabilities: it reveals an inverse S-shaped weighting pattern in description-based choice (overweighting of small probabilities and underweighting of large ones) and an S-shaped weighting pattern in experience-based choice. We investigate how LLMs' choices transform probabilities in simple economic tasks involving a sure outcome and a lottery with two probabilistic outcomes. LLMs' choices most often do not yield an inverse S-shaped probability weighting pattern; instead, they display distinct nonlinearities in their treatment of probabilities. Some models exhibited risk-seeking behavior, others a strong recency bias, and those that were more accurate underweighted small and overweighted large probabilities, resembling the weighting patterns of decisions from experience rather than from description. These findings raise concerns about the quality of the advice users would receive on economic choice from LLMs, highlighting the need to use LLMs critically in decision-making contexts.
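The inverse S-shaped weighting pattern that the abstract contrasts with LLM behavior is commonly modeled with the Tversky–Kahneman one-parameter weighting function; a minimal illustrative sketch (the default parameter 0.61 is the estimate from Tversky and Kahneman's 1992 paper, not a value from this study):

```python
def tk_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky-Kahneman probability weighting function.
    With gamma < 1 it is inverse S-shaped: small probabilities are
    overweighted and large probabilities underweighted."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

# Overweighting of a small probability, underweighting of a large one:
assert tk_weight(0.01) > 0.01
assert tk_weight(0.99) < 0.99
```

With gamma = 1 the function reduces to linear ("rational") weighting, which is the benchmark the abstract says both humans and LLMs deviate from.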
Citations: 0
Determinants of self-reported and behavioral trust in an AI advisor within a cooperative problem-solving game
Pub Date : 2025-10-30 DOI: 10.1016/j.chbah.2025.100235
Simon Schreibelmayr, Martina Mara
The widespread adoption of artificially intelligent advisory systems in everyday decision-making situations draws attention to the topic of user trust. Based on psychological theories of trust formation, several key determinants of Trust in Automation (TiA) have been proposed, though systematic empirical validation remains limited. To test them under highly controlled conditions, we implemented an immersive Virtual Reality trust game in which 165 participants solved riddles together with a voice-based AI assistant, evaluated it along multiple theoretically derived dimensions, and indicated how much they would rely on its advice. Largely consistent with the TiA model by Körber (2019), we found perceived system competence, understandability, assumed intentions of developers, and participants’ individual trust propensity to significantly predict user trust in the AI advisor, with the first having the largest influence. Additionally, familiarity moderated the relation between perceived system competence and trust. This model, derived from subjective trust measures (self-report scales), was then re-evaluated using behavioral reliance (i.e., the number of accepted in-game AI recommendations) as the outcome variable. Theoretical, empirical, and practical implications of the results are discussed.
Citations: 0
Culturally responsive AI chatbots: From framework to field evidence
Pub Date : 2025-10-28 DOI: 10.1016/j.chbah.2025.100224
Vik Naidoo , Karman Kaur Chadha
As AI systems become part of everyday life around the world, their failure to recognise and respond to cultural differences can erode trust, reduce engagement, and undermine legitimacy. This paper introduces the Culturally Responsive Artificial Intelligence (Chatbot) Framework (CRAIF-C), a practical, modular approach to building AI chatbots that understand and respect cultural diversity. CRAIF-C is novel in that it operationalises cultural responsiveness across the entire AI lifecycle, combining domain-specific technical methods with validated measurement tools and multi-context empirical testing. It addresses persistent limitations of earlier approaches, such as Value-Sensitive Design or Participatory AI, which often remain conceptual, sector-bound, or late-stage interventions. CRAIF-C works across four key domains: Enculturation, Adaptive Interaction, Explainability & Transparency, and Governance & Accountability. The framework's effectiveness is demonstrated through four complementary studies, which consistently show that AI chatbot systems using CRAIF-C achieve meaningful gains in cultural fit, natural communication, clear explanations, user trust, and sustained engagement. By incorporating cultural sensitivity into the core of AI chatbot design, CRAIF-C provides a roadmap for creating technology that is technically capable, socially aware, ethically robust, and globally adaptable.
Citations: 0
Development and validation of the generative AI engagement scale
Pub Date : 2025-10-27 DOI: 10.1016/j.chbah.2025.100221
Da-Wei Zhang, Jia Yue Tan, Yu Yang Chew, Lisha Hew, Jia Yee Choo
As generative AI becomes more integrated into everyday life, understanding the behavioral impact of generative AI usage becomes increasingly important. However, research lacks validated tools for capturing both the frequency and quality of generative AI use. This study presents the Generative AI Engagement Scale (GAIES), a multidimensional instrument that was developed following best practices in scale construction and validation. GAIES consists of two subscales: the Use Frequency scale, which measures how often users interact with generative AI for self-interested and task-oriented purposes, and the Interaction Style scale, which assesses how users interact with generative AI through Questioningness, Expressiveness, and Preciseness. This study included 414 participants. Several psychometric evaluations were involved, including classical test theory, exploratory and confirmatory factor analyses, and item response theory. The subscales showed strong internal consistency, a clear factor structure, and a good fit. Besides validating GAIES, we demonstrated its practical utility through two case studies. An analysis of a structural equation model revealed that predictors from the Unified Theory of Acceptance and Use of Technology explained Self-interest- and Task-oriented usage differentially, indicating the predictability of the scale. Further, latent profile analysis revealed four distinct user subgroups, demonstrating the usefulness of the scale in identifying meaningful patterns of engagement. These findings establish GAIES as a psychometrically and theoretically sound method of measuring generative AI engagement. A key contribution of GAIES is its ability to go beyond generic usage metrics and offer a foundation for future research into the behavioral implications of generative AI usage.
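The "strong internal consistency" reported for the GAIES subscales is conventionally quantified with Cronbach's alpha; a minimal stdlib sketch of that statistic (illustrative only, not the authors' analysis code):

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a scale, given one list of scores per item
    (all lists aligned over the same respondents)."""
    k = len(items)
    # Total scale score per respondent.
    total_scores = [sum(resp) for resp in zip(*items)]
    # Sum of per-item variances vs. variance of the total score.
    sum_item_var = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - sum_item_var / pvariance(total_scores))

# Perfectly correlated items yield alpha of 1.
assert abs(cronbach_alpha([[1, 2, 3], [1, 2, 3]]) - 1.0) < 1e-9
```

Values around 0.8 or higher are usually read as strong internal consistency, though the exact thresholds reported in the paper are not reproduced here.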
Citations: 0
Digitally created body positivity: The effects of virtual influencers with different body types on viewer perceptions
Pub Date : 2025-10-27 DOI: 10.1016/j.chbah.2025.100231
Jiyeon Yeo, Jan-Philipp Stein
Exposure to idealized body imagery on social media has been linked to lower body satisfaction/appreciation, negative mood effects, and mental health risks. Serving as a potential counterforce to these severe issues, body-positive content creators advocate for broader conceptualizations of beauty, greater inclusivity, and self-acceptance among social media users. Amidst this ongoing discourse, hyper-realistic virtual influencers (VIs) have emerged as novel social agents, some reinforcing traditional beauty ideals and others promoting greater diversity. Experiment 1 (N = 337) examined how VIs with different body types (larger-sized versus thin-ideal) influence women's state body appreciation and perceptions of ideal body shapes. Experiment 2 (N = 462) further investigated whether VIs elicit user responses in a way comparable to human influencers, considering ontological distinctions and perceived self-similarity. Across both experiments, neither body type nor influencer type significantly influenced women's body appreciation or body-related ideals. While several proposed moderating variables did not yield significant effects, perceptions of self-similarity were ultimately found to play a meaningful role: human influencers were perceived as more self-similar, and this perception was positively linked to body appreciation. Taken together, our mixed findings indicate that VIs may exert a weaker impact on young women's body perceptions than expected, at least in the short term. As such, future research might benefit from focusing more on potential long-term effects.
Citations: 0
Evaluating the agreement between human preferences, GPT-4V and Gemini Pro Vision assessments: Can AI recognize what people might like?
Pub Date : 2025-10-27 DOI: 10.1016/j.chbah.2025.100234
Dino Krupić , Domagoj Matijević , Nenad Šuvak , Jurica Maltar , Domagoj Ševerdija
This study aims to introduce a methodology for assessing the agreement between AI and human ratings, focusing specifically on visual large language models (LLMs). This paper presents empirical findings on the alignment of ratings generated by GPT-4 Vision (GPT-4V) and Gemini Pro Vision with human subjective evaluations of environmental visuals. Using photographs of restaurant interior design and food, the study estimates the degree of agreement with human preferences. The intraclass correlation reveals that GPT-4V, unlike Gemini Pro Vision, achieves moderate agreement with participants' general restaurant preferences. Similar results are observed for ratings of food photos. Additionally, there is good agreement in categorizing restaurants into low-cost, mid-range, and exclusive categories based on interior quality. Finally, differences in ratings were observed at the subsample level based on age, gender, and socioeconomic status across the human sample and the LLMs. The results of repeated-measures ANOVAs indicate varying degrees of alignment between humans and LLMs across sociodemographic characteristics. Overall, GPT-4V currently demonstrates a limited ability to provide meaningful ratings of visual stimuli compared to human ratings, though it performs better in this task than Gemini Pro Vision.
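One common form of the intraclass correlation used for rater agreement is ICC(2,1) in Shrout and Fleiss's taxonomy (two-way random effects, absolute agreement, single rater); a small pure-Python sketch of that variant, which is an assumption here since the abstract does not say which ICC form the authors computed:

```python
from statistics import mean

def icc2_1(ratings: list[list[float]]) -> float:
    """ICC(2,1), two-way random effects, absolute agreement, single rater
    (Shrout & Fleiss, 1979). Rows are rated targets, columns are raters
    (e.g., a human participant and an LLM)."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(x for row in ratings for x in row)
    row_means = [mean(row) for row in ratings]
    col_means = [mean(col) for col in zip(*ratings)]
    # Mean squares for targets (rows), raters (columns), and residual error.
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum(
        (ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
        for i in range(n)
        for j in range(k)
    )
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Identical ratings from both raters give perfect agreement.
assert abs(icc2_1([[1, 1], [2, 2], [3, 3]]) - 1.0) < 1e-9
```

Because ICC(2,1) measures absolute agreement, a rater that is perfectly consistent but systematically offset is penalized, which distinguishes it from a plain Pearson correlation.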
Citations: 0
Human nature in a virtual world: The attribution of mind perception to avatars
Pub Date : 2025-10-27 DOI: 10.1016/j.chbah.2025.100222
Komala Mazerant , Zeph M.C. van Berlo , Alexander P. Schouten , Lotte M. Willemsen
This study investigates how human resemblance in avatars shapes mind perception. Virtual worlds are often praised for their potential to transform how people collaborate, learn, and play. Yet this promise relies on our willingness to treat others in a genuinely human way. Mind perception theory defines humanness along two dimensions: agency (intentional action) and experience (capacity to feel). While prior work has examined mind perception across entities, little is known about whether this extends to avatars, particularly when individuals embody forms that differ in kind and in their degree of anatomical humanlikeness. Using a mixed-methods approach, 213 participants created 417 avatars and rated them on perceived agency and experience. Afterward, the avatars were content-analyzed to determine entity type and visual resemblance to human anatomy, distinguishing between sensory (e.g., eyes, skin) and motoric (e.g., limbs) human-like features. The results demonstrate that human and robot avatars were perceived as equally agentic, surpassing other avatar entity types, while human, animal, and fantasy avatars shared similar levels of experience. Moreover, sensory human-like features were more strongly associated with both agency and experience than motoric features. This may be due to the dual function of sensory features: signaling not only the capacity for action (e.g., speaking) but also serving as expressive cues of emotion (e.g., facial expressions). This study contributes theoretically by integrating mind perception theory with avatar research, advancing our understanding of how digital representations shape social cognition. In practice, the findings underscore the need for intentional avatar design, particularly regarding default representations.
Citations: 0
Towards social superintelligence? AI infers diverse psychological traits from text without specific training, outperforming human judges
Pub Date : 2025-10-27 DOI: 10.1016/j.chbah.2025.100228
Ariel Rosenfelder, Maor Daniel Levitin, Michael Gilead
Large Language Models (LLMs) have recently demonstrated impressive capabilities in domains requiring higher-order cognition. This study investigates whether LLMs can also perform a core social-cognitive function: forming predictive models of individuals' psychological traits from minimal input (“trait inference”). Extending earlier work that has focused almost exclusively on Big Five personality factors, we asked GPT-4 to anticipate responses on a battery of 30 validated scales spanning personality, affect, values, and interpersonal style. A total of 1,011 participants wrote short self-descriptive texts and completed the questionnaires. An LLM was tasked with predicting participants' questionnaire responses solely from their self-descriptions, without any task-specific training. Human judges attempted the same task, providing a direct benchmark. The LLM's predictions correlated with participants' self-reports (r = 0.35; disattenuated r = 0.41)—accuracy comparable to that typically observed among real-world friends and substantially higher than that of human judges (r = 0.20; disattenuated r = 0.23). Across scales, the performance of the LLM and human judges was moderately correlated. These findings highlight LLMs' emerging capacity for sophisticated social inference, opening new avenues for computational psychology while raising important ethical concerns about large-scale psychological profiling.
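The disattenuated coefficients quoted in the abstract follow Spearman's classical correction for attenuation, r_true = r_obs / sqrt(ρ_xx · ρ_yy), which adjusts an observed correlation for measurement unreliability. A minimal sketch of the arithmetic follows; the reliability values used here (≈0.85) are hypothetical, chosen only so the correction reproduces the reported numbers — the paper's actual reliability estimates are not given in this abstract:

```python
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation:
    r_true = r_observed / sqrt(rel_x * rel_y),
    where rel_x and rel_y are the reliabilities of the two measures."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical reliabilities (~0.85) that make the correction match the
# values reported in the abstract.
print(round(disattenuate(0.35, 0.85, 0.857), 2))  # LLM predictions  -> 0.41
print(round(disattenuate(0.20, 0.85, 0.857), 2))  # human judges    -> 0.23
```

The correction can only inflate the observed correlation (the denominator is at most 1), so the ordering of LLM vs. human accuracy is unchanged by it.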
Citations: 0
First interactions with generative chatbots shape local but not global sentiments about AI
Pub Date : 2025-10-27 DOI: 10.1016/j.chbah.2025.100223
Eva-Madeleine Schmidt , Clara Bersch , Nils Köbis , Jean-François Bonnefon , Iyad Rahwan , Mengchen Dong
As artificial intelligence (AI) chatbots become increasingly integrated into everyday life, it is important to understand how direct interaction with such systems shapes public sentiment toward AI more broadly. Leveraging a unique window in April 2023—when many individuals still had little or no experience with such systems—we combined experimental manipulation (chatbot exposure vs. no exposure) with natural variation in real-world AI usage. In a preregistered proof-of-concept experiment (N = 220), we investigated whether a short conversation with a GPT-3.5-based chatbot influenced participants' sentiments across multiple dimensions of AI perception. We assessed system-specific fear, user engagement, anthropomorphization, and potential spillover effects to other domains, including AI in medicine, recruitment and governance. Results show that direct interaction reduced fear and increased enjoyment of the chatbot itself, while fostering a more critical, realistic understanding of its abilities. However, spillover effects were limited: exposure led to reduced fear of AI in familiar, concrete domains (e.g., medical applications), but not in more abstract or speculative areas. Hope about AI's societal potential remained unaffected. Our findings highlight that sentiments toward AI are multidimensional and context dependent. Exposure to AI chatbots can shift immediate attitudes but does not necessarily generalize to broader AI perceptions, underscoring the need for more targeted engagement strategies in shaping public understanding and trust.
Citations: 0
Transforming the self: Individual-level changes arising from collaboration with generative AI
Pub Date : 2025-10-27 DOI: 10.1016/j.chbah.2025.100232
Siddharth Nandagopal
The rapid integration of Generative Artificial Intelligence (GenAI) into daily activities has prompted significant interest in understanding its impact on individuals. This paper addresses the critical gap in research concerning individual-level changes resulting from direct collaboration with GenAI systems. A novel theoretical framework is proposed, encompassing three primary constructs: Cognitive Dependency, Emotional Appraisal, and Behavioral Shift. These constructs are grounded in established theories such as Social Cognitive Theory, Cognitive Load Theory, and the Technology Acceptance Model, providing a comprehensive perspective on the mechanisms driving human transformation through GenAI collaboration. Empirical evidence is drawn from diverse case studies across education, professional environments, creative industries, social media, and the medical field, illustrating how increased cognitive dependency on GenAI leads to significant behavioral shifts, moderated by Emotional Appraisal. The analysis confirms the presence of feedback loops, where behavioral shifts further reinforce cognitive dependency, highlighting the sustained impact of GenAI on individuals. Key findings indicate that while GenAI enhances efficiency and creativity, it also poses risks such as skill degradation and reduced critical thinking. The implications extend to theoretical advancements in human-AI interaction research and practical applications for educators, organizations, and policymakers. Recommendations include integrating Artificial Intelligence literacy in education, developing balanced professional practices, and establishing ethical guidelines to mitigate biases and foster trust in GenAI systems. This paper underscores the necessity for ongoing research and ethical considerations to ensure that GenAI serves as a tool for human enhancement, promoting positive individual and societal outcomes.
Citations: 0
Journal
Computers in Human Behavior: Artificial Humans