
Latest articles from Computers in Human Behavior: Artificial Humans

Digitally created body positivity: The effects of virtual influencers with different body types on viewer perceptions
Pub Date: 2025-10-27 DOI: 10.1016/j.chbah.2025.100231
Jiyeon Yeo, Jan-Philipp Stein
Exposure to idealized body imagery on social media has been linked to lower body satisfaction/appreciation, negative mood effects, and mental health risks. Serving as a potential counterforce to these issues, body-positive content creators advocate for broader conceptualizations of beauty, greater inclusivity, and self-acceptance among social media users. Amidst this ongoing discourse, hyper-realistic virtual influencers (VIs) have emerged as novel social agents—some reinforcing traditional beauty ideals and others promoting more diversity. Experiment 1 (N = 337) examined how VIs with different body types (larger-sized versus thin-ideal) influence women’s state body appreciation and perceptions of ideal body shapes. Experiment 2 (N = 462) further investigated whether VIs elicit user responses comparable to those elicited by human influencers, considering ontological distinctions and perceived self-similarity. Across both experiments, neither body type nor influencer type significantly influenced women’s body appreciation or body-related ideals. Although several proposed moderating variables did not yield significant findings, perceptions of self-similarity were ultimately found to play a meaningful role: human influencers were perceived as more self-similar, and this perception was positively linked to body appreciation. Taken together, our mixed findings indicate that VIs may exert a weaker impact on young women’s body perceptions than expected—at least in the short term. As such, future research might benefit from focusing more on potential long-term effects.
Citations: 0
Evaluating the agreement between human preferences, GPT-4V and Gemini Pro Vision assessments: Can AI recognize what people might like?
Pub Date: 2025-10-27 DOI: 10.1016/j.chbah.2025.100234
Dino Krupić , Domagoj Matijević , Nenad Šuvak , Jurica Maltar , Domagoj Ševerdija
This study aims to introduce a methodology for assessing the agreement between AI and human ratings, specifically focusing on visual large language models (LLMs). This paper presents empirical findings on the alignment of ratings generated by GPT-4 Vision (GPT-4V) and Gemini Pro Vision with human subjective evaluations of environmental visuals. Using photographs of restaurant interior design and food, the study estimates the degree of agreement with human preferences. The intraclass correlation reveals that GPT-4V, unlike Gemini Pro Vision, achieves moderate agreement with participants’ general restaurant preferences. Similar results are observed for rating food photos. Additionally, there is good agreement in categorizing restaurants into low-cost, mid-range, and exclusive categories based on interior quality. Finally, differences in ratings were observed at the subsample level based on age, gender, and socioeconomic status across the human sample and LLMs. The results of repeated-measures ANOVAs indicate varying degrees of alignment between humans and LLMs across different sociodemographic characteristics. Overall, GPT-4V currently demonstrates limited ability to provide meaningful ratings of visual stimuli compared to human ratings, although it performs better in this task than Gemini Pro Vision.
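Agreement between raters of the kind described above is commonly quantified with an intraclass correlation coefficient. As an illustrative sketch (the specific ICC form and the toy data below are assumptions, not details reported in the abstract), a two-way random, absolute-agreement, single-rater ICC(2,1) can be computed from the ANOVA mean squares:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings is an (n_targets, k_raters) array, e.g. each row is one
    restaurant photo and each column one rater (human or model)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-target means
    col_means = ratings.mean(axis=0)   # per-rater means
    ssr = k * ((row_means - grand) ** 2).sum()   # between-target sum of squares
    ssc = n * ((col_means - grand) ** 2).sum()   # between-rater sum of squares
    sst = ((ratings - grand) ** 2).sum()
    sse = sst - ssr - ssc                        # residual sum of squares
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two raters who agree perfectly yield ICC = 1; a constant offset
# between raters lowers absolute agreement.
print(icc2_1([[1, 1], [2, 2], [3, 3]]))   # perfect agreement
print(icc2_1([[1, 2], [2, 3], [3, 4]]))   # rater 2 offset by +1
```

Because ICC(2,1) measures absolute agreement, a systematic offset between raters (e.g., a model that rates everything one point higher than humans) reduces the coefficient even when rank ordering is identical.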
Citations: 0
Human nature in a virtual world: The attribution of mind perception to avatars
Pub Date: 2025-10-27 DOI: 10.1016/j.chbah.2025.100222
Komala Mazerant , Zeph M.C. van Berlo , Alexander P. Schouten , Lotte M. Willemsen
This study investigates how human resemblance in avatars shapes mind perception. Virtual worlds are often praised for their potential to transform how people collaborate, learn, and play. Yet this promise relies on our willingness to treat others in a genuinely human way. Mind perception theory defines humanness along two dimensions: agency (intentional action) and experience (capacity to feel). While prior work has examined mind perception across entities, little is known about whether this extends to avatars, particularly when individuals embody forms that differ in kind and in their degree of anatomical humanlikeness. Using a mixed-methods approach, 213 participants created 417 avatars and rated them on perceived agency and experience. Afterward, the avatars were content-analyzed to determine entity type and visual resemblance to human anatomy, distinguishing between sensory (e.g., eyes, skin) and motoric (e.g., limbs) human-like features. The results demonstrate that human and robot avatars were perceived as equally agentic, surpassing other avatar entity types, while human, animal, and fantasy avatars shared similar levels of experience. Moreover, sensory human-like features were more strongly associated with both agency and experience than motoric features. This may be due to the dual function of sensory features: signaling not only the capacity for action (e.g., speaking) but also serving as expressive cues of emotion (e.g., facial expressions). This study contributes theoretically by integrating mind perception theory with avatar research, advancing our understanding of how digital representations shape social cognition. In practice, the findings underscore the need for intentional avatar design, particularly regarding default representations.
Citations: 0
Towards social superintelligence? AI infers diverse psychological traits from text without specific training, outperforming human judges
Pub Date: 2025-10-27 DOI: 10.1016/j.chbah.2025.100228
Ariel Rosenfelder, Maor Daniel Levitin, Michael Gilead
Large Language Models (LLMs) have recently demonstrated impressive capabilities in domains requiring higher-order cognition. This study investigates whether LLMs can also perform a core social-cognitive function: forming predictive models of individuals' psychological traits from minimal input (“trait inference”). Extending earlier work that has focused almost exclusively on Big Five personality factors, we asked GPT-4 to anticipate responses on a battery of 30 validated scales spanning personality, affect, values, and interpersonal style. A total of 1,011 participants wrote short self-descriptive texts and completed the questionnaires. An LLM was tasked with predicting participants' questionnaire responses solely from their self-descriptions, without any task-specific training. Human judges attempted the same task, providing a direct benchmark. The LLM's predictions correlated with participants' self-reports (r = 0.35; disattenuated r = 0.41)—accuracy comparable to that typically observed among real-world friends and substantially higher than that of human judges (r = 0.20; disattenuated r = 0.23). Across scales, the performance of the LLM and human judges was moderately correlated. These findings highlight LLMs' emerging capacity for sophisticated social inference, opening new avenues for computational psychology while raising important ethical concerns about large-scale psychological profiling.
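The "disattenuated" correlations reported above follow Spearman's classic correction for attenuation, which divides the observed correlation by the square root of the product of the two measures' reliabilities. A minimal sketch (the reliability values below are hypothetical, chosen only to illustrate how an observed r of 0.35 could disattenuate to roughly 0.41):

```python
import math

def disattenuate(r_obs, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the correlation
    between true scores from an observed correlation and the
    reliability coefficients of the two measures (each in (0, 1])."""
    return r_obs / math.sqrt(rel_x * rel_y)

# Hypothetical reliabilities of 0.85 and 0.86; with perfectly reliable
# measures (rel = 1.0) the observed correlation is returned unchanged.
print(round(disattenuate(0.35, 0.85, 0.86), 2))  # → 0.41
```

The correction assumes measurement error is random and independent across the two measures; with low reliabilities it can substantially inflate the estimate, which is why both observed and disattenuated values are typically reported, as in this abstract.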
Citations: 0
First interactions with generative chatbots shape local but not global sentiments about AI
Pub Date: 2025-10-27 DOI: 10.1016/j.chbah.2025.100223
Eva-Madeleine Schmidt , Clara Bersch , Nils Köbis , Jean-François Bonnefon , Iyad Rahwan , Mengchen Dong
As artificial intelligence (AI) chatbots become increasingly integrated into everyday life, it is important to understand how direct interaction with such systems shapes public sentiment toward AI more broadly. Leveraging a unique window in April 2023—when many individuals still had little or no experience with such systems—we combined experimental manipulation (chatbot exposure vs. no exposure) with natural variation in real-world AI usage. In a preregistered proof-of-concept experiment (N = 220), we investigated whether a short conversation with a GPT-3.5-based chatbot influenced participants' sentiments across multiple dimensions of AI perception. We assessed system-specific fear, user engagement, anthropomorphization, and potential spillover effects to other domains, including AI in medicine, recruitment and governance. Results show that direct interaction reduced fear and increased enjoyment of the chatbot itself, while fostering a more critical, realistic understanding of its abilities. However, spillover effects were limited: exposure led to reduced fear of AI in familiar, concrete domains (e.g., medical applications), but not in more abstract or speculative areas. Hope about AI's societal potential remained unaffected. Our findings highlight that sentiments toward AI are multidimensional and context dependent. Exposure to AI chatbots can shift immediate attitudes but does not necessarily generalize to broader AI perceptions, underscoring the need for more targeted engagement strategies in shaping public understanding and trust.
Citations: 0
Transforming the self: Individual-level changes arising from collaboration with generative AI
Pub Date: 2025-10-27 DOI: 10.1016/j.chbah.2025.100232
Siddharth Nandagopal
The rapid integration of Generative Artificial Intelligence (GenAI) into daily activities has prompted significant interest in understanding its impact on individuals. This paper addresses the critical gap in research concerning individual-level changes resulting from direct collaboration with GenAI systems. A novel theoretical framework is proposed, encompassing three primary constructs: Cognitive Dependency, Emotional Appraisal, and Behavioral Shift. These constructs are grounded in established theories such as Social Cognitive Theory, Cognitive Load Theory, and the Technology Acceptance Model, providing a comprehensive perspective on the mechanisms driving human transformation through GenAI collaboration. Empirical evidence is drawn from diverse case studies across education, professional environments, creative industries, social media, and the medical field, illustrating how increased cognitive dependency on GenAI leads to significant behavioral shifts, moderated by Emotional Appraisal. The analysis confirms the presence of feedback loops, where behavioral shifts further reinforce cognitive dependency, highlighting the sustained impact of GenAI on individuals. Key findings indicate that while GenAI enhances efficiency and creativity, it also poses risks such as skill degradation and reduced critical thinking. The implications extend to theoretical advancements in human-AI interaction research and practical applications for educators, organizations, and policymakers. Recommendations include integrating Artificial Intelligence literacy in education, developing balanced professional practices, and establishing ethical guidelines to mitigate biases and foster trust in GenAI systems. This paper underscores the necessity for ongoing research and ethical considerations to ensure that GenAI serves as a tool for human enhancement, promoting positive individual and societal outcomes.
Citations: 0
Sharing motives shape interface preferences for social sharing of emotion with conversational AI
Pub Date: 2025-10-27 DOI: 10.1016/j.chbah.2025.100229
Yuki Nozaki
Social sharing of emotion with conversational AI is a growing phenomenon. While social presence theory suggests richer, more human-like interfaces enhance social connection, how interface design influences users’ willingness to share emotions with conversational AI depending on their motives remains underexplored. Using an experimental vignette methodology, this study examines the influence of interface type (text vs. voice; without vs. with visual presence) and user motives (seeking cognitive support, social-affective support, or capitalization) on the willingness to share emotions with conversational AI, drawing a comparison with human partners. Based on data from 195 Japanese university students, the results revealed distinct user preferences. For the cognitive and social-affective support motives, users preferred a text-based interface, especially without visual presence (i.e., no avatar). Conversely, for the capitalization motive, an interface featuring visual presence was preferred. Moreover, perceived warmth was positively related to the willingness to share for social-affective support and capitalization motives, whereas perceived competence was positively related to it for cognitive and social-affective support motives. These patterns differed from those found in mediated communication with human partners. These findings refine social presence theory by suggesting that richer, more human-like interfaces are not always superior and underscore the importance of designing conversational AI tailored to user motives from a human-centered perspective.
Citations: 0
Whose agent are you? Relational norms shape expectation from algorithmic and human advisors in social decisions
Pub Date : 2025-10-10 DOI: 10.1016/j.chbah.2025.100218
Lior Gazit , Ofer Arazy , Uri Hertz
As technology companies develop AI agents designed to function as friends, therapists, and personal advisors, a fundamental question arises: can algorithms fulfill these intimate social roles? Relational Models Theory (RMT) suggests that relationships shape normative expectations in social decisions. Our research examines the perceived relationship between human/algorithmic advisors and advisee. Across two experiments (N = 492), participants reported their expectations from advisors that recommended splitting money between the advisee and an unknown other. Participants expected algorithmic advisors to exhibit higher consistency and higher sensitivity to others' payoffs, even when this resulted in smaller gains for the advisee, reflecting expectations of institutional fairness rather than personal favoritism. In contrast, participants anticipated that human advisors would prioritize their own welfare, consistent with personal relational norms. Seeking to validate that relational norms indeed drive expectations, in a follow-up experiment, we framed advisors as either "Institutional" or "Personal". Participants expected both human and algorithmic advisors to show higher sensitivity to others' payoffs and greater consistency when framed as Institutional, in line with RMT. However, regardless of framing, participants expected algorithmic advisors to exhibit higher sensitivity to others’ payoffs and greater consistency than the expectations from human advisors. Our findings extend Human-AI interaction literature by showing that people apply different normative standards to algorithmic versus human advisors. Results suggest that while relational framing can influence perceptions, attempts to position AI as replacements for humans must account for the persistent tendency to view algorithms through an institutional lens.
Citations: 0
The construction and validation of the AI mindset scale (AIMS)
Pub Date : 2025-10-08 DOI: 10.1016/j.chbah.2025.100220
Fabio Ibrahim , Nils-Torge Telle , Philipp Yorck Herzberg , Johann Christoph Münscher
As the dawn of artificial intelligence (AI) reshapes our future, a better understanding of the individual beliefs and attitudes toward AI becomes pivotal in harnessing its full potential. Therefore, this study aimed to develop and validate the AI Mindset Scale (AIMS), assessing the belief that AI usage enhances one's abilities and skills. The German sample (N = 921; 58 % female; Mage = 30.90; SDage = 8.71 years), was randomly split into two subsamples for EFA (n = 368) and CFA (n = 553). EFA resulted in a two-factor solution with four items per factor. CFA supported the model fit of the hierarchical model, including an AIMS total score and the subscales growth and non-deskilling (CFI = .982; TLI = .973; RMSEA = .072; SRMR = .043), showing good reliability (total score, α = .82; ω = .91; growth, α = .91; ω = .92; non-deskilling, α = .91; ω = .92). The nomological network analysis revealed that the AIMS captures distinct facets, with growth primarily predicted by AI acceptance and openness, and non-deskilling primarily by AI fear and locus of control.
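As an illustrative aside, the reliability coefficients reported above (e.g., α = .91 for the growth subscale) follow from the standard Cronbach's alpha formula applied to item-level scores. A minimal sketch, using synthetic Likert-style data for a hypothetical four-item subscale (the function name and data are illustrative, not taken from the study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic 1-5 Likert data: one shared latent factor plus item noise,
# mimicking four items of a single subscale (illustrative only).
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
items = np.clip(np.rint(3 + latent + 0.6 * rng.normal(size=(500, 4))), 1, 5)

print(round(cronbach_alpha(items), 2))
```

Because the four synthetic items load on one latent factor with modest noise, the computed alpha lands in the high range typical of a coherent subscale; with uncorrelated items it would fall toward zero.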
Citations: 0
Do listeners devalue AI-generated pop music? Exploring negative biases in listeners' responses to AI-labelled vs human-labelled pop music
Pub Date : 2025-10-08 DOI: 10.1016/j.chbah.2025.100217
Suqi Chia , Andree Hartanto , Eddie M.W. Tong
The advancement of Artificial Intelligence (AI) in creative domains has sparked discussions regarding how listeners perceive and engage with AI-generated music. This study investigated listeners' emotions, perceptions, and attitudes toward AI-generated versus human-composed pop music. The study hypothesized that listeners would rate music labelled as AI-generated lower in terms of liking, quality, positive emotions, sensorial, imaginal, and experiential responses, as well as need for re-experience and purchase intention, compared to music labelled as human-composed. Participants listened to eight AI-generated pop songs, four labelled as AI-generated and four labelled as human-composed. They then rated each song on various dimensions. To ensure a balanced design, label assignment on composer identity was fully randomized across both participants and songs. Contrary to the hypotheses, the participants rated pop songs labelled as AI-generated more highly in positive emotions, including happiness, interest, awe, and energy, compared to those labelled as human-composed. No significant differences were found between purported composer identity in the remaining dimensions. These results suggest that while the perception of AI authorship does influence listeners, the effects are primarily affective rather than sensorial, imaginal, experiential, or behavioural. Notably, considering that listeners rated pop songs labelled as AI-generated more positively in emotions, the findings imply that AI-generated music may be more readily accepted than previously assumed.
Citations: 0
Journal
Computers in Human Behavior: Artificial Humans
Book学术
Literature Exchange · Smart Journal Selection · Latest Publications · Exchange Guidelines · Contact us: info@booksci.cn
Book学术 provides a free academic resource search service for retrieving Chinese- and English-language literature by researchers in China and abroad, and is committed to delivering the most convenient, high-quality user experience.
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1