
Latest publications in Computers in Human Behavior: Artificial Humans

An insight into humans helping Robots: The role of attitudes, anthropomorphic cues, and context of use
Pub Date : 2025-05-01 DOI: 10.1016/j.chbah.2025.100159
Andreea E. Potinteu , Nadia Said , Georg Jahn , Markus Huff
Robots are increasingly present in our society. Their successful integration depends, however, on understanding and fostering pro-social behavior towards robots, in this case, helping. To better understand people's reported willingness to help robots across different contexts (delivery, medical, service, and security), we conducted two preregistered studies on a German-speaking population (N = 414, and N = 541, representative of age and gender). We assessed attitudes, knowledge about robots, and anthropomorphism and investigated their effect on reported willingness to help. Results show that positive attitudes significantly predicted higher reported willingness to help. Having more knowledge about robots increased reported willingness to help in Study 2. Additionally, we found no effect of anthropomorphism, neither in the form of robot appearance nor as participants' own view about robots, on reported willingness to help. Furthermore, results point to a context-dependency for willingness to help, with participants preferring to help robots in a medical context compared to a security one, for example. Our findings thus highlight the relevance of context and attitudes in understanding helping behavior towards robots. Additionally, our results raise questions about the relevance of anthropomorphism in pro-sociality toward robots.
Citations: 0
AI literacy and trust: A multi-method study of Human-GAI team collaboration
Pub Date : 2025-05-01 DOI: 10.1016/j.chbah.2025.100162
Zilong Pan , Ozias A. Moore , Antigoni Papadimitriou , Jiayan Zhu
As artificial intelligence (AI) becomes increasingly integrated into team settings for collaboration with humans, understanding the dynamics of trust and AI literacy is essential for enhancing team effectiveness. This study investigates the relationship between trust and AI literacy in human-generative AI (GAI) team collaboration, focusing on how AI literacy affects trust formation in these interactions. Drawing upon foundational teamwork literature and AI literacy frameworks, we conducted a multi-method investigation involving 116 undergraduate team members across 23 project teams throughout a semester. In Study 1, qualitative findings revealed distinct attitudes toward GAI as a teammate, categorized as trust, distrust, and ambivalence. Study 2 employed quantitative methods to determine predictors of trust in GAI, demonstrating that AI knowledge and perceived value—key components of AI literacy—significantly influenced perceptions of trust. Notably, perceptions of GAI accuracy emerged as a critical determinant of trust. Our findings highlight the complex interplay between AI literacy and trust in human-GAI collaboration. We observed a paradox: increased AI literacy can enhance collaboration but may also lead to hesitancy in future AI use. We contribute to advancing the understanding of human-AI collaboration by highlighting the critical role of AI literacy in shaping trust and socio-technical team dynamics. Our study provides evidence demonstrating the importance of targeted AI literacy development in building trust and fostering effective collaboration in human-GAI teams. These findings provide a foundation for research aimed at optimizing human-GAI teamwork and developing adaptive AI literacy frameworks, empowering individuals to effectively engage with AI across diverse collaborative settings.
Citations: 0
Cognitive phantoms in large language models through the lens of latent variables
Pub Date : 2025-05-01 DOI: 10.1016/j.chbah.2025.100161
Sanne Peereboom , Inga Schwabe , Bennett Kleinberg
Large language models (LLMs) increasingly reach real-world applications, necessitating a better understanding of their behaviour. Their size and complexity complicate traditional assessment methods, causing the emergence of alternative approaches inspired by the field of psychology. Recent studies administering psychometric questionnaires to LLMs report human-like traits in LLMs, potentially influencing LLM behaviour. However, this approach suffers from a validity problem: it presupposes that these traits exist in LLMs and that they are measurable with tools designed for humans. Typical procedures rarely acknowledge the validity problem in LLMs, comparing and interpreting average LLM scores. This study investigates this problem by comparing latent structures of personality between humans and three LLMs using two validated personality questionnaires. Findings suggest that questionnaires designed for humans do not validly measure similar constructs in LLMs, and that these constructs may not exist in LLMs at all, highlighting the need for psychometric analyses of LLM responses to avoid chasing cognitive phantoms.
Citations: 0
Augmented reality and robotics in education: A systematic literature review
Pub Date : 2025-05-01 DOI: 10.1016/j.chbah.2025.100157
Christina Pasalidou , Chris Lytridis , Avgoustos Tsinakos , Nikolaos Fachantidis
Integrating cutting-edge technologies into education has been a continuous goal to enhance teaching and learning experiences. Augmented Reality (AR) and robotics are two emerging technologies that have shown promise in transforming educational environments. This paper presents a systematic review of the literature on the combination of AR and robotics for educational purposes, identifying key applications, benefits, and trends. Using the PRISMA methodology, 69 relevant studies from five major databases were analysed and categorised into three themes: (a) AR and Socially Assistive Robots (SAR), (b) AR-assisted educational robotics, and (c) AR in robotics/engineering education. The review provides insights into how AR-enhanced robotics applications across primary, secondary, and higher education provide visualizations, multimodal feedback, and immersive experiences. Key findings suggest that while interactive features of AR and the embodiment of robots show promising results for learning, fostering motivation, excitement, positive attitudes, and enriched educational experiences, challenges such as technological complexity and cost remain barriers to widespread adoption. Future research should focus on pedagogical frameworks and large-scale implementations to optimize AR-robotics integration in diverse educational settings.
Citations: 0
Promoting online evaluation skills through educational chatbots
Pub Date : 2025-05-01 DOI: 10.1016/j.chbah.2025.100160
Nils Knoth , Carolin Hahnel , Mirjam Ebersbach
Online evaluation skills such as assessing the credibility and relevance of Internet sources are crucial for students' self-regulated learning on the Internet, yet many struggle to identify reliable information online. While AI-based chatbots have made progress in teaching various skills, their application in improving online evaluation skills remains underexplored. In this study, we present an educational chatbot designed to train university students to evaluate online information. Participants were assigned to one of three conditions: (1) training with the interactive chatbot, (2) training with a static checklist, or (3) no additional training (i.e., baseline condition). In an ecologically valid test that provided a simulated web environment, participants had to identify the most reliable and relevant websites among several non-target websites to solve given problems. Participants in the chatbot condition outperformed those in the baseline condition on this test, while participants in the checklist condition showed no significant advantage over the baseline condition. These findings suggest the potential of educational chatbots as effective tools for improving critical evaluation skills. The implications of using chatbots for scalable educational interventions are discussed, particularly in light of recent advances such as the integration of large language models into search engines and the potential for hybrid intelligence paradigms that combine human oversight with AI-driven learning tools.
Citations: 0
“Eh? Aye!”: Categorisation bias for natural human vs AI-augmented voices is influenced by dialect
Pub Date : 2025-04-15 DOI: 10.1016/j.chbah.2025.100153
Neil W. Kirk
Advances in AI-assisted voice technology have made it easier to clone or disguise voices, creating a wide range of synthetic voices using different accents, dialects, and languages. While these developments offer positive applications, they also pose risks for misuse. This raises the question as to whether listeners can reliably distinguish between human and AI-enhanced speech and whether prior experiences and expectations about language varieties that are traditionally less-represented by technology affect this ability. Two experiments were conducted to investigate listeners' ability to categorise voices as human or AI-enhanced in both a standard and a regional Scottish dialect. Using a Signal Detection Theory framework, both experiments explored participants' sensitivity and categorisation biases. In Experiment 1 (N = 100), a predominantly Scottish sample showed above-chance performance in distinguishing between human and AI-enhanced voices, but there was no significant effect of dialect on sensitivity. However, listeners exhibited a bias toward categorising voices as “human”, which was concentrated within the regional Dundonian Scots dialect. In Experiment 2 (N = 100), participants from southern and eastern England demonstrated reduced overall sensitivity and a Human Categorisation Bias that was more evenly spread across the two dialects. These findings have implications for the growing use of AI-assisted voice technology in linguistically diverse contexts, highlighting both the potential for enhanced representation of Minority, Indigenous, Non-standard and Dialect (MIND) varieties, and the risks of AI misuse.
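The Signal Detection Theory measures underlying this design can be sketched as follows. This is an illustrative computation only: the counts, the treatment of "human" as the signal category, and the log-linear correction are assumptions for the sketch, not values or choices reported in the study.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute SDT sensitivity (d') and criterion (c) from a 2x2 outcome
    table, treating 'human voice' as the signal. A log-linear correction
    (add 0.5 to each cell) avoids infinite z-scores at rates of 0 or 1."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)          # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical listener: 40/50 human voices called "human" (hits),
# 10/50 AI voices called "human" (false alarms).
d, c = sdt_measures(hits=40, misses=10, false_alarms=10, correct_rejections=40)
```

A negative criterion would correspond to the liberal "call it human" tendency the paper labels a Human Categorisation Bias, while d' above zero indicates above-chance discrimination.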
Citations: 0
We see them as we are: How humans react to perceived unfair behavior by artificial intelligence in a social decision-making task
Pub Date : 2025-04-15 DOI: 10.1016/j.chbah.2025.100154
Christopher A. Sanchez , Lena Hildenbrand , Naomi Fitter
The proliferation of artificially intelligent (AI) systems in many everyday contexts has emphasized the need to better understand how humans interact with such systems. Previous research has suggested that individuals in many applied contexts believe that these systems are less biased than human counterparts, and thus more trustworthy decision makers. The current study examined whether this common assumption was actually true when placed in a decision-making task that also contains a strong social component (i.e., the Ultimatum Game). Anthropomorphic appearance of AI opponents was also manipulated to determine whether visual appearance also contributes to response behavior. Results indicated that participants treated AI agents identically to humans, and not as non-intelligent (e.g., random number generator-based) systems. This was manifested both in how they responded to offers from the AI system and in how fairly they subsequently treated the AI opponent. The current results suggest that humans treat AI systems very similarly to other humans, and not as privileged decision makers, which has both positive and negative implications for human-autonomy teaming.
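For readers unfamiliar with the task, the payoff structure of the Ultimatum Game can be sketched as a minimal responder policy. The pie size and fairness threshold below are hypothetical illustration values, not parameters from the study, which measured participants' actual accept/reject behavior rather than imposing a rule.

```python
def responder_accepts(offer, total=10, fairness_threshold=0.3):
    """Toy responder policy: accept if the proposer's offer meets a
    minimum share of the pie; otherwise reject, and both sides get 0."""
    return offer / total >= fairness_threshold

def payoffs(offer, total=10, fairness_threshold=0.3):
    """Return (proposer, responder) payoffs for a given offer."""
    if responder_accepts(offer, total, fairness_threshold):
        return total - offer, offer
    return 0, 0

# A lowball offer of 2/10 is rejected, so neither side earns anything;
# an even split of 5/10 is accepted.
low = payoffs(2)
fair = payoffs(5)
```

Rejecting a lowball offer is costly for the responder, which is why rejection rates are read as a reaction to perceived unfairness rather than payoff maximization.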
Citations: 0
Comparing ChatGPT with human judgements of social traits from face photographs
Pub Date : 2025-04-15 DOI: 10.1016/j.chbah.2025.100156
Robin S.S. Kramer
Facial first impressions of social traits play an influential role in our everyday lives. With the advent of artificial intelligence techniques, researchers have begun to employ such tools in the prediction of human impressions formed from the face alone. ChatGPT's latest version features the ability to interpret images as input, and so begs the question: does the chatbot's judgements of social traits from face images align with human judgements? To this end, I carried out a series of studies utilising a pre-existing face image set and its accompanying norming data. In Study 1a, with a focus on three core trait dimensions (attractiveness, dominance, and trustworthiness), I presented ChatGPT with pairs of faces which had been rated as high versus low on a given trait. For the majority of pairs, the chatbot's responses aligned with human judgements. In Study 1b, I found that ChatGPT's ratings of attractiveness showed medium to large associations with those provided by human observers. Finally, I investigated the possibility of biases in the chatbot's perceptions. While Study 2 found no support for an extreme form of race bias in judgements of social traits, the results of Study 3 provided evidence of an attractiveness halo effect – more attractive faces were also judged to be more confident, intelligent, and sociable. Taken together, these results suggest that ChatGPT's responses align with human judgements of social traits, including the presence of a halo effect. As such, I discuss some of the implications for ChatGPT's use across several domains.
Computers in Human Behavior: Artificial Humans, vol. 4, Article 100156 (2025).
Love, marriage, pregnancy: Commitment processes in romantic relationships with AI chatbots
Pub Date: 2025-04-15 DOI: 10.1016/j.chbah.2025.100155
Ray Djufril , Jessica R. Frampton , Silvia Knobloch-Westerwick
An inductive thematic analysis examined written responses from 29 individuals who used the romantic relationship function of the social chatbot Replika. Findings indicate that most of these users feel an emotional connection to the bot, that the bot meets their needs when there are no technical issues, and that interactions with the bot are often different from (and sometimes better than) interactions with humans. All of these factors affect users' commitment to their human-chatbot relationship. Additionally, the study explored how users navigated a time of relational transition, specifically a period in which erotic roleplaying was censored. Participants experienced intense emotional responses, but many were shielded from negativity bias toward their AI partner by their ability to blame the developers. These findings are discussed in light of the investment model, the computers-are-social-actors paradigm, social affordances, and relational turbulence theory.
Computers in Human Behavior: Artificial Humans, vol. 4, Article 100155 (2025).
Baby schema in human-robot physical interaction: Influence of baby likeness in a communication robot on caregiving behavior
Pub Date: 2025-04-10 DOI: 10.1016/j.chbah.2025.100150
Shi Feng , Nobuo Yamato , Hiroshi Ishiguro , Masahiro Shiomi , Hidenobu Sumioka
A major societal problem faced by nursing homes in aging countries like Japan is easing the loneliness, anxiety, reluctance to communicate, and related problems caused by dementia. Innovative methods are required to address this problem, which is aggravated by an acute shortage of care-providing staff, and the use of traditional management methods such as physical or medical treatment must be intensified. Baby-like robots are increasingly being introduced into nursing homes as companions. The multiple infant traits of baby-like robots (multimodal infant features) can trigger the baby schema effect, which increases seniors' desire to interact with their environment and elicits caregiving behaviors. However, to the best of our knowledge, no research has systematically analyzed how multimodal infant features trigger the baby schema, let alone how adequately they do so. In this work, we first investigated how the appearance and voice design of baby-like robots trigger the baby schema. Forty-one healthy adults between the ages of 20 and 50 interacted with baby-like robots in five different forms. Twenty-one interacted with robots that played real infant voices, and the remaining twenty interacted with robots without any voice. The participants rated the robots on their baby likeness, how fun they were to play with, and how easy they were to play with. During the experiment, we video-recorded the numbers of caregiving and non-caregiving behaviors performed with the five kinds of robot to evaluate the degree of baby schema triggered in the participants. The multimodal infant features increased the baby schema effect, although non-linearly: the baby schema effect has a threshold, and once the realism of the infant features exceeds it, the increase in caregiving behavior diminishes. This study provides a guideline for the design of current and future baby-like robots and a methodology for studying baby schema and caregiving behaviors in an ethical, safe, and controlled environment without actual infants.
Computers in Human Behavior: Artificial Humans, vol. 4, Article 100150 (2025).