首页 > 最新文献

Computers in Human Behavior: Artificial Humans最新文献

英文 中文
Visual deception in online dating: How gender shapes AI-generated image detection 在线约会中的视觉欺骗:性别如何影响人工智能生成的图像检测
Pub Date : 2025-09-12 DOI: 10.1016/j.chbah.2025.100208
Lidor Ivan
The rise of AI-generated images is reshaping online interactions, particularly in dating contexts where visual authenticity plays a central role. While prior research has focused on textual deception, less is known about users’ ability to detect synthetic images. Grounded in Truth-Default Theory and the notion of visual realism, this study explores how users evaluate authenticity in images that challenge conventional expectations of photographic trust.
An online experiment was conducted with 831 American heterosexual online daters. Participants were shown both real and AI-generated profile photos, rated their perceived origin, and provided open-ended justifications. Overall, AI-generated images detection accuracy was low, falling below chance. Women outperformed men in identifying AI-generated images, but were also more likely to misclassify real ones—suggesting heightened, but sometimes misplaced, skepticism. Participants relied on three main strategies: identifying visual inconsistencies, signs of perfection, and technical flaws. These heuristics often failed to keep pace with improving AI realism. To conceptualize this process, the study introduces the “Learning Loop”—a dynamic cycle in which users develop detection strategies, AI systems adapt to those strategies, and users must recalibrate once again. As synthetic deception becomes more seamless, the findings underscore the instability of visual trust and the need to understand how users adapt (or fail to adapt) to rapidly evolving visual technologies.
人工智能生成图像的兴起正在重塑在线互动,特别是在视觉真实性起着核心作用的约会环境中。虽然之前的研究主要集中在文本欺骗上,但对用户检测合成图像的能力知之甚少。基于事实默认理论和视觉现实主义的概念,本研究探讨了用户如何评估挑战摄影信任传统期望的图像真实性。一项针对831名美国异性恋在线约会者的在线实验。研究人员向参与者展示了真实的和人工智能生成的个人资料照片,对照片的来源进行了评分,并提供了开放式的理由。总体而言,人工智能生成的图像检测精度较低,低于机会。女性在识别人工智能生成的图像方面表现得比男性好,但也更有可能对真实图像进行错误分类,这表明怀疑情绪加剧,但有时是错误的。参与者主要依靠三种策略:识别视觉上的不一致、完美的迹象和技术上的缺陷。这些启发式方法往往无法跟上AI现实主义的发展步伐。为了将这一过程概念化,该研究引入了“学习循环”——一个动态循环,在这个循环中,用户制定检测策略,人工智能系统适应这些策略,用户必须再次重新校准。随着合成欺骗变得更加无缝,研究结果强调了视觉信任的不稳定性,以及了解用户如何适应(或不适应)快速发展的视觉技术的必要性。
{"title":"Visual deception in online dating: How gender shapes AI-generated image detection","authors":"Lidor Ivan","doi":"10.1016/j.chbah.2025.100208","DOIUrl":"10.1016/j.chbah.2025.100208","url":null,"abstract":"<div><div>The rise of AI-generated images is reshaping online interactions, particularly in dating contexts where visual authenticity plays a central role. While prior research has focused on textual deception, less is known about users’ ability to detect synthetic images. Grounded in Truth-Default Theory and the notion of visual realism, this study explores how users evaluate authenticity in images that challenge conventional expectations of photographic trust.</div><div>An online experiment was conducted with 831 American heterosexual online daters. Participants were shown both real and AI-generated profile photos, rated their perceived origin, and provided open-ended justifications. Overall, AI-generated images detection accuracy was low, falling below chance. Women outperformed men in identifying AI-generated images, but were also more likely to misclassify real ones—suggesting heightened, but sometimes misplaced, skepticism. Participants relied on three main strategies: identifying <em>visual inconsistencies</em>, signs of <em>perfection</em>, and <em>technical flaws</em>. These heuristics often failed to keep pace with improving AI realism. To conceptualize this process, the study introduces the “<em>Learning Loop</em>”—a dynamic cycle in which users develop detection strategies, AI systems adapt to those strategies, and users must recalibrate once again. As synthetic deception becomes more seamless, the findings underscore the instability of visual trust and the need to understand how users adapt (or fail to adapt) to rapidly evolving visual technologies.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100208"},"PeriodicalIF":0.0,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Factors influencing users' intention to adopt ChatGPT based on the extended technology acceptance model 基于扩展技术接受模型的用户采用ChatGPT意愿的影响因素
Pub Date : 2025-09-11 DOI: 10.1016/j.chbah.2025.100204
Md Nazmus Sakib , Muhaiminul Islam , Mochammad Fahlevi , Md Siddikur Rahman , Mohammad Younus , Md Mizanur Rahman
ChatGPT, a transformative conversational agent, has exhibited significant impact across diverse domains, particularly in revolutionizing customer service within the e-commerce sector and aiding content development professionals. Despite its broad applications, a dearth of comprehensive studies exists on user attitudes and actions regarding ChatGPT adoption. This study addresses this gap by investigating the key factors influencing ChatGPT usage through the conceptual lens of the Technology Acceptance Model (TAM). Employing PLS-SEM modeling on data collected from 313 ChatGPT users globally, spanning various professions and consistent platform use for a minimum of six months, the research identifies perceived cost, perceived enjoyment, perceived usefulness, facilitating conditions, and social influence as pivotal factors determining ChatGPT usage. Notably, perceived ease of use, perceived trust, and perceived compatibility emerge as negligible determinants. However, trust and compatibility exert an indirect influence on usage via social influence, while ease of use indirectly affects ChatGPT usage through facilitating conditions. Thus, this study revolutionizes TAM research, identifying critical factors for ChatGPT adoption and providing actionable insights for organizations to strategically enhance AI utilization, transforming customer service and content development across industries.
ChatGPT是一种变革性的会话代理,在不同的领域表现出了重大的影响,特别是在电子商务领域的客户服务革命和帮助内容开发专业人员方面。尽管其广泛的应用,缺乏全面的研究存在的用户态度和行动关于ChatGPT的采用。本研究通过技术接受模型(TAM)的概念视角调查了影响ChatGPT使用的关键因素,从而解决了这一差距。该研究采用PLS-SEM模型对全球313名ChatGPT用户收集的数据进行建模,这些用户跨越不同的行业,并且至少使用了6个月的平台,研究确定了感知成本、感知享受、感知有用性、便利条件和社会影响是决定ChatGPT使用的关键因素。值得注意的是,感知易用性、感知信任和感知兼容性是可以忽略不计的决定因素。然而,信任和兼容性通过社会影响间接影响使用,而易用性通过便利条件间接影响ChatGPT的使用。因此,这项研究彻底改变了TAM研究,确定了ChatGPT采用的关键因素,并为组织提供了可操作的见解,以战略性地提高人工智能的利用,改变跨行业的客户服务和内容开发。
{"title":"Factors influencing users' intention to adopt ChatGPT based on the extended technology acceptance model","authors":"Md Nazmus Sakib ,&nbsp;Muhaiminul Islam ,&nbsp;Mochammad Fahlevi ,&nbsp;Md Siddikur Rahman ,&nbsp;Mohammad Younus ,&nbsp;Md Mizanur Rahman","doi":"10.1016/j.chbah.2025.100204","DOIUrl":"10.1016/j.chbah.2025.100204","url":null,"abstract":"<div><div>ChatGPT, a transformative conversational agent, has exhibited significant impact across diverse domains, particularly in revolutionizing customer service within the e-commerce sector and aiding content development professionals. Despite its broad applications, a dearth of comprehensive studies exists on user attitudes and actions regarding ChatGPT adoption. This study addresses this gap by investigating the key factors influencing ChatGPT usage through the conceptual lens of the Technology Acceptance Model (TAM). Employing PLS-SEM modeling on data collected from 313 ChatGPT users globally, spanning various professions and consistent platform use for a minimum of six months, the research identifies perceived cost, perceived enjoyment, perceived usefulness, facilitating conditions, and social influence as pivotal factors determining ChatGPT usage. Notably, perceived ease of use, perceived trust, and perceived compatibility emerge as negligible determinants. However, trust and compatibility exert an indirect influence on usage via social influence, while ease of use indirectly affects ChatGPT usage through facilitating conditions. Thus, this study revolutionizes TAM research, identifying critical factors for ChatGPT adoption and providing actionable insights for organizations to strategically enhance AI utilization, transforming customer service and content development across industries.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100204"},"PeriodicalIF":0.0,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Primatology as an integrative framework to study social robots 灵长类学作为研究社交机器人的综合框架
Pub Date : 2025-09-05 DOI: 10.1016/j.chbah.2025.100206
Miquel Llorente , Matthieu J. Guitton , Thomas Castelain
{"title":"Primatology as an integrative framework to study social robots","authors":"Miquel Llorente ,&nbsp;Matthieu J. Guitton ,&nbsp;Thomas Castelain","doi":"10.1016/j.chbah.2025.100206","DOIUrl":"10.1016/j.chbah.2025.100206","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100206"},"PeriodicalIF":0.0,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145096682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The threat of synthetic harmony: The effects of AI vs. human origin beliefs on listeners' cognitive, emotional, and physiological responses to music 合成和声的威胁:人工智能与人类起源信念对听者对音乐的认知、情感和生理反应的影响
Pub Date : 2025-09-05 DOI: 10.1016/j.chbah.2025.100205
Rohan L. Dunham, Gerben A. van Kleef, Eftychia Stamkou
People generally evaluate music less favourably if they believe it is created by artificial intelligence (AI) rather than humans. But the psychological mechanisms underlying this tendency remain unclear. Prior research has relied entirely on self-reports that are vulnerable to bias. This leaves open the question as to whether negative reactions are a reflection of motivated reasoning – a controlled, cognitive process in which people justify their scepticism about AI's creative capacity – or whether they stem from deeper, embodied feelings of threat to human creative uniqueness manifested physiologically. We address this question across two lab-in-field studies, measuring participants' self-reported and physiological responses to the same piece of music framed either as having AI or human origins. Study 1 (N = 50) revealed that individuals in the AI condition appreciated music less, reported less intense emotions, and experienced decreased parasympathetic nervous system activity as compared to those in the human condition. Study 2 (N = 372) showed that these effects were more pronounced among individuals who more strongly endorsed the belief that creativity is uniquely human, and that this could largely be explained by the perceived threat posed by AI. Together, these findings suggest that unfavourable responses to AI-generated music are not driven solely by controlled cognitive justifications but also by automatic, embodied threat reactions in response to creative AI. They suggest that strategies addressing perceived threats posed by AI may be key to fostering more harmonious human-AI collaboration and acceptance.
如果人们认为音乐是由人工智能(AI)而不是人类创造的,那么人们通常会对音乐的评价不那么乐观。但这种倾向背后的心理机制尚不清楚。之前的研究完全依赖于易受偏见影响的自我报告。这就留下了一个问题,即负面反应是否反映了动机推理——一个受控的认知过程,人们在这个过程中证明了他们对人工智能创造力的怀疑——或者它们是否源于更深层次的、具体的、对人类创造性独特性的威胁的感觉,这种威胁表现在生理上。我们通过两项实验室现场研究来解决这个问题,测量参与者对同一段音乐的自我报告和生理反应,这些音乐要么是人工智能的,要么是人类的。研究1 (N = 50)显示,与人类条件下的个体相比,人工智能条件下的个体欣赏音乐的次数较少,报告的强烈情绪较少,副交感神经系统活动减少。研究2 (N = 372)表明,这些影响在那些更强烈地相信创造力是人类独有的个体中更为明显,这在很大程度上可以用人工智能带来的感知威胁来解释。总之,这些发现表明,对人工智能生成的音乐的不良反应不仅仅是由受控的认知理由驱动的,也是由对创造性人工智能的自动、具体的威胁反应驱动的。他们认为,解决人工智能带来的威胁的策略可能是促进人类与人工智能更和谐合作和接受的关键。
{"title":"The threat of synthetic harmony: The effects of AI vs. human origin beliefs on listeners' cognitive, emotional, and physiological responses to music","authors":"Rohan L. Dunham,&nbsp;Gerben A. van Kleef,&nbsp;Eftychia Stamkou","doi":"10.1016/j.chbah.2025.100205","DOIUrl":"10.1016/j.chbah.2025.100205","url":null,"abstract":"<div><div>People generally evaluate music less favourably if they believe it is created by artificial intelligence (AI) rather than humans. But the psychological mechanisms underlying this tendency remain unclear. Prior research has relied entirely on self-reports that are vulnerable to bias. This leaves open the question as to whether negative reactions are a reflection of motivated reasoning – a controlled, cognitive process in which people justify their scepticism about AI's creative capacity – or whether they stem from deeper, embodied feelings of threat to human creative uniqueness manifested physiologically. We address this question across two lab-in-field studies, measuring participants' self-reported and physiological responses to the same piece of music framed either as having AI or human origins. Study 1 (<em>N</em> = 50) revealed that individuals in the AI condition appreciated music less, reported less intense emotions, and experienced decreased parasympathetic nervous system activity as compared to those in the human condition. Study 2 (<em>N</em> = 372) showed that these effects were more pronounced among individuals who more strongly endorsed the belief that creativity is uniquely human, and that this could largely be explained by the perceived threat posed by AI. Together, these findings suggest that unfavourable responses to AI-generated music are not driven solely by controlled cognitive justifications but also by automatic, embodied threat reactions in response to creative AI. They suggest that strategies addressing perceived threats posed by AI may be key to fostering more harmonious human-AI collaboration and acceptance.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100205"},"PeriodicalIF":0.0,"publicationDate":"2025-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145020444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The influence of persuasive techniques on large language models: A scenario-based study 说服技巧对大型语言模型的影响:基于场景的研究
Pub Date : 2025-09-02 DOI: 10.1016/j.chbah.2025.100197
Sonali Uttam Singh, Akbar Siami Namin
Large Language Models (LLMs), such as CHATGPT-4, have introduced comprehensive capabilities in generating human-like text. However, they also raise significant ethical concerns due to their potential to produce misleading or manipulative content. This paper investigates the intersection of LLM functionalities and Cialdini’s six principles of persuasion: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. We explore how these principles can be exploited to deceive LLMs, particularly in scenarios designed to manipulate these models into providing misleading or harmful outputs. Through a scenario-based approach, over 30 prompts were crafted to test the susceptibility of LLMs to various persuasion principles. The study analyzes the success or failure of these prompts using interaction analysis, identifying different stages of deception ranging from spontaneous deception to more advanced, socially complex forms.
Results indicate that LLMs are highly susceptible to manipulation, with 15 scenarios achieving advanced, socially aware deceptions (Stage 3), particularly through principles like liking and scarcity. Early stage manipulations (Stage 1) were also common, driven by reciprocity and authority, while intermediate efforts (Stage 2) highlighted in-stage tactics such as social proof. These findings underscore the urgent need for robust mitigation strategies, including resistance mechanisms at lower stages and training LLMs with counter persuasive strategies to prevent their exploitation. More than technical details, it raises important concerns about how AI might be used to mislead people. From online scams to the spread of misinformation, persuasive content generated by LLMs has the potential to impact both individual safety and public trust. These tools can shape how people think, what they believe, and even how they act often without users realizing it. With this work, we hope to open up a broader conversation across disciplines about these risks and encourage the development of practical, ethical safeguards that ensure language models remain helpful, transparent, and trustworthy. This research contributes to the broader discourse on AI ethics, highlighting the vulnerabilities of LLMs and advocating for stronger responsibility measures to prevent their misuse in producing deceptive content. The results describe the importance of developing secure, transparent AI technologies that maintain integrity in human–machine interactions.
大型语言模型(llm),如CHATGPT-4,已经引入了生成类人文本的综合功能。然而,由于它们可能产生误导或操纵内容,它们也引起了重大的伦理问题。本文研究了法学硕士功能与Cialdini的六个说服原则的交集:互惠、承诺和一致性、社会证明、权威、喜欢和稀缺性。我们将探讨如何利用这些原则来欺骗法学硕士,特别是在设计操纵这些模型以提供误导性或有害输出的场景中。通过基于场景的方法,设计了30多个提示,以测试法学硕士对各种说服原则的敏感性。该研究使用交互分析分析了这些提示的成功或失败,确定了欺骗的不同阶段,从自发的欺骗到更高级的、社会复杂的形式。结果表明,法学硕士极易受到操纵,有15个场景实现了高级的、社会意识的欺骗(阶段3),特别是通过喜欢和稀缺等原则。早期阶段的操纵(阶段1)也很常见,由互惠和权威驱动,而中期努力(阶段2)强调了阶段内的策略,如社会证明。这些研究结果强调,迫切需要制定强有力的缓解战略,包括在较低阶段建立抵制机制,并对法学硕士进行反说服战略培训,以防止其被利用。除了技术细节之外,它还引发了人们对人工智能可能被用来误导人们的重要担忧。从网络诈骗到错误信息的传播,法学硕士产生的有说服力的内容有可能影响个人安全和公众信任。这些工具可以塑造人们的思维方式,他们的信仰,甚至是他们在用户没有意识到的情况下的行为。通过这项工作,我们希望就这些风险展开更广泛的跨学科对话,并鼓励开发实用的、道德的保障措施,以确保语言模型保持有用、透明和值得信赖。这项研究有助于更广泛地讨论人工智能伦理,强调法学硕士的脆弱性,并倡导采取更强有力的责任措施,以防止法学硕士在生产欺骗性内容时被滥用。研究结果描述了开发安全、透明的人工智能技术在人机交互中保持完整性的重要性。
{"title":"The influence of persuasive techniques on large language models: A scenario-based study","authors":"Sonali Uttam Singh,&nbsp;Akbar Siami Namin","doi":"10.1016/j.chbah.2025.100197","DOIUrl":"10.1016/j.chbah.2025.100197","url":null,"abstract":"<div><div>Large Language Models (LLMs), such as CHATGPT-4, have introduced comprehensive capabilities in generating human-like text. However, they also raise significant ethical concerns due to their potential to produce misleading or manipulative content. This paper investigates the intersection of LLM functionalities and Cialdini’s six principles of persuasion: reciprocity, commitment and consistency, social proof, authority, liking, and scarcity. We explore how these principles can be exploited to deceive LLMs, particularly in scenarios designed to manipulate these models into providing misleading or harmful outputs. Through a scenario-based approach, over 30 prompts were crafted to test the susceptibility of LLMs to various persuasion principles. The study analyzes the success or failure of these prompts using interaction analysis, identifying different stages of deception ranging from spontaneous deception to more advanced, socially complex forms.</div><div>Results indicate that LLMs are highly susceptible to manipulation, with 15 scenarios achieving advanced, socially aware deceptions (Stage 3), particularly through principles like liking and scarcity. Early stage manipulations (Stage 1) were also common, driven by reciprocity and authority, while intermediate efforts (Stage 2) highlighted in-stage tactics such as social proof. These findings underscore the urgent need for robust mitigation strategies, including resistance mechanisms at lower stages and training LLMs with counter persuasive strategies to prevent their exploitation. More than technical details, it raises important concerns about how AI might be used to mislead people. From online scams to the spread of misinformation, persuasive content generated by LLMs has the potential to impact both individual safety and public trust. These tools can shape how people think, what they believe, and even how they act often without users realizing it. With this work, we hope to open up a broader conversation across disciplines about these risks and encourage the development of practical, ethical safeguards that ensure language models remain helpful, transparent, and trustworthy. This research contributes to the broader discourse on AI ethics, highlighting the vulnerabilities of LLMs and advocating for stronger responsibility measures to prevent their misuse in producing deceptive content. The results describe the importance of developing secure, transparent AI technologies that maintain integrity in human–machine interactions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100197"},"PeriodicalIF":0.0,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145010733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Collaborative human-AI trust (CHAI-T): A process framework for active management of trust in human-AI collaboration 协作的人类-人工智能信任(CHAI-T):在人类-人工智能协作中主动管理信任的过程框架
Pub Date : 2025-08-26 DOI: 10.1016/j.chbah.2025.100200
Melanie J. McGrath , Andreas Duenser , Justine Lacey , Cécile Paris
Collaborative human-AI (HAI) teaming combines the unique skills and capabilities of humans and machines in sustained teaming interactions leveraging the strengths of each. In tasks involving regular exposure to novelty and uncertainty, collaboration between adaptive, creative humans and powerful, precise artificial intelligence (AI) promises new solutions and efficiencies. User trust is essential to creating and maintaining these collaborative relationships. Established models of trust in traditional forms of AI typically recognize the contribution of three primary categories of trust antecedents: characteristics of the human user, characteristics of the technology, and environmental factors. The emergence of HAI teams, however, requires an understanding of human trust that accounts for the specificity of task contexts and goals, integrates processes of interaction, and captures how trust evolves in a teaming environment over time. Drawing on both the psychological and computer science literature, the process framework of trust in collaborative HAI teams (CHAI-T) presented in this paper adopts the tripartite structure of antecedents established by earlier models, while incorporating team processes and performance phases to capture the dynamism inherent to trust in teaming contexts. These features enable active management of trust in collaborative AI systems, with practical implications for the design and deployment of collaborative HAI teams.
协作式人类-人工智能(HAI)团队结合了人类和机器在持续的团队互动中的独特技能和能力,利用了各自的优势。在涉及经常接触新奇事物和不确定性的任务中,适应性强、富有创造力的人类与强大、精确的人工智能(AI)之间的合作有望带来新的解决方案和效率。用户信任对于创建和维护这些协作关系至关重要。在传统形式的人工智能中建立的信任模型通常承认信任前因的三个主要类别的贡献:人类用户的特征、技术的特征和环境因素。然而,HAI团队的出现需要理解人类信任,理解任务上下文和目标的特殊性,集成交互过程,并捕捉信任在团队环境中如何随时间演变。借鉴心理学和计算机科学文献,本文提出的协作性人工智能团队(CHAI-T)的信任过程框架采用了早期模型建立的前因的三方结构,同时结合了团队过程和绩效阶段,以捕捉团队环境中信任的内在动力。这些特性能够对协作AI系统中的信任进行主动管理,对协作AI团队的设计和部署具有实际意义。
{"title":"Collaborative human-AI trust (CHAI-T): A process framework for active management of trust in human-AI collaboration","authors":"Melanie J. McGrath ,&nbsp;Andreas Duenser ,&nbsp;Justine Lacey ,&nbsp;Cécile Paris","doi":"10.1016/j.chbah.2025.100200","DOIUrl":"10.1016/j.chbah.2025.100200","url":null,"abstract":"<div><div>Collaborative human-AI (HAI) teaming combines the unique skills and capabilities of humans and machines in sustained teaming interactions leveraging the strengths of each. In tasks involving regular exposure to novelty and uncertainty, collaboration between adaptive, creative humans and powerful, precise artificial intelligence (AI) promises new solutions and efficiencies. User trust is essential to creating and maintaining these collaborative relationships. Established models of trust in traditional forms of AI typically recognize the contribution of three primary categories of trust antecedents: characteristics of the human user, characteristics of the technology, and environmental factors. The emergence of HAI teams, however, requires an understanding of human trust that accounts for the specificity of task contexts and goals, integrates processes of interaction, and captures how trust evolves in a teaming environment over time. Drawing on both the psychological and computer science literature, the process framework of trust in collaborative HAI teams (CHAI-T) presented in this paper adopts the tripartite structure of antecedents established by earlier models, while incorporating team processes and performance phases to capture the dynamism inherent to trust in teaming contexts. These features enable active management of trust in collaborative AI systems, with practical implications for the design and deployment of collaborative HAI teams.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100200"},"PeriodicalIF":0.0,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Trusting the machine: Exploring participant perceptions of AI-driven summaries in virtual focus groups with and without human oversight 信任机器:在有和没有人类监督的情况下,探索参与者对虚拟焦点小组中人工智能驱动的摘要的看法
Pub Date : 2025-08-26 DOI: 10.1016/j.chbah.2025.100198
Ye Wang , Huan Chen , Xiaofan Wei , Cheng Chang , Xinyi Zuo
This study explores the use of AI-assisted summarization as part of a proposed AI moderation assistant for virtual focus group (VFG) settings, focusing on the calibration of trust through human oversight and transparency. To understand participant perspectives, this study employed a mixed-method approach: Study 1 conducted a focus group to gather initial data for the stimulus design of Study 2, and Study 2 was an online experiment that collected both quantitative and qualitative measures of perceptions of AI summarization across three groups—a control group, and two treatment groups (with vs. without human oversight). ANOVA and AI-assisted thematic analyses were performed. The findings indicate that AI summaries, with or without human oversight, were positively received by participants. However, no notable differences were observed in participants' satisfaction with the VFG application attributable to AI summaries. Qualitative findings reveal that participants appreciate AI's efficiency in summarization but express concerns about accuracy, authenticity, and the potential for AI to lack genuine human understanding. The findings contribute to the literature on trust in AI by demonstrating that trust can be achieved through transparency. By revealing the coexistence of AI appreciation and aversion, the study offers nuanced insights into trust calibration within socially and emotionally sensitive communication contexts. These results also inform the integration of AI summarization into qualitative research workflows.
本研究探讨了人工智能辅助摘要的使用,作为虚拟焦点小组(VFG)设置的人工智能调节助手的一部分,重点是通过人为监督和透明度来校准信任。为了了解参与者的观点,本研究采用了混合方法:研究1进行了一个焦点小组,为研究2的刺激设计收集初始数据,研究2是一个在线实验,收集了三组(对照组和两个治疗组)对人工智能总结的看法的定量和定性测量。进行方差分析和人工智能辅助的专题分析。研究结果表明,无论有没有人为监督,参与者都积极接受了人工智能摘要。然而,在参与者对人工智能摘要的VFG应用的满意度方面没有观察到显着差异。定性研究结果显示,参与者对人工智能的效率表示赞赏,但对准确性、真实性以及人工智能缺乏真正人类理解的可能性表示担忧。这些研究结果表明,信任可以通过透明度来实现,从而为人工智能领域的信任文献做出了贡献。通过揭示对人工智能的欣赏和厌恶并存,该研究为在社交和情感敏感的沟通环境中进行信任校准提供了细致入微的见解。这些结果也为将人工智能总结整合到定性研究工作流程中提供了信息。
{"title":"Trusting the machine: Exploring participant perceptions of AI-driven summaries in virtual focus groups with and without human oversight","authors":"Ye Wang ,&nbsp;Huan Chen ,&nbsp;Xiaofan Wei ,&nbsp;Cheng Chang ,&nbsp;Xinyi Zuo","doi":"10.1016/j.chbah.2025.100198","DOIUrl":"10.1016/j.chbah.2025.100198","url":null,"abstract":"<div><div>This study explores the use of AI-assisted summarization as part of a proposed AI moderation assistant for virtual focus group (VFG) settings, focusing on the calibration of trust through human oversight and transparency. To understand participant perspectives, this study employed a mixed-method approach: Study 1 conducted a focus group to gather initial data for the stimulus design of Study 2, and Study 2 was an online experiment that collected both quantitative and qualitative measures of perceptions of AI summarization across three groups—a control group, and two treatment groups (with vs. without human oversight). ANOVA and AI-assisted thematic analyses were performed. The findings indicate that AI summaries, with or without human oversight, were positively received by participants. However, no notable differences were observed in participants' satisfaction with the VFG application attributable to AI summaries. Qualitative findings reveal that participants appreciate AI's efficiency in summarization but express concerns about accuracy, authenticity, and the potential for AI to lack genuine human understanding. The findings contribute to the literature on trust in AI by demonstrating that <strong>trust can be achieved through transparency</strong>. By revealing the <strong>coexistence of AI appreciation and aversion</strong>, the study offers nuanced insights into <strong>trust calibration</strong> within <strong>socially and emotionally sensitive communication contexts</strong>. These results also inform the <strong>integration of AI summarization into qualitative research workflows</strong>.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100198"},"PeriodicalIF":0.0,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Effects of sensory reactivity and haptic interaction on children's anthropomorphism of a haptic robot 感觉反应性和触觉交互对儿童触觉机器人拟人化的影响
Pub Date : 2025-08-01 DOI: 10.1016/j.chbah.2025.100186
Hikaru Nozawa, Masaharu Kato
Social touch is vital for developing stable attachments and social skills, and haptic robots could provide children opportunities to develop those attachments and skills. However, haptic robots are not guaranteed suitable for every child, and individual differences exist in accepting these robots. In this study, we proposed that screening children's sensory reactivity can predict the suitable and challenging attributes for accepting these robots. Additionally, we investigated how sensory reactivity influences the tendency to anthropomorphize a haptic robot, as anthropomorphizing a robot is considered an indicator of accepting the robot. Sixty-seven preschool children aged 5–6 years participated. Results showed that the initial anthropomorphic tendency toward the robot was more likely to decrease with increasing atypicality in sensory reactivity, and haptic interaction with the robot tended to promote anthropomorphic tendency. A detailed analysis focusing on children's sensory insensitivity revealed polarized results: those actively seeking sensory information (i.e., sensory seeking) showed a lower anthropomorphic tendency toward the robot, whereas those who were passive (i.e., low registration) showed a higher anthropomorphic tendency. Importantly, haptic interaction with the robot mitigated the lower anthropomorphic tendency observed in sensory seekers. Finally, we found that the degree of anthropomorphizing the robot. positively influenced physiological arousal level. These results indicate that children with atypical sensory reactivity may accept robots through haptic interaction This extends previous research by demonstrating how individual sensory reactivity profiles modulate children's robot acceptance through physical interaction rather than visual observation alone. Future robots must be designed to interact in ways tailored to each child's sensory reactivity to develop stable attachment and social skills.
社交接触对于发展稳定的依恋关系和社交技能至关重要,而触觉机器人可以为儿童提供发展这些依恋关系和技能的机会。然而,触觉机器人并不能保证适合每个孩子,在接受这些机器人时存在个体差异。在这项研究中,我们提出筛选儿童的感官反应可以预测接受这些机器人的合适和具有挑战性的属性。此外,我们研究了感官反应如何影响拟人化触觉机器人的倾向,因为拟人化机器人被认为是接受机器人的一个指标。67名5-6岁学龄前儿童参与。结果表明,随着感觉反应非典型性的增加,机器人的初始拟人化倾向更有可能降低,与机器人的触觉交互倾向于促进拟人化倾向。通过对儿童感官不敏感的详细分析,发现了两极分化的结果:主动寻找感官信息(即感官寻求)的儿童对机器人的拟人化倾向较低,而被动(即低注册)的儿童对机器人的拟人化倾向较高。重要的是,与机器人的触觉互动减轻了在感官寻求者中观察到的较低的拟人化倾向。最后,我们发现机器人的拟人化程度。积极影响生理唤醒水平。这些结果表明,具有非典型感觉反应性的儿童可能通过触觉交互接受机器人。这扩展了先前的研究,展示了个体感觉反应性如何通过物理交互而不是视觉观察来调节儿童对机器人的接受。未来的机器人必须根据每个孩子的感官反应来设计互动方式,以培养稳定的依恋和社交技能。
{"title":"Effects of sensory reactivity and haptic interaction on children's anthropomorphism of a haptic robot","authors":"Hikaru Nozawa,&nbsp;Masaharu Kato","doi":"10.1016/j.chbah.2025.100186","DOIUrl":"10.1016/j.chbah.2025.100186","url":null,"abstract":"<div><div>Social touch is vital for developing stable attachments and social skills, and haptic robots could provide children opportunities to develop those attachments and skills. However, haptic robots are not guaranteed suitable for every child, and individual differences exist in accepting these robots. In this study, we proposed that screening children's sensory reactivity can predict the suitable and challenging attributes for accepting these robots. Additionally, we investigated how sensory reactivity influences the tendency to anthropomorphize a haptic robot, as anthropomorphizing a robot is considered an indicator of accepting the robot. Sixty-seven preschool children aged 5–6 years participated. Results showed that the initial anthropomorphic tendency toward the robot was more likely to decrease with increasing atypicality in sensory reactivity, and haptic interaction with the robot tended to promote anthropomorphic tendency. A detailed analysis focusing on children's sensory insensitivity revealed polarized results: those actively seeking sensory information (i.e., <em>sensory seeking</em>) showed a lower anthropomorphic tendency toward the robot, whereas those who were passive (i.e., <em>low registration</em>) showed a higher anthropomorphic tendency. Importantly, haptic interaction with the robot mitigated the lower anthropomorphic tendency observed in sensory seekers. Finally, we found that the degree of anthropomorphizing the robot. positively influenced physiological arousal level. These results indicate that children with atypical sensory reactivity may accept robots through haptic interaction This extends previous research by demonstrating how individual sensory reactivity profiles modulate children's robot acceptance through physical interaction rather than visual observation alone. Future robots must be designed to interact in ways tailored to each child's sensory reactivity to develop stable attachment and social skills.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100186"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144828727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Exploring dimensions of perceived anthropomorphism in conversational AI: Implications for human identity threat and dehumanization 探索对话人工智能中感知的拟人化维度:对人类身份威胁和非人性化的影响
Pub Date : 2025-08-01 DOI: 10.1016/j.chbah.2025.100192
Yejin Lee , Sang-Hwan Kim
This study aims to identify humanlike traits in conversational AI (CAI) that influence human identity threat and dehumanization, and to propose design guidelines that mitigate these effects. An online survey was conducted with 323 participants. Factor analysis revealed four key dimensions of perceived anthropomorphism in CAI: Self-likeness, Communication & Memory, Social Adaptability, and Agency. Structural equation modeling showed that Self-likeness heightened both perceived human identity threat and dehumanization, whereas Agency significantly moderated these effects while also directly mitigating dehumanization. Social Adaptability generally reduced perceived human identity threat but amplified it when combined with high Self-likeness. Furthermore, younger individuals were more likely to experience perceived human identity threat and dehumanization, underscoring the importance of considering user age. By elucidating the psychological structure underlying users’ perceptions of CAI anthropomorphism, this study deepens understanding of its psychosocial implications and provides practical guidance for the ethical design of CAI systems.
本研究旨在确定会话人工智能(CAI)中影响人类身份威胁和非人性化的类人特征,并提出减轻这些影响的设计指导方针。这项在线调查共有323人参与。因子分析揭示了CAI中感知拟人化的四个关键维度:自我相似、沟通与沟通;记忆、社会适应性和能动性。结构方程模型表明,自我相似性增加了感知到的人类身份威胁和非人性化,而代理显著地调节了这些影响,同时也直接减轻了非人性化。社会适应能力通常会降低感知到的人类身份威胁,但当与高度自我相似相结合时,会放大这种威胁。此外,年轻人更有可能体验到人类身份威胁和非人性化,这强调了考虑用户年龄的重要性。本研究通过阐明使用者对人工智能拟人化感知的心理结构,加深对其心理社会含义的理解,并为人工智能系统的伦理设计提供实践指导。
{"title":"Exploring dimensions of perceived anthropomorphism in conversational AI: Implications for human identity threat and dehumanization","authors":"Yejin Lee ,&nbsp;Sang-Hwan Kim","doi":"10.1016/j.chbah.2025.100192","DOIUrl":"10.1016/j.chbah.2025.100192","url":null,"abstract":"<div><div>This study aims to identify humanlike traits in conversational AI (CAI) that influence human identity threat and dehumanization, and to propose design guidelines that mitigate these effects. An online survey was conducted with 323 participants. Factor analysis revealed four key dimensions of perceived anthropomorphism in CAI: Self-likeness, Communication &amp; Memory, Social Adaptability, and Agency. Structural equation modeling showed that Self-likeness heightened both perceived human identity threat and dehumanization, whereas Agency significantly moderated these effects while also directly mitigating dehumanization. Social Adaptability generally reduced perceived human identity threat but amplified it when combined with high Self-likeness. Furthermore, younger individuals were more likely to experience perceived human identity threat and dehumanization, underscoring the importance of considering user age. By elucidating the psychological structure underlying users’ perceptions of CAI anthropomorphism, this study deepens understanding of its psychosocial implications and provides practical guidance for the ethical design of CAI systems.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100192"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144841791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Trusting emotional support from generative artificial intelligence: a conceptual review 从生成式人工智能中信任情感支持:一个概念回顾
Pub Date : 2025-08-01 DOI: 10.1016/j.chbah.2025.100195
Riccardo Volpato , Lisa DeBruine , Simone Stumpf
People are increasingly using generative artificial intelligence (AI) for emotional support, creating trust-based interactions with limited predictability and transparency. We address the fragmented nature of research on trust in AI through a multidisciplinary conceptual review, examining theoretical foundations for understanding trust in the emerging context of emotional support from generative AI. Through an in-depth literature search across human-computer interaction, computer-mediated communication, social psychology, mental health, economics, sociology, philosophy, and science and technology studies, we developed two principal contributions. First, we summarise relevant definitions of trust across disciplines. Second, based on our first contribution, we define trust in the context of emotional support provided by AI and present a categorisation of relevant concepts that recur across well-established research areas. Our work equips researchers with a map for navigating the literature and formulating hypotheses about AI-based mental health support, as well as important theoretical, methodological, and practical implications for advancing research in this area.
人们越来越多地使用生成式人工智能(AI)来获得情感支持,创建基于信任的互动,但可预测性和透明度有限。我们通过多学科的概念回顾来解决人工智能信任研究的碎片化本质,研究了在生成式人工智能的情感支持的新兴背景下理解信任的理论基础。通过对人机交互、计算机媒介传播、社会心理学、心理健康、经济学、社会学、哲学和科学技术研究等领域的深入文献检索,我们做出了两项主要贡献。首先,我们总结了不同学科对信任的相关定义。其次,基于我们的第一个贡献,我们在人工智能提供的情感支持的背景下定义了信任,并对在成熟的研究领域中反复出现的相关概念进行了分类。我们的工作为研究人员提供了一张地图,用于浏览文献和制定关于基于人工智能的心理健康支持的假设,以及对推进该领域研究的重要理论、方法和实践意义。
{"title":"Trusting emotional support from generative artificial intelligence: a conceptual review","authors":"Riccardo Volpato ,&nbsp;Lisa DeBruine ,&nbsp;Simone Stumpf","doi":"10.1016/j.chbah.2025.100195","DOIUrl":"10.1016/j.chbah.2025.100195","url":null,"abstract":"<div><div>People are increasingly using generative artificial intelligence (AI) for emotional support, creating trust-based interactions with limited predictability and transparency. We address the fragmented nature of research on trust in AI through a multidisciplinary conceptual review, examining theoretical foundations for understanding trust in the emerging context of emotional support from generative AI. Through an in-depth literature search across human-computer interaction, computer-mediated communication, social psychology, mental health, economics, sociology, philosophy, and science and technology studies, we developed two principal contributions. First, we summarise relevant definitions of trust across disciplines. Second, based on our first contribution, we define trust in the context of emotional support provided by AI and present a categorisation of relevant concepts that recur across well-established research areas. Our work equips researchers with a map for navigating the literature and formulating hypotheses about AI-based mental health support, as well as important theoretical, methodological, and practical implications for advancing research in this area.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100195"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144841789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
期刊
Computers in Human Behavior: Artificial Humans
全部 Acc. Chem. Res. ACS Applied Bio Materials ACS Appl. Electron. Mater. ACS Appl. Energy Mater. ACS Appl. Mater. Interfaces ACS Appl. Nano Mater. ACS Appl. Polym. Mater. ACS BIOMATER-SCI ENG ACS Catal. ACS Cent. Sci. ACS Chem. Biol. ACS Chemical Health & Safety ACS Chem. Neurosci. ACS Comb. Sci. ACS Earth Space Chem. ACS Energy Lett. ACS Infect. Dis. ACS Macro Lett. ACS Mater. Lett. ACS Med. Chem. Lett. ACS Nano ACS Omega ACS Photonics ACS Sens. ACS Sustainable Chem. Eng. ACS Synth. Biol. Anal. Chem. BIOCHEMISTRY-US Bioconjugate Chem. BIOMACROMOLECULES Chem. Res. Toxicol. Chem. Rev. Chem. Mater. CRYST GROWTH DES ENERG FUEL Environ. Sci. Technol. Environ. Sci. Technol. Lett. Eur. J. Inorg. Chem. IND ENG CHEM RES Inorg. Chem. J. Agric. Food. Chem. J. Chem. Eng. Data J. Chem. Educ. J. Chem. Inf. Model. J. Chem. Theory Comput. J. Med. Chem. J. Nat. Prod. J PROTEOME RES J. Am. Chem. Soc. LANGMUIR MACROMOLECULES Mol. Pharmaceutics Nano Lett. Org. Lett. ORG PROCESS RES DEV ORGANOMETALLICS J. Org. Chem. J. Phys. Chem. J. Phys. Chem. A J. Phys. Chem. B J. Phys. Chem. C J. Phys. Chem. Lett. Analyst Anal. Methods Biomater. Sci. Catal. Sci. Technol. Chem. Commun. Chem. Soc. Rev. CHEM EDUC RES PRACT CRYSTENGCOMM Dalton Trans. Energy Environ. Sci. ENVIRON SCI-NANO ENVIRON SCI-PROC IMP ENVIRON SCI-WAT RES Faraday Discuss. Food Funct. Green Chem. Inorg. Chem. Front. Integr. Biol. J. Anal. At. Spectrom. J. Mater. Chem. A J. Mater. Chem. B J. Mater. Chem. C Lab Chip Mater. Chem. Front. Mater. Horiz. MEDCHEMCOMM Metallomics Mol. Biosyst. Mol. Syst. Des. Eng. Nanoscale Nanoscale Horiz. Nat. Prod. Rep. New J. Chem. Org. Biomol. Chem. Org. Chem. Front. PHOTOCH PHOTOBIO SCI PCCP Polym. Chem.
×
引用
GB/T 7714-2015
复制
MLA
复制
APA
复制
导出至
BibTeX EndNote RefMan NoteFirst NoteExpress
×
0
微信
客服QQ
Book学术公众号 扫码关注我们
反馈
×
意见反馈
请填写您的意见或建议
请填写您的手机或邮箱
×
提示
您的信息不完整,为了账户安全,请先补充。
现在去补充
×
提示
您因"违规操作"
具体请查看互助需知
我知道了
×
提示
现在去查看 取消
×
提示
确定
Book学术官方微信
Book学术文献互助
Book学术文献互助群
群 号:604180095
Book学术
文献互助 智能选刊 最新文献 互助须知 联系我们:info@booksci.cn
Book学术提供免费学术资源搜索服务,方便国内外学者检索中英文文献。致力于提供最便捷和优质的服务体验。
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1