
Latest publications in Computers in Human Behavior: Artificial Humans

Gaze-informed signatures of trust and collaboration in human-autonomy teams
Pub Date: 2025-06-07 DOI: 10.1016/j.chbah.2025.100171
Anthony J. Ries, Stéphane Aroca-Ouellette, Alessandro Roncone, Ewart J. de Visser
In the evolving landscape of human-autonomy teaming (HAT), fostering effective collaboration and trust between human and autonomous agents is increasingly important. To explore this, we used the game Overcooked AI to create dynamic teaming scenarios featuring varying agent behaviors (clumsy, rigid, adaptive) and environmental complexities (low, medium, high). Our objectives were to assess the performance of adaptive AI agents designed with hierarchical reinforcement learning for better teamwork and to measure eye-tracking signals related to changes in trust and collaboration. The results indicate that the adaptive agent was more effective in managing teaming and creating an equitable task distribution across environments compared to the other agents. Working with the adaptive agent resulted in better coordination, reduced collisions, more balanced task contributions, and higher trust ratings. Reduced gaze allocation, across all agents, was associated with higher trust levels, while blink count, scanpath length, agent revisits, and trust were predictive of the human's contribution to the team. Notably, fixation revisits on the agent increased with environmental complexity and decreased with agent versatility, offering a unique metric for measuring teammate performance monitoring. This is one of the first studies to use gaze metrics such as revisits, gaze allocation, and scanpath length to predict not only trust but also human contribution to teaming behavior in a real-time task with cooperative agents. These findings underscore the importance of designing autonomous teammates that not only excel in task performance but also enhance teamwork by being more predictable and reducing the cognitive load on human team members. Additionally, this study highlights the potential of eye-tracking as an unobtrusive measure for evaluating and improving human-autonomy teams, suggesting eye gaze could be used by agents to dynamically adapt their behaviors.
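For readers who want a concrete sense of these gaze metrics, here is a minimal Python sketch that computes scanpath length, gaze allocation, and revisits to an agent area of interest (AOI) from a fixation log. The field names, AOI labels, and revisit definition are illustrative assumptions, not the authors' published pipeline.

```python
import numpy as np

# Hypothetical fixation log: one row per fixation, with screen coordinates,
# duration in ms, and the area of interest (AOI) the fixation landed on.
fixations = [
    {"x": 412, "y": 300, "dur_ms": 180, "aoi": "agent"},
    {"x": 630, "y": 210, "dur_ms": 240, "aoi": "task"},
    {"x": 405, "y": 310, "dur_ms": 200, "aoi": "agent"},
]

# Scanpath length: summed Euclidean distance between consecutive fixations.
xy = np.array([[f["x"], f["y"]] for f in fixations], dtype=float)
scanpath_len = float(np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1)))

# Gaze allocation: share of total fixation time spent on the agent AOI.
total_ms = sum(f["dur_ms"] for f in fixations)
agent_allocation = sum(f["dur_ms"] for f in fixations if f["aoi"] == "agent") / total_ms

# Revisits: entries into the agent AOI beyond the first visit.
aois = [f["aoi"] for f in fixations]
entries = (aois[0] == "agent") + sum(
    prev != "agent" and cur == "agent" for prev, cur in zip(aois, aois[1:])
)
revisits = max(entries - 1, 0)

print(scanpath_len, agent_allocation, revisits)
```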
Citations: 0
Increased morality through social communication or decision situation worsens the acceptance of robo-advisors
Pub Date: 2025-06-06 DOI: 10.1016/j.chbah.2025.100173
Clarissa Sabrina Arlinghaus, Carolin Straßmann, Annika Dix
This German study (N = 317) tests social communication (i.e., self-disclosure, content intimacy, relational continuity units, we-phrases) as a potential compensation strategy for algorithm aversion. To that end, we explore the acceptance of a robot as an advisor in non-moral, somewhat moral, and very moral decision situations and compare the influence of two verbal communication styles of the robot (functional vs. social).
Subjects followed the robot's recommendation similarly often for both communication styles (functional vs. social), but more often in the non-moral decision situation than in the moral decision situations. Subjects perceived the robot as more human and more moral during social communication than during functional communication, but as similarly trustworthy, likable, and intelligent for both communication styles. In moral decision situations, subjects ascribed more anthropomorphism and morality but less trust, likability, and intelligence to the robot compared to the non-moral decision situation.
Subjects perceived the robot as more moral in social communication. This unexpectedly led subjects to follow the robot's recommendation less often. No other mediation effects were found. From this, we conclude that verbal communication style alone has a rather small influence on the robot's acceptance as an advisor for moral decision-making and does not reduce algorithm aversion. Potential reasons for this (e.g., multimodality, no visual changes), as well as implications (e.g., avoidance of self-disclosure in human-robot interaction) and limitations (e.g., video interaction) of this study, are discussed.
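As a rough illustration of the mediation logic tested here (communication style → perceived morality → following the recommendation), a nonparametric bootstrap of the indirect effect might look like the sketch below. The simulated data, plain least-squares paths, and variable names are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 317
# Simulated stand-ins: 0 = functional, 1 = social communication style.
style = rng.integers(0, 2, n).astype(float)
morality = 0.5 * style + rng.normal(size=n)      # mediator: perceived morality
follow = -0.3 * morality + rng.normal(size=n)    # outcome: following the advice

def indirect_effect(idx):
    x, m, y = style[idx], morality[idx], follow[idx]
    a = np.polyfit(x, m, 1)[0]                   # path a: style -> morality
    X = np.column_stack([x, m, np.ones(len(idx))])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]  # path b: morality -> follow | style
    return a * b

boot = [indirect_effect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 -> mediation
```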
Citations: 0
Of love & lasers: Perceptions of narratives by AI versus human authors
Pub Date: 2025-06-06 DOI: 10.1016/j.chbah.2025.100168
Gavin Raffloer, Melanie C Green
Artificial Intelligence (AI) programs can produce narratives. However, readers' preconceptions about AI may influence their response to these narratives, and furthermore, AI-generated writing may differ from human writing. Genre may also be relevant for readers' attitudes regarding AI. This study tests the effects of actual AI versus human authorship, stated (labeled) authorship, and genre on perceptions of narratives and narrative engagement. Participants were randomly assigned within a 2 (actual author: human or AI) × 2 (stated author: human or AI) × 2 (genre: romance or science fiction) design, across two studies. In Study 1, actual AI narratives were perceived as more enjoyable, but human narratives were more appreciated. Furthermore, participants enjoyed actual AI-written sci-fi more than human-written sci-fi. Study 2 found that actual AI stories were rated more highly, particularly in appreciation, transportation, character identification, and future engagement. However, stated human authorship led to higher ratings for romance, but not for sci-fi. An interaction was observed such that for the sci-fi condition, stated human writing was perceived as more likely to be actually AI-written. Future research could expand upon these findings across more genres, as well as examining the determinants of preferences for stated human content.
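A 2 × 2 × 2 between-subjects design like this one is commonly analyzed with a three-way factorial ANOVA. The statsmodels sketch below runs on simulated ratings with invented column names, so it illustrates the shape of the analysis rather than the authors' actual code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
rows = [
    {"actual_author": a, "stated_author": s, "genre": g,
     "enjoyment": rng.normal(5.0)}        # simulated rating
    for a in ("AI", "human")
    for s in ("AI", "human")
    for g in ("romance", "scifi")
    for _ in range(20)                    # 20 simulated participants per cell
]
df = pd.DataFrame(rows)

# Full factorial: main effects plus all two- and three-way interactions.
model = ols("enjoyment ~ C(actual_author) * C(stated_author) * C(genre)", df).fit()
print(sm.stats.anova_lm(model, typ=2))
```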
Citations: 0
To Be competitive or not to be competitive: How performance goals shape human-AI and human-human collaboration
Pub Date: 2025-06-05 DOI: 10.1016/j.chbah.2025.100169
Spatola Nicolas
Due to generative AI, and particularly algorithms using large language models, people's use of algorithms as recommendation tools is increasing at an unprecedented pace. While these tools are used in both private and work contexts, less is known about how the motivational context surrounding algorithm use impacts reliance patterns. This research examined how competitive versus non-performance goals affect adherence to algorithmic versus human recommendations. In Experiment 1, participants completed Raven's Matrices with optional algorithm assistance. Framing the task as a competitive test increased reliance on the algorithm compared to a control condition. This effect was mediated by heightened perceived usefulness but not accuracy. Experiment 2 introduced human assistance alongside the algorithm assistance from Experiment 1. Performance goals (compared to control) increased reliance on the algorithm over peer assistance by selectively enhancing the perceived usefulness of the algorithm versus human assistance. These results demonstrate how setting goals may influence the preference to rely on algorithmic or human assistance, and particularly how performance-goal contexts catalyze a situation in which participants are more prone to rely on algorithms compared to peer recommendation. These results are discussed with regard to social goals and social cognition in competitive settings, with the aim of elucidating how motivational framing shapes human-AI collaborative dynamics and informing responsible system design.
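One plausible way to operationalize reliance in such a task is the share of items on which a participant consulted the optional assistance. The sketch below compares simulated reliance rates across goal framings with an independent-samples t-test; the data, group sizes, and effect sizes are invented stand-ins, not the authors' measures.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
# Per-participant reliance: proportion of 12 Raven's items on which the
# optional algorithm assistance was requested (simulated values).
competitive = rng.binomial(12, 0.55, size=60) / 12   # competitive-test framing
control = rng.binomial(12, 0.40, size=60) / 12       # control framing

t, p = ttest_ind(competitive, control)
print(f"mean reliance {competitive.mean():.2f} vs {control.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.4f}")
```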
Citations: 0
Artificial social influence via human-embodied AI agent interaction in immersive virtual reality (VR): Effects of similarity-matching during health conversations
Pub Date: 2025-06-03 DOI: 10.1016/j.chbah.2025.100172
Sue Lim, Ralf Schmälzle, Gary Bente
Interactions with artificial intelligence (AI) based agents can positively influence human behavior and judgment. However, studies to date focus on text-based conversational agents (CA) with limited embodiment, restricting our understanding of how social influence principles, such as physical similarity, apply to AI agents (i.e., artificial social influence). We address this gap by leveraging the latest advances in AI (large language models) and combining them with immersive virtual reality (VR). Specifically, we built VR-ECAs, or embodied conversational agents that can engage in turn-taking conversations with humans about health-related topics in a virtual environment. We then manipulated interpersonal similarity via gender matching and examined its effects on biobehavioral (i.e., gaze), social (e.g., agent likeability), and behavioral outcomes (i.e., healthy snack selection). We observed an interaction effect between agent and participant gender on biobehavioral outcomes: discussing health with opposite-gender agents tended to enhance gaze duration, with the effect stronger for male participants compared to their female counterparts. A similar directional pattern was observed for healthy snack selection. In addition, female participants liked the VR-ECAs more than their male counterparts did, regardless of the VR-ECAs' gender. Finally, participants experienced greater presence while conversing with embodied agents than chatting with text-only agents. Overall, our findings highlight embodiment as a crucial factor in AI's influence on human behavior, and our paradigm enables new experimental research at the intersection of social influence, human-AI communication, and immersive virtual reality (VR).
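The turn-taking core of an LLM-backed embodied agent can be quite small. The sketch below assumes the openai Python SDK, an invented persona prompt, and a hypothetical model choice, and leaves all speech and VR plumbing aside; the paper's actual stack and prompts are not reproduced here.

```python
from openai import OpenAI  # assumes the openai SDK; any chat-style LLM would do

client = OpenAI()
history = [{
    "role": "system",
    "content": ("You are a friendly embodied health coach in a virtual room. "
                "Keep replies to two sentences and end each turn with a question."),
}]

def agent_turn(user_utterance: str) -> str:
    """Append the user's (speech-to-text) utterance and return the agent's reply."""
    history.append({"role": "user", "content": user_utterance})
    reply = client.chat.completions.create(
        model="gpt-4o-mini", messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply  # downstream: text-to-speech plus avatar animation in the VR engine

print(agent_turn("I keep snacking late at night."))
```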
Citations: 0
Whose mind is it anyway? A systematic review and exploration on agency in cognitive augmentation
Pub Date: 2025-06-02 DOI: 10.1016/j.chbah.2025.100158
Steeven Villa, Lisa L. Barth, Francesco Chiossi, Robin Welsch, Thomas Kosch
Technologies for human augmentation aim to enhance sensory, motor, and cognitive abilities. Despite the growing interest in cognitive augmentation, the sense of agency, the feeling of control over one's actions and their outcomes, remains underexplored. We conducted a systematic literature review, screening 434 human–computer interaction articles, and identified 27 papers examining agency in cognitive augmentation. Our analysis revealed a lack of objective methods to measure the sense of agency. To address this research gap, we analyzed electroencephalography (EEG) data from a dataset of 27 participants performing a Columbia Card Task with and without perceived AI assistance. We observed changes in EEG alpha and low-beta power, demonstrating EEG as a measure of perceived cognitive agency. These findings demonstrate how EEG can quantify perceived agency, presenting a method to evaluate the impact of cognitive augmentation technologies on the sense of agency. This study not only provides a novel neurophysiological approach for assessing the impact of cognitive augmentation technologies on agency but also leads the way to designing interfaces that create user awareness regarding their sense of agency.
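For orientation, alpha and low-beta power are typically estimated by integrating a Welch power spectral density over the band limits. The sketch below uses random data, an assumed 256 Hz sampling rate, and conventional band edges (8–12 Hz alpha, 13–20 Hz low-beta); channel selection, epoching, artifact handling, and the paper's exact parameters are omitted.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

fs = 256                         # sampling rate in Hz (assumed)
eeg = np.random.randn(30 * fs)   # stand-in for one 30 s EEG channel

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)  # 2 s windows -> 0.5 Hz resolution

def bandpower(lo, hi):
    """Integrate the power spectral density over [lo, hi] Hz."""
    mask = (f >= lo) & (f <= hi)
    return trapezoid(psd[mask], f[mask])

alpha = bandpower(8, 12)
low_beta = bandpower(13, 20)
print(alpha, low_beta)
```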
Citations: 0
Educational robotics: Parental views of telepresence robots as social and academic support for children undergoing cancer treatment in Denmark
Pub Date: 2025-05-24 DOI: 10.1016/j.chbah.2025.100164
Emilie Løvenstein Vegeberg, Mette Weibel Willard, Mads Lund Andersen, Lykke Brogaard Bertel, Hanne Bækgaard Larsen
Disrupted school attendance can trigger social and academic setbacks in children with prolonged illness. This study explores parental perspectives on telepresence robots in facilitating the social and academic inclusion of their children undergoing cancer treatment. Parents (n = 15) of school-aged children with cancer (n = 15) in Denmark participated in semi-structured interviews between November 2022 and July 2023. An abductive approach was used, based on thematic analysis and agential realism theory. The analyses were structured around five themes: 1) multifaceted responsibilities and roles; 2) aid or burden; 3) robot personification; 4) social connectivity; and 5) educational support. From a parental perspective, telepresence robots can support children with cancer in attending school regularly, interacting with classmates, and sharing information about teaching content. Conversely, telepresence robots can impose an additional burden on parents of children with cancer, including responsibility for facilitating robot use while lacking surplus resources otherwise dedicated to the sick child. This study corroborates the potential of telepresence robots to provide social and academic support for children undergoing treatment, thereby alleviating the burden faced by their parents.
Citations: 0
Experimental evaluation of cognitive agents for collaboration in human-autonomy cyber defense teams
Pub Date: 2025-05-01 DOI: 10.1016/j.chbah.2025.100148
Yinuo Du, Baptiste Prébot, Tyler Malloy, Fei Fang, Cleotilde Gonzalez
Autonomous agents are becoming increasingly prevalent and capable of collaborating with humans on interdependent tasks as teammates. There is increasing recognition that human-like agents might be natural human collaborators. However, there has been limited work on designing agents according to the principles of human cognition or on empirically testing their teamwork effectiveness. In this study, we introduce the Team Defense Game (TDG), a novel experimental platform for investigating human-autonomy teaming in cyber defense scenarios. We design an agent that relies on episodic memory to determine its actions (Cognitive agent) and compare its effectiveness with two types of autonomous agents: one that relies on heuristic reasoning (Heuristic agent) and one that behaves randomly (Random agent). These agents are compared in a human-autonomy team (HAT) performing a cyber-protection task in the TDG. We systematically evaluate how autonomous teammates' abilities and competence impact the team's interaction and outcomes. The results revealed that teams with Cognitive agents are the most effective partners, followed by teams with Heuristic and Random agents. Evaluation of collaborative team process metrics suggests that the Cognitive agent is more adaptive to the individual play styles of human teammates, but it is also less consistent and less predictable than the Heuristic agent. Competent agents (Cognitive and Heuristic agents) require less human effort but might cause over-reliance. A post-experiment questionnaire showed that competent agents are rated as more trustworthy and cooperative than Random agents. We also found that human participants' subjective ratings correlate with their team performance, and that humans tend to take the credit or responsibility for the team. Our work advances HAT research by providing empirical evidence of how the design of different autonomous agents (cognitive, heuristic, and random) affects team performance and dynamics in cybersecurity contexts. We propose that autonomous agents for HATs should possess both competence and human-like cognition, while also ensuring predictable behavior or clear explanations to maintain human trust. Additionally, they should proactively seek human input to enhance teamwork effectiveness.
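A toy sketch in the spirit of the paper's episodic-memory (Cognitive) agent: store (state, action, outcome) instances and exploit the action with the best remembered mean outcome. This is a heavy simplification of instance-based approaches, with invented states and rewards rather than the authors' implementation.

```python
import random
from collections import defaultdict

random.seed(0)

class EpisodicAgent:
    """Chooses actions from remembered (state, action) -> outcome episodes."""

    def __init__(self, actions):
        self.actions = actions
        self.memory = defaultdict(list)  # (state, action) -> list of outcomes

    def act(self, state, epsilon=0.3):
        if random.random() < epsilon:            # explore occasionally
            return random.choice(self.actions)
        def value(action):                       # mean remembered outcome
            outcomes = self.memory[(state, action)]
            return sum(outcomes) / len(outcomes) if outcomes else 0.0
        return max(self.actions, key=value)      # exploit the best memory

    def remember(self, state, action, outcome):
        self.memory[(state, action)].append(outcome)

# Toy loop; in the paper, states and rewards come from the Team Defense Game.
agent = EpisodicAgent(actions=["patch", "monitor", "isolate"])
for _ in range(300):
    state = random.choice(["probe_seen", "quiet"])
    action = agent.act(state)
    reward = 1.0 if (state, action) == ("probe_seen", "isolate") else 0.0
    agent.remember(state, action, reward)

print(agent.act("probe_seen", epsilon=0.0))  # exploits the learned best action
```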
Citations: 0
Exploring the connecting potential of AI: Integrating human interpersonal listening and parasocial support into human-computer interactions
Pub Date: 2025-05-01 DOI: 10.1016/j.chbah.2025.100149
Netta Weinstein , Guy Itzchakov , Michael R. Maniaci
Conversational artificial intelligence (AI) can be harnessed to provide supportive parasocial interactions that rival or even exceed the social support of human interactions. High-quality listening in human conversations fosters social connection that heals interpersonal wounds and lessens loneliness. While AI can furnish advice, listening involves each speaker's perception of positive intention, a quality that AI can only simulate. Can such deep-seated support be provided by AI? This research examined two previously siloed areas of knowledge: the healing capabilities of human interpersonal listening, and the potential for AI to produce parasocial experiences of connection. Three experiments (N = 668) addressed this question by manipulating conversational AI listening to test effects on perceived listening, psychological needs, and state loneliness. We show that, when prompted, AI could provide high-quality listening, characterized by careful attention and a positive environment for self-expression. Moreover, AI's high-quality listening was perceived as better than participants' average human interaction (Studies 1–3). Receiving high-quality listening predicted greater relatedness (Study 3) and autonomy (Studies 2 and 3) need satisfaction after participants discussed rejection (Studies 2–3), loneliness (Study 3), and isolating attitudes (Study 3). Despite this, we did not observe the downstream lessening of loneliness typically observed in human interactions, even for those high in trait loneliness (Study 3). These findings clearly contrast with research on human interactions and hint at the potential power, but also the limits, of AI in replicating supportive human interactions.
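Listening-quality manipulations of this kind are typically implemented at the prompt level. The condition texts below are hypothetical paraphrases of the construct (attention, warmth, no advice), not the prompts the authors used.

```python
# Hypothetical condition prompts; the authors' actual instructions differ.
PROMPTS = {
    "high_quality_listening": (
        "Listen attentively. Reflect the speaker's words back to them, convey "
        "warmth and non-judgment, ask open questions, and do not give advice."
    ),
    "standard": (
        "Respond to the speaker and offer practical advice where relevant."
    ),
}

def system_message(condition: str) -> dict:
    """Build the system message that sets the agent's listening style."""
    return {"role": "system", "content": PROMPTS[condition]}

# Plug into any chat-style LLM call, e.g. messages=[system_message(...), ...].
print(system_message("high_quality_listening"))
```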
Citations: 0
Becoming dehumanized by a service robot: An empirical examination of what happens when non-humans perceive us as less than full humans
Pub Date: 2025-05-01 DOI: 10.1016/j.chbah.2025.100163
Magnus Söderlund
Service robots are expected to become increasingly common, and one fundamental task for them is to detect when a human user is present. Thus, they need to be able to correctly categorize a user as a “user”. So far, however, little is known about how users react to robots' understanding of what a user is in terms of a superordinate social category, namely “human”. Given that we humans are sensitive to how we are categorized by others, particularly when we are dehumanized in the categorization process, the present study assumed that this sensitivity may also materialize when the categorizer is a (humanlike) service robot. This assumption was examined in two between-subjects experiments in which a service robot's categorization of the user was manipulated (low vs. high dehumanization). The main finding was that high robotic dehumanization had a negative impact on the user's overall evaluation of the robot.
Citations: 0