
International Journal of Social Robotics: Latest Publications

Time-to-Collision Based Social Force Model for Intelligent Agents on Shared Public Spaces
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-09-06 | DOI: 10.1007/s12369-024-01171-9
Alireza Jafari, Yen-Chen Liu

Intelligent transportation modes such as autonomous robots and electric scooters with ride assistance are gaining popularity, but their integration into public spaces poses challenges to pedestrian safety and comfort. Nevertheless, the attempts to address the problem are scattered and sometimes contradictory. Models describing the behavior of heterogeneous crowds are necessary for solution evaluation before implementation. Moreover, autonomous agents benefit from these models, aiming to operate more efficiently while prioritizing pedestrian safety. The novelty of the proposed model is integrating time-to-collision, an indicator of road users’ subjective safety, into the social force model, the primary tool for pedestrian movement predictions. Moreover, the model considers the cumulative effects of anticipating other agents’ trajectories and the incurred time-to-collisions within a specific time horizon. We conduct controlled experiments using electric scooters to calibrate the model, discuss the distribution of parameter sets, and present pooled parameter population properties. Furthermore, we validate the model’s performance for electric scooters in complex scenarios and compare it with previous models using behavior naturalness metrics. Lastly, we compare the model’s accuracy and computation resource intensity to existing models. The model is computationally cheap and better equipped to estimate nearby people’s comfort level, making it a better candidate for intelligent agents’ path-planning algorithms in shared spaces.
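The abstract does not give the model's equations, but the core idea of weighting repulsive interactions by anticipated time-to-collision within a horizon can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the authors' formulation: the Helbing-style goal-attraction term, the exponential TTC weighting, and all parameter values (`v_desired`, `tau`, `k`, `t0`, `horizon`) are made up.

```python
import numpy as np

def time_to_collision(p_i, v_i, p_j, v_j, radius=0.5):
    """First time at which two circular agents would touch, or inf if they never do."""
    dp = p_j - p_i              # relative position
    dv = v_j - v_i              # relative velocity
    a = dv @ dv
    b = dp @ dv
    c = dp @ dp - (2 * radius) ** 2
    if a < 1e-9:                # no relative motion
        return np.inf
    disc = b * b - a * c
    if disc < 0:
        return np.inf           # trajectories never bring the agents into contact
    ttc = (-b - np.sqrt(disc)) / a
    return ttc if ttc > 0 else np.inf

def social_force(p_i, v_i, goal, neighbors, v_desired=1.4, tau=0.5,
                 k=2.0, t0=3.0, horizon=5.0):
    """Illustrative force: goal attraction plus TTC-weighted repulsion from neighbors."""
    # Attraction toward the goal at the desired speed (standard social force term).
    e_goal = (goal - p_i) / (np.linalg.norm(goal - p_i) + 1e-9)
    force = (v_desired * e_goal - v_i) / tau
    # Repulsion that grows as the anticipated time-to-collision shrinks.
    for p_j, v_j in neighbors:
        ttc = time_to_collision(p_i, v_i, p_j, v_j)
        if ttc < horizon:
            direction = (p_i - p_j) / (np.linalg.norm(p_i - p_j) + 1e-9)
            force += k * np.exp(-ttc / t0) * direction
    return force

# Example: an agent heading right while another approaches head-on.
f = social_force(np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([10.0, 0.0]),
                 [(np.array([4.0, 0.1]), np.array([-1.0, 0.0]))])
print(f)
```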

Citations: 0
Investigation of Joint Action in Go/No-Go Tasks: Development of a Human-Like Eye Robot and Verification of Action Space
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-08-27 | DOI: 10.1007/s12369-024-01168-4
Kotaro Hayashi

Human–robot collaboration (HRC) is a natural progression of technological development and can improve job performance, address labor shortages, and reduce labor costs. However, it is still uncertain whether joint action, similar to that occurring between humans, can be replicated between humans and robots. Many robotics researchers have focused on joint action, and it has been demonstrated that gaze cueing plays a significant role in this context. Previous studies on joint action have used humanoids; however, the robots utilized in human–robot collaboration research lack the human-like eyes needed for verification. Therefore, this study focuses on the development of an eye robot with gaze-cueing behaviors that can be easily integrated into existing robotic systems. As another theme of this study, we propose the use of fixation duration as a new metric, distinct from the commonly used response time, for the quantitative evaluation of joint action research. These are verified through a Go/No-go task under six conditions: three behavioral conditions (i.e., joint action, joint attention-only, and alone), each crossed with two partner conditions (robot or human partner). While developing a human-like eye robot, this study demonstrates the potential of a robot to be a better joint action partner than an uncertain human, with participants exhibiting the best reaction times when partnered with a robot. The shared action space of the participants was investigated, where a transference of the action space indicates the expression of joint action. The fixation duration indicates that the proposed robot causes participants to move their action space to include that of the robot. These results suggest that the proposed collaborative robot can initiate a joint action between a robot and a human, and can perform as a more effective partner in joint actions compared to an unfamiliar human. This study showcases the capacity of fixation duration as a quantitative assessment metric for joint action.
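The abstract proposes fixation duration as a quantitative metric but does not describe the gaze-processing pipeline. The sketch below is only a hypothetical illustration of how fixation duration on an area of interest (e.g., around the robot's hand) might be computed from timestamped gaze samples; the `GazeSample` structure, the AOI-run definition, and the `min_duration` threshold are all assumptions, and real eye-tracking pipelines typically use dispersion- or velocity-based fixation detectors instead.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GazeSample:
    t: float   # timestamp in seconds
    x: float   # gaze x coordinate (e.g., normalised scene-camera frame)
    y: float   # gaze y coordinate

def fixation_duration_on_aoi(samples: List[GazeSample],
                             aoi: Tuple[float, float, float, float],
                             min_duration: float = 0.1) -> float:
    """Total time (seconds) of gaze runs inside a rectangular area of interest.

    A 'run' is a maximal stretch of consecutive samples whose gaze point lies
    inside the AOI; runs shorter than min_duration are discarded.
    """
    x0, y0, x1, y1 = aoi
    total, run_start, last_t = 0.0, None, None
    for s in samples:
        inside = x0 <= s.x <= x1 and y0 <= s.y <= y1
        if inside and run_start is None:
            run_start = s.t                      # a new run begins
        elif not inside and run_start is not None:
            if last_t - run_start >= min_duration:
                total += last_t - run_start      # close the run at the last inside sample
            run_start = None
        last_t = s.t
    if run_start is not None and last_t - run_start >= min_duration:
        total += last_t - run_start              # close a run that reaches the end
    return total

# One second of gaze held at the centre of a hypothetical AOI around the robot's hand.
samples = [GazeSample(t=i * 0.02, x=0.5, y=0.5) for i in range(50)]
print(fixation_duration_on_aoi(samples, aoi=(0.4, 0.4, 0.6, 0.6)))  # ~0.98
```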

Citations: 0
How Non-experts Kinesthetically Teach a Robot over Multiple Sessions: Diversity in Teaching Styles and Effects on Performance
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-08-23 | DOI: 10.1007/s12369-024-01164-8
Pourya Aliasghari, Moojan Ghafurian, Chrystopher L. Nehaniv, Kerstin Dautenhahn

In real-world applications, robots should adapt to users and environments; however, users may not know how to teach new tasks to a robot. We studied whether participants without any experience in teaching a robot would become more proficient robot teachers through repeated kinesthetic human–robot teaching interactions. An experiment was conducted with twenty-eight participants who were asked to kinesthetically teach a humanoid robot different cleaning tasks in five repeated sessions, each session including four tasks. Throughout the sessions, participants’ gaze patterns, methods of manipulating the robot’s arm, their perceived workload, and some physical properties of the demonstrated actions were measured. Our data analyses revealed a diversity in non-experts’ human–robot teaching styles in repeated interactions. Three clusters of human teachers were identified based on participants’ performance in providing the demonstrations. The majority of participants significantly improved their success and speed of kinesthetic demonstrations by performing multiple rounds of teaching the robot. Overall, participants gazed less often at the robot’s hand and perceived less effort over repeated sessions. Our findings highlight how non-experts adapt to robot teaching by being exposed repeatedly to human–robot teaching tasks, without any formal training or external intervention, and we identify the characteristics of successful and improving human teachers.
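The abstract reports three clusters of teachers identified from demonstration performance but does not state which features or which clustering algorithm were used. As a loose illustration only, the snippet below clusters hypothetical per-participant features with k-means; the feature choice, the made-up values, and the use of k-means are all assumptions, not the authors' method.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-participant features: mean demonstration success rate and
# mean demonstration duration (s) across sessions. Values are illustrative only.
features = np.array([
    [0.90, 18.0],
    [0.85, 20.5],
    [0.60, 35.0],
    [0.55, 33.0],
    [0.30, 50.0],
    [0.35, 48.0],
])

# Standardise so both features contribute comparably, then split into three groups.
z = (features - features.mean(axis=0)) / features.std(axis=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)
print(labels)
```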

Citations: 0
The Child Factor in Child–Robot Interaction: Discovering the Impact of Developmental Stage and Individual Characteristics
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-08-14 | DOI: 10.1007/s12369-024-01121-5
Irina Rudenko, Andrey Rudenko, Achim J. Lilienthal, Kai O. Arras, Barbara Bruno

Social robots, owing to their embodied physical presence in human spaces and their ability to directly interact with users and their environment, have great potential to support children in various activities in education, healthcare and daily life. Child–Robot Interaction (CRI), like any domain involving children, inevitably faces the major challenge of designing generalized strategies to work with unique, turbulent and very diverse individuals. Addressing this challenging endeavor requires combining the robot-centered perspective, i.e. what robots technically can do and are best positioned to do, with the child-centered perspective, i.e. what children may gain from the robot and how the robot should act to best support them in reaching the goals of the interaction. This article aims to help researchers bridge the two perspectives and proposes to address the development of CRI scenarios with insights from child psychology and child development theories. To that end, we review the outcomes of CRI studies, outline common trends and challenges, and identify two key factors from child psychology that impact child–robot interactions, especially from a long-term perspective: developmental stage and individual characteristics. For both, we discuss prospective experiment designs that support building naturally engaging and sustainable interactions.

Citations: 0
Is the Robot Spying on me? A Study on Perceived Privacy in Telepresence Scenarios in a Care Setting with Mobile and Humanoid Robots
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-08-13 | DOI: 10.1007/s12369-024-01153-x
Celia Nieto Agraz, Pascal Hinrichs, Marco Eichelberg, Andreas Hein

The number of robots in use worldwide is increasing, and they are starting to be used in new areas where the use of robotics was impossible in the past, such as nursing care. This brings about new challenges that need to be addressed, one of which is privacy preservation. Privacy in robotics is still a very new field that has not yet been studied in depth, even though some studies show that it is a crucial factor. In this article, we investigate how users feel about their privacy when interacting in a telepresence scenario with three different technical means: a laptop computer with a built-in camera, the mobile robot Temi, and the humanoid robot Ameca. Behaviors drawn from human interaction were implemented on the humanoid robot; these are not aimed directly at deactivating the sensors but symbolize this deactivation. We conducted a user study with 21 participants. We did not find any statistically significant difference between the three technical means, which shows that the robotic solutions are also popular and that people feel comfortable around them. In addition, we found that the best way for a humanoid robot to indicate privacy to participants is to perform actions in which it closes its eyes and conveys a sense of deactivation. Lastly, the results show that even though the acceptance of a humanoid robot is quite good, further work is needed to increase the feeling of control and thereby the user's trust in it.

Citations: 0
How an Android Expresses “Now Loading…”: Examining the Properties of Thinking Faces
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-08-05 | DOI: 10.1007/s12369-024-01163-9
Shushi Namba, Wataru Sato, Saori Namba, Alexander Diel, Carlos Ishi, Takashi Minato

The “thinking face” is a facial signal used to convey being in thought. For androids, the thinking face may be important for achieving natural human–robot interaction. However, the facial pattern necessary for portraying the thinking face remains unclear and has not yet been investigated in androids. The current study aims to (a) identify the facial patterns people show when engaged in answering complex questions (i.e., the thinking face) and (b) clarify whether implementing the observed thinking faces in an android can facilitate natural human–robot interaction. In Study 1, we analyze the facial movements of 40 participants after they are prompted with difficult questions and identify five facial patterns that correspond to thinking faces. In Study 2, we focus on the pattern of furrowed brows and narrowed eyes among the observed thinking-face patterns and implement this pattern in an android. The results show that thinking faces enhance the perception of being in thought, genuineness, human-likeness, and appropriateness in androids while decreasing eeriness. The free-description data also reveal that negative emotions are attributed to the thinking face. In Study 3, we compared thinking versus neutral faces in a question–answer situation. The results showed that the android's thinking face facilitated the perception of being in thought and human-likeness. These findings suggest that the thinking face of androids can facilitate natural human–robot interaction.

Citations: 0
Human–Robot Companionship: Current Trends and Future Agenda
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-07-24 | DOI: 10.1007/s12369-024-01160-y
Eshtiak Ahmed, Oğuz ‘Oz’ Buruk, Juho Hamari

The field of robotics has grown exponentially over the years, especially its social aspect, which has enabled robots to interact with humans meaningfully. Robots are now used in many domains, such as manufacturing, healthcare, education, entertainment, and rehabilitation. Along with their widespread use in many real-life environments, robots have been employed as companions to humans. With the increasing amount of research on human–robot companionship (HRC), it is important to understand how this domain is developing, in which direction, and what the future might hold. There is also a need to understand the influencing factors and what kinds of empirical results appear in the literature. To address these questions, we conducted a systematic literature review and analyzed a final set of 134 relevant articles. The findings suggest that anthropomorphic and zoomorphic robots are more popular as human companions, while there is a lack of interest in functional and caricatured robots. Human-like and animal-like features are also implemented more often in companion robots. Studies rarely exploit the mobility available in these robots in companionship scenarios, especially in outdoor settings. In addition, co-existence and co-performance-based implementations with humans have rarely been observed. Based on the results, we propose a future research agenda that includes thematic, theoretical, methodological, and technological agendas. This study will help us understand the current state and usage of robotic companions, which will then potentially aid in determining how HRC can be leveraged and integrated more seamlessly into human lives for better effectiveness.

Citations: 0
Enhancing Robotic Collaborative Tasks Through Contextual Human Motion Prediction and Intention Inference
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-07-13 | DOI: 10.1007/s12369-024-01140-2
Javier Laplaza, Francesc Moreno, Alberto Sanfeliu

Predicting human motion based on a sequence of past observations is crucial for various applications in robotics and computer vision. Currently, this problem is typically addressed by training deep learning models on some of the most well-known 3D human motion datasets widely used in the community. However, these datasets generally do not consider how humans behave and move when a robot is nearby, leading to a data distribution different from the real distribution of motion that robots will encounter when collaborating with humans. Additionally, incorporating contextual information related to the interactive task between the human and the robot, as well as information on the human's willingness to collaborate with the robot, can not only improve the accuracy of the predicted sequence but also serve as a useful tool for robots to navigate collaborative tasks successfully. In this research, we propose a deep learning architecture that predicts both 3D human body motion and human intention for collaborative tasks. The model employs a multi-head attention mechanism, taking human motion and task context as inputs. The resulting outputs include the predicted motion of the human body and the inferred human intention. We have validated this architecture in two different tasks: collaborative object handover and collaborative grape harvesting. While the architecture remains the same for both tasks, the inputs differ. In the handover task, the architecture takes human motion, the robot end effector, and obstacle positions as inputs. Additionally, the model can be conditioned on the desired intention to tailor the output motion accordingly. To assess the performance of the collaborative handover task, we conducted a user study to evaluate human perception of the robot's sociability, naturalness, security, and comfort. This evaluation was conducted by comparing the robot's behavior when it utilized the prediction in its planner versus when it did not. Furthermore, we also applied the model to a collaborative grape-harvesting task. By integrating human motion prediction and human intention inference, our architecture shows promising results in enhancing the capabilities of robots in collaborative scenarios. The model's flexibility allows it to handle various tasks with different inputs, making it adaptable to real-world applications.
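The abstract names a multi-head attention architecture that takes human motion and task context as inputs and outputs predicted motion plus inferred intention, but gives no layer sizes or training details. The PyTorch sketch below is a structural illustration only: the pose dimensionality, the context encoding, the transformer depth, the prediction horizon, and the two-class intention head are all assumed, not taken from the paper.

```python
import torch
import torch.nn as nn

class MotionIntentionPredictor(nn.Module):
    """Structural sketch: attention over past poses plus a context token, two output heads."""
    def __init__(self, pose_dim=51, ctx_dim=16, d_model=128, n_heads=8,
                 horizon=25, n_intents=2):
        super().__init__()
        self.pose_in = nn.Linear(pose_dim, d_model)
        self.ctx_in = nn.Linear(ctx_dim, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=2)
        self.motion_head = nn.Linear(d_model, horizon * pose_dim)   # future pose sequence
        self.intent_head = nn.Linear(d_model, n_intents)            # e.g. willing vs. unwilling
        self.horizon, self.pose_dim = horizon, pose_dim

    def forward(self, past_poses, context):
        # past_poses: (B, T, pose_dim); context: (B, ctx_dim)
        tokens = torch.cat([self.ctx_in(context).unsqueeze(1),
                            self.pose_in(past_poses)], dim=1)       # (B, T+1, d_model)
        h = self.encoder(tokens)[:, 0]                              # pooled context token
        motion = self.motion_head(h).view(-1, self.horizon, self.pose_dim)
        intent_logits = self.intent_head(h)
        return motion, intent_logits

# Example forward pass with random data: 4 sequences of 50 past frames.
model = MotionIntentionPredictor()
motion, intent = model(torch.randn(4, 50, 51), torch.randn(4, 16))
print(motion.shape, intent.shape)   # torch.Size([4, 25, 51]) torch.Size([4, 2])
```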

Citations: 0
Exploring the Viability of Socially Assistive Robots for At-Home Cognitive Monitoring: Potential and Limitations
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-07-12 | DOI: 10.1007/s12369-024-01158-6
Matteo Luperto, Marta Romeo, Francesca Lunardini, Javier Monroy, Daniel Hernández García, Carlo Abbate, Angelo Cangelosi, Simona Ferrante, Javier Gonzalez-Jimenez, Nicola Basilico, N. Alberto Borghese

The early detection of mild cognitive impairment, a condition of increasing impact in our aging society, is a challenging task with no established answer. One promising solution is the deployment of robotic systems and ambient assisted living technology in the homes of older adults for monitoring and assistance. In this work, we present and discuss a qualitative analysis of the feasibility and acceptability of a socially assistive robot (SAR) deployed in prospective users' houses to monitor their cognitive capabilities through a set of digitalised neuropsychological tests and spot questions conveniently integrated within the robotic assistant's daily tasks. We do this by describing an experimental campaign in which a robotic system, integrated with a larger framework, was installed in the houses of 10 users for at least 10 weeks, during which their cognitive capabilities were monitored by the robot. Concretely, the robot supervised the users during the completion of the tests and transparently monitored them by asking questions interleaved with their everyday activities. Results show general acceptance of such technology, which was able to carry out the intended tasks without being too invasive, paving the way for impactful at-home use of SARs.

Citations: 0
Evaluating Human-Care Robot Services for the Elderly: An Experimental Study
IF 4.7 | CAS Zone 2, Computer Science | Q2 ROBOTICS | Pub Date: 2024-07-08 | DOI: 10.1007/s12369-024-01157-7
Miyoung Cho, Dohyung Kim, Minsu Jang, Jaeyeon Lee, Jaehong Kim, Woo-han Yun, Youngwoo Yoon, Jinhyeok Jang, Chankyu Park, Woo-Ri Ko, Jaeyoon Jang, Ho-Sub Yoon, Daeha Lee, Choulsoo Jang

The increase in the elderly population is emerging as a serious social issue. The coronavirus pandemic has increased the number of elderly people suffering from depression and loneliness owing to the lack of face-to-face activities. In this study, we developed an integrated system for a human-care robot service, taking into account cognitive and emotional support for elderly people, and verified its stability and usefulness in the real world. We recruited 40 elderly people for an experiment in an apartment testbed environment, and two elderly people who had been living alone for a long time participated in the experiment at their homes. Quantitative experimental results were analyzed by comparing service success rates and user satisfaction in the two different test environments to verify the stability of the service. Qualitative evaluations were also conducted through surveys and interviews to assess the usefulness of the service.

Citations: 0