
Latest publications from Frontiers in Robotics and AI

A feasibility study: a non-inferiority study comparing head-mounted and console-based virtual reality for robotic surgery training.
IF 3 Q2 ROBOTICS Pub Date : 2026-01-06 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1616462
Kazuho Kawashima, Shadi Ghali, Justin W Collins, Ali Esmaeili

Background: Head-mounted virtual reality (VR) simulations are increasingly explored in healthcare, particularly in patient education, stroke rehabilitation, and surgical training. While VR-based simulation plays a growing role in robotic-assisted surgery (RAS) training, the implications of head-mounted VR in this context remain underexamined.

Method: This prospective, randomised, controlled trial with a single-arm crossover compared two training modalities: a head-mounted VR simulation and a conventional console-based simulation. Participants in the experimental group used head-mounted VR as their primary training method, while the control group trained on a conventional console. Both groups completed a running suture task at baseline, midterm, and final assessments on the surgical console. The primary outcome was the composite score from the final assessment.

Results: Fourteen participants were equally distributed between the two arms. Baseline scores showed no significant differences. A two-way repeated-measures ANOVA demonstrated significant overall improvement across assessments (F(1.688, 20.26) = 48.34, p < 0.001, partial η² = 0.80). No statistically significant difference was found in final composite scores (mean difference: 8.4 ± 9.45, p = 0.391, Cohen's d = -0.48), midterm scores, or granular kinematic data. However, non-inferiority could not be established, as the confidence interval fell outside our pre-set margin. The crossover group required less time (mean difference: 39 ± 9.01 min, p = 0.004) and fewer attempts (mean difference: 8 ± 2.2, p = 0.009) to reach benchmark performance compared to the control group.
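For readers unfamiliar with how such a non-inferiority conclusion is reached, the check reduces to comparing the confidence interval of the score difference against the pre-set margin. The sketch below is a generic illustration with made-up numbers; the margin, the standard error, and the sign of the difference are assumptions for demonstration, not values taken from the study:

```python
# Generic sketch of a non-inferiority check from a confidence interval.
# All numeric values below are illustrative, NOT the study's data.

def noninferiority(mean_diff, se, margin, z=1.96):
    """Two-sided 95% CI for the (experimental - control) score difference.
    Non-inferiority holds if the lower CI bound stays above -margin
    (assuming higher scores are better)."""
    lo = mean_diff - z * se
    hi = mean_diff + z * se
    return lo, hi, lo > -margin

# Hypothetical inputs: difference of -8.4 points, SE of 9.45, margin of 10.
lo, hi, ok = noninferiority(mean_diff=-8.4, se=9.45, margin=10.0)
print(f"95% CI: ({lo:.1f}, {hi:.1f}); non-inferior: {ok}")
```

With these illustrative numbers the lower bound falls below the margin, so non-inferiority cannot be claimed even though the point estimate alone looks unremarkable, which mirrors the logic reported in the abstract.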

Conclusion: Both head-mounted VR and console-based training significantly improved fundamental RAS skills in novices. While our study showed that the VR training shortened the time and attempts required to reach proficiency benchmarks, the small scale of this trial and the breadth of the confidence intervals mean the results should be viewed as preliminary observations. These results provide an initial signal of feasibility that warrants confirmation in larger studies.

Citations: 0
Exploring the ethical, legal, and social implications of cybernetic avatars.
IF 3 Q2 ROBOTICS Pub Date : 2026-01-05 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1724149
Ryuma Shineha

A cybernetic avatar (CA) is a concept that encompasses not only avatars representing virtual bodies in cyberspace but also information and communication technology (ICT) and robotic technologies that enhance the physical, cognitive, and perceptual capabilities of humans. CAs can enable multiple people to remotely operate numerous avatars and robots together to perform complex tasks on a large scale and create the necessary infrastructure for their operation and other related activities. However, due to the novelty of this concept, the ethical, legal, and social implications (ELSI) of CAs have not been discussed sufficiently. Therefore, the objective of this paper is to provide an overview of ELSI in the context of a CA, taking into account the implications from fields similar to that of CAs, such as robotic avatars, virtual avatars, metaverses, virtual reality, extended reality, social robots, human-robot interaction, telepresence, telexistence, embodied technology, and exoskeletons. In our review of ELSI in related fields, we found common themes: safety and security, data privacy, identity theft and identity loss, manipulation, intellectual property management, user addiction and overdependence, cyber abuse, risk management in a specific domain (e.g., medical applications), regulatory gaps, balance between free expression and harmful content, accountability, transparency, distributive justice, prevention of inequalities, dual use, and conceptual changes of familiarity. These issues should not be ignored when considering the social implementation of CAs.

Citations: 0
Wetware network-based AI: a chemical approach to embodied cognition for robotics and artificial intelligence.
IF 3 Q2 ROBOTICS Pub Date : 2026-01-05 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1694338
Luisa Damiano, Antonio Fleres, Andrea Roli, Pasquale Stano

Wetware Network-Based Artificial Intelligence (WNAI) introduces a new approach to robotic cognition and artificial intelligence: autonomous cognitive agents built from synthetic chemical networks. Rooted in Wetware Neuromorphic Engineering, WNAI shifts the focus of this emerging field from disembodied computation and biological mimicry to reticular chemical self-organization as a substrate for cognition. At the theoretical level, WNAI integrates insights from network cybernetics, autopoietic theory and enaction to frame cognition as a materially grounded, emergent phenomenon. At the heuristic level, WNAI defines its role as complementary to existing leading approaches. On the one hand, it complements embodied AI and xenobotics by expanding the design space of artificial embodied cognition into fully synthetic domains. On the other hand, it engages in mutual exchange with neural network architectures, advancing cross-substrate principles of network-based cognition. At the technological level, WNAI offers a roadmap for implementing chemical neural networks and protocellular agents, with potential applications in robotic systems requiring minimal, adaptive, and substrate-sensitive intelligence. By situating wetware neuromorphic engineering within the broader landscape of robotics and AI, this article outlines a programmatic framework that highlights its potential to expand artificial cognition beyond silicon and biohybrid systems.

Citations: 0
Embarrassment in HRI: remediation and the role of robot responses in emotion control.
IF 3 Q2 ROBOTICS Pub Date : 2026-01-02 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1569040
Ahmed Salem, Kaoru Sumi

As robots become increasingly integrated into daily life, their ability to influence human emotions through verbal and nonverbal expressions is gaining attention. While robots have been explored for their role in emotional expression, their potential in emotion regulation, particularly in mitigating or amplifying embarrassment, remains under-explored in human-robot interaction. To address this gap, this study investigates whether and how robots can regulate the emotion of embarrassment through their responses. A between-subjects experiment was conducted with 96 participants (48 males and 48 females) using the social robot Furhat. Participants experienced an embarrassing situation induced by a failure-of-meshing scenario, after which the robot adopted one of three response attitudes: neutral, empathic, or ridiculing. Additionally, the robot's social agency was manipulated by varying its facial appearance between a human-like and an anime-like appearance. The findings indicate that embarrassment was effectively induced, as evidenced by physiological data, body movements, facial expressions, and participants' verbal responses. The anime-faced robot elicited lower embarrassment and arousal due to its lower perceived social agency and anthropomorphism. The robot's attitude was the dominant factor shaping participants' emotional responses and perceptions. The neutral and empathic attitudes paired with an anime face were found to be the most effective in eliciting mild emotions and mitigating embarrassment. Interestingly, an empathic attitude appears to be favored over a neutral one, as it elicited the lowest embarrassment. However, an empathic attitude risks shaming the participant, because its indirect confrontation inherently acknowledges the embarrassing incident, which is undesirable in Japanese culture. Nevertheless, in terms of participants' evaluation of the robot, a neutral attitude was the most favored. This study highlights the role of robot responses in emotion regulation, particularly in embarrassment control, and provides insights into designing socially intelligent robots that can modulate human emotions effectively.

Citations: 0
Human intention recognition by deep LSTM and transformer networks for real-time human-robot collaboration.
IF 3 Q2 ROBOTICS Pub Date : 2025-12-19 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1708987
Matija Mavsar, Mihael Simonič, Aleš Ude

Collaboration between humans and robots is essential for optimizing the performance of complex tasks in industrial environments, reducing worker strain, and improving safety. This paper presents an integrated human-robot collaboration (HRC) system that leverages advanced intention recognition for real-time task sharing and interaction. By utilizing state-of-the-art human pose estimation combined with deep learning models, we developed a robust framework for detecting and predicting worker intentions. Specifically, we employed LSTM-based and transformer-based neural networks with convolutional and pooling layers to classify human hand trajectories, achieving higher accuracy compared to previous approaches. Additionally, our system integrates dynamic movement primitives (DMPs) for smooth robot motion transitions, collision prevention, and automatic motion onset/cessation detection. We validated the system in a real-world industrial assembly task, demonstrating its effectiveness in enhancing the fluency, safety, and efficiency of human-robot collaboration. The proposed method shows promise in improving real-time decision-making in collaborative environments, offering a safer and more intuitive interaction between humans and robots.
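To make the described pipeline concrete, here is a minimal, self-contained sketch of a trajectory classifier with a convolutional layer, temporal pooling, and a softmax head, in the spirit of the architecture summarized above. All shapes, weights, and the four hypothetical intention classes are invented for illustration; this is not the authors' network:

```python
# Illustrative forward pass of a tiny hand-trajectory classifier:
# 1-D convolution over time -> global max-pooling -> softmax head.
# Weights are random; this sketches the architecture shape only.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution over time: x is (T, C_in), w is (K, C_in, C_out)."""
    K, _, _ = w.shape
    T_out = x.shape[0] - K + 1
    out = np.empty((T_out, w.shape[2]))
    for t in range(T_out):
        # Contract the kernel window against the filter bank.
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)  # ReLU

def classify(traj, w_conv, w_fc):
    h = conv1d(traj, w_conv)   # temporal feature maps
    pooled = h.max(axis=0)     # global max-pool over the time axis
    logits = pooled @ w_fc     # linear head over intention classes
    e = np.exp(logits - logits.max())
    return e / e.sum()         # softmax over intentions

traj = rng.normal(size=(50, 3))      # 50 timesteps of (x, y, z) hand position
w_conv = rng.normal(size=(5, 3, 8))  # kernel length 5, 3 channels, 8 filters
w_fc = rng.normal(size=(8, 4))       # 4 hypothetical intention classes
probs = classify(traj, w_conv, w_fc)
```

In a trained system the convolutional features would be fed to an LSTM or transformer encoder before the classification head; the point here is only how a variable-length trajectory is reduced to a fixed-size class distribution.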

Citations: 0
Interactive imitation learning for dexterous robotic manipulation: challenges and perspectives-a survey.
IF 3 Q2 ROBOTICS Pub Date : 2025-12-19 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1682437
Edgar Welte, Rania Rayyes

Dexterous manipulation is a crucial yet highly complex challenge in humanoid robotics, demanding precise, adaptable, and sample-efficient learning methods. As humanoid robots are usually designed to operate in human-centric environments and interact with everyday objects, mastering dexterous manipulation is critical for real-world deployment. Traditional approaches, such as reinforcement learning and imitation learning, have made significant strides, but they often struggle due to the unique challenges of real-world dexterous manipulation, including high-dimensional control, limited training data, and covariate shift. This survey provides a comprehensive overview of these challenges and reviews existing learning-based methods for real-world dexterous manipulation, spanning imitation learning, reinforcement learning, and hybrid approaches. A promising yet underexplored direction is interactive imitation learning, where human feedback actively refines a robot's behavior during training. While interactive imitation learning has shown success in various robotic tasks, its application to dexterous manipulation remains limited. To address this gap, we examine current interactive imitation learning techniques applied to other robotic tasks and discuss how these methods can be adapted to enhance dexterous manipulation. By synthesizing state-of-the-art research, this paper highlights key challenges, identifies gaps in current methodologies, and outlines potential directions for leveraging interactive imitation learning to improve dexterous robotic skills.

Citations: 0
Adaptive mapless mobile robot navigation using deep reinforcement learning based improved TD3 algorithm.
IF 3 Q2 ROBOTICS Pub Date : 2025-12-18 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1625968
Shoaib Mohd Nasti, Zahoor Ahmad Najar, Mohammad Ahsan Chishti

Navigating in unknown environments without prior maps poses a significant challenge for mobile robots due to sparse rewards, dynamic obstacles, and limited prior knowledge. This paper presents an Improved Deep Reinforcement Learning (DRL) framework based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm for adaptive mapless navigation. In addition to architectural enhancements, the proposed method offers theoretical benefits by incorporating a latent-state encoder and predictor module to transform high-dimensional sensor inputs into compact embeddings. This compact representation reduces the effective dimensionality of the state space, enabling smoother value-function approximation and mitigating overestimation errors common in actor-critic methods. It uses intrinsic rewards derived from prediction error in the latent space to promote exploration of novel states. The intrinsic reward encourages the agent to prioritize uncertain yet informative regions, improving exploration efficiency under sparse extrinsic reward signals and accelerating convergence. Furthermore, training stability is achieved through regularization of the latent space via maximum mean discrepancy (MMD) loss. By enforcing consistent latent dynamics, the MMD constraint reduces variance in target value estimation and results in more stable policy updates. Experimental results in simulated ROS2/Gazebo environments demonstrate that the proposed framework outperforms standard TD3 and other improved TD3 variants. Our model achieves a 93.1% success rate and a low 6.8% collision rate, reflecting efficient and safe goal-directed navigation. These findings confirm that combining intrinsic motivation, structured representation learning, and regularization-based stabilization produces more robust and generalizable policies for mapless mobile robot navigation.
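Two of the ingredients described above, a curiosity-style intrinsic reward from latent prediction error and an MMD regularizer on the latent space, can be sketched in a few lines. The RBF kernel choice, latent dimensions, and coefficients here are illustrative assumptions, not details taken from the paper:

```python
# Hedged sketch: intrinsic reward from latent prediction error, and a
# (biased) RBF-kernel MMD^2 penalty between encoder outputs and a target
# latent distribution. All sizes and constants are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def intrinsic_reward(z_pred, z_next, scale=1.0):
    """Curiosity bonus: squared error of the latent-state predictor."""
    return scale * float(np.sum((z_pred - z_next) ** 2))

def mmd_rbf(x, y, sigma=1.0):
    """Biased MMD^2 estimate with an RBF kernel between two latent batches.
    The biased V-statistic is always >= 0 (squared RKHS mean distance)."""
    def k(a, b):
        d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d / (2 * sigma ** 2))
    return float(k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean())

z_batch = rng.normal(size=(32, 16))  # encoder outputs for a minibatch
prior = rng.normal(size=(32, 16))    # samples from the target latent prior
r_int = intrinsic_reward(rng.normal(size=16), rng.normal(size=16))
penalty = mmd_rbf(z_batch, prior)
```

In a full TD3 training loop the intrinsic reward would be added to the environment reward before the critic update, and the MMD penalty would enter the encoder's loss; both terms as written are scalar and differentiable-in-principle, which is what the regularization argument above relies on.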

{"title":"Adaptive mapless mobile robot navigation using deep reinforcement learning based improved TD3 algorithm.","authors":"Shoaib Mohd Nasti, Zahoor Ahmad Najar, Mohammad Ahsan Chishti","doi":"10.3389/frobt.2025.1625968","DOIUrl":"10.3389/frobt.2025.1625968","url":null,"abstract":"<p><p>Navigating in unknown environments without prior maps poses a significant challenge for mobile robots due to sparse rewards, dynamic obstacles, and limited prior knowledge. This paper presents an Improved Deep Reinforcement Learning (DRL) framework based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm for adaptive mapless navigation. In addition to architectural enhancements, the proposed method offers theoretical benefits by incorporating a latent-state encoder and predictor module to transform high-dimensional sensor inputs into compact embeddings. This compact representation reduces the effective dimensionality of the state space, enabling smoother value-function approximation and mitigating overestimation errors common in actor-critic methods. It uses intrinsic rewards derived from prediction error in the latent space to promote exploration of novel states. The intrinsic reward encourages the agent to prioritize uncertain yet informative regions, improving exploration efficiency under sparse extrinsic reward signals and accelerating convergence. Furthermore, training stability is achieved through regularization of the latent space via maximum mean discrepancy (MMD) loss. By enforcing consistent latent dynamics, the MMD constraint reduces variance in target value estimation and results in more stable policy updates. Experimental results in simulated ROS2/Gazebo environments demonstrate that the proposed framework outperforms standard TD3 and other improved TD3 variants. Our model achieves a 93.1% success rate and a low 6.8% collision rate, reflecting efficient and safe goal-directed navigation. These findings confirm that combining intrinsic motivation, structured representation learning, and regularization-based stabilization produces more robust and generalizable policies for mapless mobile robot navigation.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1625968"},"PeriodicalIF":3.0,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12756063/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145901283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From AIBO to robosphere. Organizational interdependencies in sustainable robotics.
IF 3 Q2 ROBOTICS Pub Date : 2025-12-18 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1716801
Antonio Fleres, Luisa Damiano

The challenge of sustainability in robotics is usually addressed in terms of materials, energy, and efficiency. Yet the long-term viability of robotic systems also depends on organizational interdependencies that shape how they are maintained, experienced, and integrated into human environments. The present article develops this systemic perspective by advancing the hypothesis that such interdependencies can be understood as self-organizing dynamics. To examine this hypothesis, we analyze the case of Sony's AIBO robotic dogs. Originally designed for social companionship, AIBO units gave rise to a hybrid socio-technical ecosystem in which owners, repair specialists, and ritual practices sustained the robots long after their commercial discontinuation. Building on self-organization theory, we introduce the concept of the "robosphere" as an evolving network of relations in which robotic and human agents co-constitute resilient, sustainability-oriented ecosystems. Extending self-organization beyond its classical biological and technical domains, we argue that robotic sustainability must be framed not as a narrow technical issue but as a complex, multifactorial, and distributed process grounded in organizational interdependencies that integrate technical, cognitive, social, and affective dimensions of human life. Our contribution is twofold. First, we propose a modeling perspective that interprets sustainability in robotics as an emergent property of these interdependencies, exemplified by repair, reuse, and ritual practices that prolonged AIBO's lifecycle. Second, we outline a set of systemic design principles to inform the development of future human-robot ecosystems. By situating the AIBO case within the robospheric framework, this Hypothesis and Theory article advances the view that hybrid socio-technical collectives can generate sustainability from within. It outlines a programmatic horizon for rethinking social robotics not as disposable products, but as integral nodes of co-evolving, sustainable human-robot ecologies.

{"title":"From AIBO to robosphere. Organizational interdependencies in sustainable robotics.","authors":"Antonio Fleres, Luisa Damiano","doi":"10.3389/frobt.2025.1716801","DOIUrl":"10.3389/frobt.2025.1716801","url":null,"abstract":"<p><p>The challenge of sustainability in robotics is usually addressed in terms of materials, energy, and efficiency. Yet the long-term viability of robotic systems also depends on organizational interdependencies that shape how they are maintained, experienced, and integrated into human environments. The present article develops this systemic perspective by advancing the hypothesis that such interdependencies can be understood as self-organizing dynamics. To examine this hypothesis, we analyze the case of Sony's AIBO robotic dogs. Originally designed for social companionship, AIBO units gave rise to a hybrid socio-technical ecosystem in which owners, repair specialists, and ritual practices sustained the robots long after their commercial discontinuation. Building on self-organization theory, we introduce the concept of the \"robosphere\" as an evolving network of relations in which robotic and human agents co-constitute resilient, sustainability-oriented ecosystems. Extending self-organization beyond its classical biological and technical domains, we argue that robotic sustainability must be framed not as a narrow technical issue but as a complex, multifactorial, and distributed process grounded in organizational interdependencies that integrate technical, cognitive, social, and affective dimensions of human life. Our contribution is twofold. First, we propose a modeling perspective that interprets sustainability in robotics as an emergent property of these interdependencies, exemplified by repair, reuse, and ritual practices that prolonged AIBO's lifecycle. Second, we outline a set of systemic design principles to inform the development of future human-robot ecosystems. By situating the AIBO case within the robospheric framework, this Hypothesis and Theory article advances the view that hybrid socio-technical collectives can generate sustainability from within. It outlines a programmatic horizon for rethinking social robotics not as disposable products, but as integral nodes of co-evolving, sustainable human-robot ecologies.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1716801"},"PeriodicalIF":3.0,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12756144/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145900511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Slip detection for compliant robotic hands using inertial signals and deep learning.
IF 3 Q2 ROBOTICS Pub Date : 2025-12-18 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1698591
Miranda Cravetz, Purva Vyas, Cindy Grimm, Joseph R Davidson

When a passively compliant hand grasps an object, slip events are often accompanied by flexion or extension of the finger or finger joints. This paper investigates whether a combination of orientation change and slip-induced vibration at the fingertip, as sensed by an inertial measurement unit (IMU), can be used as a slip indicator. Using a tendon-driven hand, which achieves passive compliance through underactuation, we performed 195 manipulation trials involving both slip and non-slip conditions. We then labeled this data automatically using motion-tracking data, and trained a convolutional neural network (CNN) to detect the slip events. Our results show that slip can be successfully detected from IMU data, even in the presence of other disturbances. This remains the case when deploying the trained network on data from a different gripper performing a new manipulation task on a previously unseen object.
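The two cues the abstract names — orientation change of the fingertip and slip-induced vibration — can be summarized per IMU window before (or alongside) a learned model. The sketch below is a hypothetical helper, not the authors' pipeline (they train a CNN on the IMU data directly); the window length, sample period `dt`, and the use of simple gyro integration and detrended acceleration energy are all illustrative assumptions.

```python
import numpy as np

def slip_features(accel, gyro, dt=0.01):
    """Summarize one IMU window as (orientation_change, vibration_energy).

    accel: (T, 3) accelerometer samples in m/s^2.
    gyro:  (T, 3) gyroscope samples in rad/s.
    dt:    sample period in seconds (assumed uniform).
    """
    # Orientation change: magnitude of integrated angular rate over the window.
    # A finger joint flexing/extending during slip produces a large value here.
    orientation_change = np.linalg.norm((gyro * dt).sum(axis=0))

    # Vibration: mean energy of the detrended (mean-removed) acceleration,
    # a crude high-pass that discards gravity and slow arm motion.
    accel_hp = accel - accel.mean(axis=0, keepdims=True)
    vibration_energy = (accel_hp ** 2).sum() / len(accel)

    return orientation_change, vibration_energy
```

A minimal detector could threshold these two features jointly; the paper's CNN instead learns the mapping from raw windows, which is what lets it generalize across grippers and objects.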

{"title":"Slip detection for compliant robotic hands using inertial signals and deep learning.","authors":"Miranda Cravetz, Purva Vyas, Cindy Grimm, Joseph R Davidson","doi":"10.3389/frobt.2025.1698591","DOIUrl":"10.3389/frobt.2025.1698591","url":null,"abstract":"<p><p>When a passively compliant hand grasps an object, slip events are often accompanied by flexion or extension of the finger or finger joints. This paper investigates whether a combination of orientation change and slip-induced vibration at the fingertip, as sensed by an inertial measurement unit (IMU), can be used as a slip indicator. Using a tendon-driven hand, which achieves passive compliance through underactuation, we performed 195 manipulation trials involving both slip and non-slip conditions. We then labeled this data automatically using motion-tracking data, and trained a convolutional neural network (CNN) to detect the slip events. Our results show that slip can be successfully detected from IMU data, even in the presence of other disturbances. This remains the case when deploying the trained network on data from a different gripper performing a new manipulation task on a previously unseen object.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1698591"},"PeriodicalIF":3.0,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12756126/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145900958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Evaluating human perceptions of android robot facial expressions based on variations in instruction styles.
IF 3 Q2 ROBOTICS Pub Date : 2025-12-16 eCollection Date: 2025-01-01 DOI: 10.3389/frobt.2025.1728647
Ayaka Fujii, Carlos Toshinori Ishi, Kurima Sakai, Tomo Funayama, Ritsuko Iwai, Yusuke Takahashi, Takatsune Kumada, Takashi Minato

Robots that interact with humans are required to express emotions in ways that are appropriate to the context. While most prior research has focused primarily on basic emotions, real-life interactions demand more nuanced expressions. In this study, we extended the expressive capabilities of the android robot Nikola by implementing 63 facial expressions, covering not only complex emotions and physical conditions, but also differences in intensity. At Expo 2025 in Japan, more than 600 participants interacted with Nikola by describing situations in which they wanted the robot to perform facial expressions. The robot inferred emotions using a large language model and performed corresponding facial expressions. Questionnaire responses revealed that participants rated the robot's behavior as more appropriate and emotionally expressive when their instructions were abstract, compared to when they explicitly included emotions or physical states. This suggests that abstract instructions enhance perceived agency in the robot. We also investigated and discussed how impressions towards the robot varied depending on the expressions it performed and the personality traits of participants. This study contributes to the research field of human-robot interaction by demonstrating how adaptive facial expressions, in association with instruction styles, are linked to shaping human perceptions of social robots.

{"title":"Evaluating human perceptions of android robot facial expressions based on variations in instruction styles.","authors":"Ayaka Fujii, Carlos Toshinori Ishi, Kurima Sakai, Tomo Funayama, Ritsuko Iwai, Yusuke Takahashi, Takatsune Kumada, Takashi Minato","doi":"10.3389/frobt.2025.1728647","DOIUrl":"10.3389/frobt.2025.1728647","url":null,"abstract":"<p><p>Robots that interact with humans are required to express emotions in ways that are appropriate to the context. While most prior research has focused primarily on basic emotions, real-life interactions demand more nuanced expressions. In this study, we extended the expressive capabilities of the android robot Nikola by implementing 63 facial expressions, covering not only complex emotions and physical conditions, but also differences in intensity. At Expo 2025 in Japan, more than 600 participants interacted with Nikola by describing situations in which they wanted the robot to perform facial expressions. The robot inferred emotions using a large language model and performed corresponding facial expressions. Questionnaire responses revealed that participants rated the robot's behavior as more appropriate and emotionally expressive when their instructions were abstract, compared to when they explicitly included emotions or physical states. This suggests that abstract instructions enhance perceived agency in the robot. We also investigated and discussed how impressions towards the robot varied depending on the expressions it performed and the personality traits of participants. This study contributes to the research field of human-robot interaction by demonstrating how adaptive facial expressions, in association with instruction styles, are linked to shaping human perceptions of social robots.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1728647"},"PeriodicalIF":3.0,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747908/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145879310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0