
Latest publications in Robotics and Autonomous Systems

Early detection of human handover intentions in human–robot collaboration: Comparing EEG, gaze, and hand motion
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-11-05 · DOI: 10.1016/j.robot.2025.105244
Parag Khanna, Nona Rajabi, Sumeyra U. Demir Kanik, Danica Kragic, Mårten Björkman, Christian Smith
Human–robot collaboration (HRC) relies on accurate and timely recognition of human intentions to ensure seamless interactions. Among common HRC tasks, human-to-robot object handovers have been studied extensively for planning the robot’s actions during object reception, assuming that the human intends to hand over the object. However, distinguishing handover intentions from other actions has received limited attention. Most research on handovers has focused on visually detecting motion trajectories, which often results in delays or false detections when trajectories overlap. This paper investigates whether human intentions for object handovers are reflected in non-movement-based physiological signals. We conduct a multimodal analysis comparing three data modalities: electroencephalogram (EEG), gaze, and hand-motion signals. Our study aims to distinguish between handover-intended human motions and non-handover motions in an HRC setting, evaluating each modality’s performance in predicting and classifying these actions before and after human movement initiation. We develop and evaluate human intention detectors based on these modalities, comparing their accuracy and timing in identifying handover intentions. To the best of our knowledge, this is the first study to systematically develop and test intention detectors across multiple modalities within the same experimental context of human–robot handovers. Our analysis reveals that handover intention can be detected from all three modalities. Nevertheless, gaze signals are both the earliest and the most accurate for classifying a motion as handover-intended or not.
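To make the windowed, per-modality detection idea concrete, here is a minimal sketch of a gaze-based intention classifier. The features (fraction of gaze samples on a robot region of interest, gaze speed statistics), the synthetic data, and the logistic-regression classifier are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaze_window_features(gaze_xy, robot_roi, dt=1.0 / 60.0):
    """Summarize one window of 2D gaze points into a small feature vector."""
    x0, y0, x1, y1 = robot_roi
    on_robot = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
                (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
    speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) / dt
    return np.array([on_robot.mean(), speed.mean(), speed.std()])

rng = np.random.default_rng(0)
roi = (0.4, 0.4, 0.6, 0.6)          # hypothetical robot ROI in gaze coordinates
labels = rng.random(200) < 0.5      # True = handover-intended trial
# Synthetic training windows: handover trials concentrate gaze near the robot.
X = np.array([gaze_window_features(
        rng.normal(0.5 if y else 0.2, 0.08, size=(30, 2)), roi) for y in labels])
clf = LogisticRegression().fit(X, labels)

# At run time, slide the window over the live gaze stream and flag handover
# intent as soon as the predicted probability crosses a threshold.
print(f"handover probability: {clf.predict_proba(X[:1])[0, 1]:.2f}")
```

The same window-then-classify structure applies to the EEG and hand-motion channels with different feature extractors, which is what makes an accuracy and timing comparison across modalities straightforward.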
Citations: 0
Mobile robot defensive wayfinding for incomplete and ambiguous route instructions
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-11-04 · DOI: 10.1016/j.robot.2025.105246
Hao Liang Chen, Elena Torta, Herman Bruyninckx, René van de Molengraft
Humans typically convey route instructions as a sequence of locations where actions need to be executed, e.g., turn left at the second crossing. It is not uncommon that humans generalize or forget parts of the environment or route instructions. Consequently, route instructions can be incomplete, as decision points, in the form of action-location descriptions, are omitted. In addition, an omitted location description may be similar to one already present in the route instructions, making it ambiguous to the wayfinder at which specific location to execute the action. Defensive wayfinding then characterizes the procedure for dealing with such uncertainties in the route instructions. The state of the art is not capable of performing defensive wayfinding for mobile robots with route instructions that are incomplete and ambiguous regarding the specific location for action execution. This work tackles this problem by taking inspiration from practices in the human wayfinding literature and incorporating them into the robotics context, in particular by adding three types of knowledge to the route instructions: (1) the types of locations that can be encountered by the robot, (2) the action models to leave those locations, and (3) the temporal and spatial constraints on the (topological) sequence of encountered locations. (In the context of human defensive wayfinding, this additional information is “background knowledge”, or it is provided in the form of a rough sketch.) Our defensive wayfinding approach relies on a hypothesis tree that associates (parts of) the executed robot path with the instructed route. We experimentally validate defensive wayfinding in simulation with a mobile robot equipped with a 2D laser range finder in a corridor-junction environment that can be topologically inconsistent with the provided route instructions.
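The hypothesis-tree idea can be illustrated with a small sketch: each hypothesis aligns the sequence of locations the robot has actually encountered with a position in the (possibly incomplete) instructed route, and ambiguous or omitted decision points simply keep several hypotheses alive. The location types and the skip-based expansion rule below are simplifying assumptions, not the paper's full model with action models and spatio-temporal constraints.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypothesis:
    route_index: int   # next instructed decision point we expect to match
    skips: int         # locations assumed omitted from the instructions so far

def expand(hypotheses, observed_type, route):
    """Branch every hypothesis on a newly encountered location."""
    nxt = set()
    for h in hypotheses:
        if h.route_index < len(route) and route[h.route_index][0] == observed_type:
            # The observation matches the next instructed decision point.
            nxt.add(Hypothesis(h.route_index + 1, h.skips))
        # Alternatively, assume the instructor omitted this location.
        nxt.add(Hypothesis(h.route_index, h.skips + 1))
    return nxt

# Instructed route: "crossing" appears twice, so an observed crossing is
# ambiguous, and both alignments survive in the tree.
route = [("crossing", "turn left"), ("crossing", "turn right")]
beliefs = {Hypothesis(0, 0)}
for observed in ["junction", "crossing", "crossing"]:
    beliefs = expand(beliefs, observed, route)
print(sorted((h.route_index, h.skips) for h in beliefs))
```

In a full implementation, the temporal and spatial constraints mentioned in the abstract would prune hypotheses that become inconsistent, keeping the tree tractable.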
Citations: 0
Formation control of swarm robotics: A survey from biological inspirations to design automation methods
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-11-04 · DOI: 10.1016/j.robot.2025.105245
Wenji Li, Zhaojun Wang, Chaotao Guan, Chuangbin Chen, Boxi Wang, Pengxiang Ren, Yifeng Qiu, Qinchang Zhang, Haoyu Wang, Dongliang Wang, Jiafan Zhuang, Biao Xu, Zhifeng Hao, Zhun Fan
Swarm robotic systems usually consist of multiple collaborating robots that interact and cooperate to accomplish complex tasks beyond the capabilities of individual robots. This cooperation often leads to the emergence of intelligent behaviors at the collective level. This paper explores the potential of leveraging intelligent behaviors inspired by biological collective behaviors to design formation control strategies for swarm robotics. Specifically, we provide a systematic review of advancements in behavior control strategies for the formation control of swarm robotics, drawing insights from animal collective behaviors and multi-cellular organisms. We then delve into design automation methods for formation control in swarm robotics, which have emerged as a primary research focus due to the growing demand for autonomy and the increasing complexity and variety of tasks and environments. Finally, we analyze and summarize the challenges and future directions of swarm robotics, especially emphasizing the emergence of collective intelligence and design automation for the formation control of swarm robotics.
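As a concrete example of the classical, biologically inspired end of the design space this survey covers, the sketch below implements displacement-based formation control with a linear consensus protocol over a fixed neighbor graph. The graph, desired offsets, and gains are illustrative choices, not drawn from the survey.

```python
import numpy as np

def formation_step(p, offsets, neighbors, gain=0.5, dt=0.1):
    """One integrator update: move toward neighbors' positions minus offsets."""
    v = np.zeros_like(p)
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            v[i] += gain * ((p[j] - offsets[j]) - (p[i] - offsets[i]))
    return p + dt * v

offsets = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # triangle shape
neighbors = [[1, 2], [0, 2], [0, 1]]                       # complete graph
p = np.random.default_rng(1).uniform(-2, 2, size=(3, 2))   # random start
for _ in range(200):
    p = formation_step(p, offsets, neighbors)
print(np.round(p - p[0] + offsets[0] - offsets, 3))        # ~zero: formation reached
```

Each robot only needs its neighbors' relative positions, which is why such local rules scale to large swarms and why design automation methods often search over exactly these gains and graphs.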
Citations: 0
Discovering antagonists in networks of systems: Robot deployment
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-11-03 · DOI: 10.1016/j.robot.2025.105235
Ingeborg Wenger, Peter Eberhard, Henrik Ebel
A contextual anomaly detection method is proposed and applied to the physical motions of a robot swarm executing a coverage task. Using simulations of a swarm’s normal behavior, a normalizing flow is trained to predict the likelihood of a robot motion within the current context of its environment. During application, the predicted likelihood of the observed motions is used by a detection criterion that categorizes a robot agent as normal or antagonistic. The proposed method is evaluated on five different strategies of antagonistic behavior. Importantly, only readily available simulated data of normal robot behavior is used for training such that the nature of the anomalies need not be known beforehand. The best detection criterion correctly categorizes at least 80% of each antagonistic type while maintaining a false positive rate of less than 5% for normal robot agents. Additionally, the method is validated in hardware experiments, yielding results similar to the simulated scenarios. Compared to the state-of-the-art approach, both the predictive performance of the normalizing flow and the robustness of the detection criterion are increased.
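The detection layer on top of the trained flow can be summarized in a few lines. In the sketch below the flow is stubbed out by placeholder log-likelihood samples, and the percentile threshold and flagged-fraction rule are illustrative assumptions rather than the paper's calibrated criterion.

```python
import numpy as np

def fit_threshold(train_loglik, percentile=5.0):
    """Calibrate on normal data so ~5% of normal motions fall below threshold."""
    return np.percentile(train_loglik, percentile)

def is_antagonistic(agent_loglik, threshold, max_low_fraction=0.2):
    """Flag the agent if too many of its motions are assigned low likelihood."""
    return np.mean(agent_loglik < threshold) > max_low_fraction

rng = np.random.default_rng(2)
train = rng.normal(0.0, 1.0, 5000)       # stand-in for flow log p(motion | context)
thr = fit_threshold(train)
normal_agent = rng.normal(0.0, 1.0, 100)
odd_agent = rng.normal(-3.0, 1.0, 100)   # systematically unlikely motions
print(is_antagonistic(normal_agent, thr), is_antagonistic(odd_agent, thr))
```

Because the threshold is calibrated only on normal behavior, the criterion needs no examples of antagonistic strategies beforehand, mirroring the training setup described in the abstract.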
Citations: 0
Linear conversions of nonlinear camera models for robotic vision applications
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-10-31 · DOI: 10.1016/j.robot.2025.105223
Eva Goichon, Guillaume Caron, Pascal Vasseur, Fumio Kanehiro
Camera models play a crucial role in robot vision applications. Yet their diversity poses a challenge when working with data captured with cameras calibrated using different models. In this paper, we address this issue by introducing a mathematical framework that enables conversion between various camera projection models. This approach allows algorithms designed for a specific model to process data from cameras calibrated with other models, eliminating the need for recalibration and enabling the reuse of pre-existing datasets that do not provide access to calibration images.
We present a general conversion method for state-of-the-art camera models and derive three new camera model conversions, covering various camera types, including fisheye and catadioptric systems. Quantitative evaluation is conducted with respect to well-known calibration methods. We compare our method on image undistortion, as well as in practical applications such as SLAM, visual servoing, and visual odometry. The results demonstrate that our conversion approach achieves performance comparable to calibration without the need for explicit calibration.
This work contributes to a more flexible and adaptive use of cameras in robot applications. The proposed camera model conversion framework is implemented in the open-source libPeR library, available at:
https://github.com/PerceptionRobotique/libPeR_base.
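The core resampling idea behind such conversions can be sketched without the full framework: back-project every pixel of the target model (here pinhole) to a ray, re-project the ray with the source model (here equidistant fisheye), and remap the image. The model pair, focal lengths, and file names below are hypothetical; libPeR generalizes this to other model pairs.

```python
import cv2
import numpy as np

def fisheye_to_pinhole_maps(shape, f_fish, f_pin, c):
    """Pixel maps sending pinhole target pixels to fisheye source pixels."""
    h, w = shape
    u, v = np.meshgrid(np.arange(w, dtype=np.float32),
                       np.arange(h, dtype=np.float32))
    x, y = (u - c[0]) / f_pin, (v - c[1]) / f_pin   # pinhole ray (x, y, 1)
    r = np.sqrt(x * x + y * y)
    theta = np.arctan(r)                            # angle to the optical axis
    rho = f_fish * theta                            # equidistant model: rho = f*theta
    scale = np.where(r > 1e-9, rho / np.maximum(r, 1e-9), f_fish)
    return (x * scale + c[0]).astype(np.float32), (y * scale + c[1]).astype(np.float32)

src = cv2.imread("fisheye.png")                     # hypothetical input image
map_u, map_v = fisheye_to_pinhole_maps(
    src.shape[:2], f_fish=300.0, f_pin=450.0,
    c=(src.shape[1] / 2, src.shape[0] / 2))
out = cv2.remap(src, map_u, map_v, cv2.INTER_LINEAR)
cv2.imwrite("pinhole.png", out)
```

Swapping the two analytic projections for arbitrary calibrated models is exactly where a systematic model-to-model conversion framework pays off.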
Citations: 0
A review of visual perception for robotic bin-picking
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-10-30 · DOI: 10.1016/j.robot.2025.105236
Artur Cordeiro, Luís Freitas Rocha, José Boaventura-Cunha, Daniel Figueiredo, João Pedro Souza
Robotic bin-picking is a critical operation in modern industry, characterised by the detection, selection, and placement of items from a disordered and cluttered environment that may or may not be boundary-limited, e.g., bins, boxes, or containers. In this context, perception systems are employed to localise, detect and estimate grasping points. Despite the considerable progress made, from analytical approaches to recent deep learning methods, challenges remain, as evidenced by the growing body of work proposing distinct solutions. This paper aims to review perception methodologies developed since 2009, providing detailed descriptions and discussions of their implementation. Additionally, it presents an extensive study, detailing each work, along with a comprehensive overview of the advancements in bin-picking perception.
Citations: 0
A model-based approach for co-simulation-driven digital twins in robotics
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-10-28 · DOI: 10.1016/j.robot.2025.105240
Santiago Gil, Arjun Badyal, Alvaro Miyazawa, Peter Gorm Larsen, Ana Cavalcanti
A digital twin (DT) for a robot can support its development and deployment; it is a valuable resource for simulation and monitoring. Creating a DT for a robot, however, is not an easy task, involving heterogeneous simulation models potentially developed by several stakeholders. This paper proposes a systematic and highly automated approach to develop a DT for a robot based on diagrammatic models and on an industry standard for co-simulation: the Functional Mockup Interface (FMI). Our modelling notation is RoboSim, a tool-independent framework to model, verify, and generate code for control software and for simulations of physical robotic platforms. We take advantage of RoboSim’s facilities for structured modelling and code generation to obtain results that help bridge the reality gap and produce DTs with less engineering effort. We present here our technique, using a manufacturing cell as a case study, and its assessment based on existing criteria for DT frameworks. The evaluation establishes that our technique provides significant coverage (specifically, 60%) of the Digital Twinning spectrum.
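For readers unfamiliar with FMI, the sketch below shows the basic co-simulation entry point using the FMPy library. The FMU file, variable names, and start values are hypothetical placeholders, and the RoboSim-based generation and orchestration the paper describes is not reproduced here.

```python
from fmpy import simulate_fmu

result = simulate_fmu(
    "manipulator.fmu",                      # hypothetical FMU exported for the DT
    stop_time=5.0,
    start_values={"joint1.q_start": 0.0},   # hypothetical start value
    output=["joint1.q", "joint1.tau"],      # hypothetical monitored variables
)
# `result` is a structured array; each row is one communication point.
for row in result[:5]:
    print(f"t={row['time']:.3f}  q={row['joint1.q']:.4f}")
```

Because every simulation unit exposes the same FMI interface, units contributed by different stakeholders can be composed into one co-simulated DT without sharing model internals.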
Citations: 0
Understanding Symbols: Towards a Compact Vocabulary Learned from Robot Experience for Task Planning
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-10-28 · DOI: 10.1016/j.robot.2025.105242
Gloria Beraldo, Riccardo Rasconi, Angelo Oddi
Symbolic learning for robotic agents is inherently challenging due to several factors, including the complexity of the physical world, safety regulations, the level of experience, and the kind of features extracted from noisy observed data. However, preventing robots from making incorrect generalizations during learning is essential for the reliable reuse of acquired knowledge, in accordance with the effective and logical cause-effect relations inferred from their interactions with the physical world. In this work, we tackle the problem of achieving a practical vocabulary that is not only usable by the robots but also “accessible” to other agents and humans, promoting mutual understanding and communication. To this end, a data-driven approach is presented that converts the robot’s sensory data into symbols representing both the preconditions and effects of its experienced actions, highlighting the challenges in achieving a “compact” representation. To allow the reuse of the acquired knowledge, and in particular to improve the understanding of the generated vocabulary, different similarity metrics are investigated in order to filter symbols that apparently convey an identical semantics, inferred from a human-readable graphic representation. Finally, some experiments are performed to demonstrate the feasibility of using the generated symbolic representation both to make queries beyond what was originally experienced and to provide explicit high-level commands to the robot.
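One step of this pipeline, filtering symbols that apparently convey identical semantics, can be sketched as a greedy merge over grounded feature vectors. The centroid representation and the cosine threshold below are illustrative assumptions, not the similarity metrics investigated in the paper.

```python
import numpy as np

def merge_similar_symbols(symbols, threshold=0.95):
    """Greedily merge symbols whose grounded vectors have cosine sim >= threshold."""
    kept = []
    for name, vec in symbols:
        v = vec / np.linalg.norm(vec)
        for kname, kvec in kept:
            if float(v @ (kvec / np.linalg.norm(kvec))) >= threshold:
                print(f"merging '{name}' into '{kname}'")
                break
        else:
            kept.append((name, vec))
    return kept

# Hypothetical symbols grounded as centroids of the sensory states they cover.
symbols = [("gripper_open", np.array([1.0, 0.10, 0.00])),
           ("hand_released", np.array([0.98, 0.12, 0.01])),   # near-duplicate
           ("object_on_table", np.array([0.0, 0.20, 1.00]))]
vocab = merge_similar_symbols(symbols)
print([name for name, _ in vocab])   # the compact vocabulary
```

Keeping the vocabulary compact in this way is what allows the precondition/effect symbols to be reused across tasks and read by humans.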
Citations: 0
GBAGC-RL: Goal-based arm-gripper coordination reinforcement learning approach for robotic manipulation skills
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-10-28 · DOI: 10.1016/j.robot.2025.105239
Xiaofan Yang, Yubin Liu, Guoqing Chu, Junyu Wu, Zhuoqi Man, Xuanming Cao, Jie Zhao
Recent research on robotic manipulation via reinforcement learning (RL) has garnered significant attention. However, RL faces hurdles in complex tasks because of high state–action dimensions and reward design complexities. It is important to find an easy-to-use framework to quickly achieve the representation, learning and generalization of robotic manipulation skills. This article proposes a novel manipulation learning method for complex robotic tasks. The key insight is that all complex manipulation tasks involve coordinated arm-gripper collaborative movements in the task space. By using a task representation and subgoal extraction algorithm to discern motion patterns and subgoals, this method addresses the “what to do” aspect of robotic manipulation tasks. Subsequently, it integrates goal-based hierarchical reinforcement learning (HRL) with pretrained foundational skills to address the challenge of “how to do”. This framework, called “goal-based arm-gripper coordination RL” (GBAGC-RL), integrates task representation, subgoal extraction, and goal-based hierarchical reinforcement learning to attain efficient and transferable robotic manipulation skills while drastically simplifying the design of the reward function. Simulation evaluations on multiple complex manipulation tasks demonstrate that the proposed framework exhibits strong generalization and transfer capabilities, outperforms many leading RL methods, and achieves higher task success rates with more stable manipulation skills.
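The two-level structure the abstract describes can be sketched as a goal setter plus a tracking skill. The hand-coded policies below are trivial stand-ins for the learned GBAGC-RL components and serve only to show the interface between the levels: the high level emits a task-space subgoal (arm target plus gripper command), and a pretrained low-level skill tracks it.

```python
import numpy as np

def high_level_policy(state, object_pos):
    """Stand-in for the learned goal setter: subgoal = (arm target, gripper)."""
    above = object_pos + np.array([0.0, 0.0, 0.05])
    aligned = np.linalg.norm(state[:2] - object_pos[:2]) < 0.01
    return (object_pos, 1.0) if aligned else (above, 0.0)  # descend, then close

def low_level_skill(state, subgoal, gain=0.3):
    """Stand-in for a pretrained tracking skill: move the arm toward the subgoal."""
    target, grip = subgoal
    state = state.copy()
    state[:3] += gain * (target - state[:3])
    state[3] = grip
    return state

state = np.array([0.3, 0.0, 0.4, 0.0])   # [x, y, z, gripper]
obj = np.array([0.5, 0.1, 0.1])
for _ in range(60):                      # the high level re-plans every step
    state = low_level_skill(state, high_level_policy(state, obj))
print(np.round(state, 3))                # ends near the object with gripper closed
```

Confining learning to the subgoal level is what lets the reward be defined on goal attainment alone, which is the simplification of reward design the abstract emphasizes.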
Citations: 0
Design and motion stability analysis of a straddle-type live working robot for power distribution lines
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-10-25 · DOI: 10.1016/j.robot.2025.105241
Shangkun Cheng, Daozhu Wei, Zhaowen Hu, Hui Song, Qi Chen, Wei Wang
The motion stability of live working robots on power cables is a key factor influencing operational efficiency. Currently, most of these robots employ a suspended structure, which limits the accuracy of position localization and impairs the clarity of cable condition observation. Additionally, most stability analyses focus solely on the effects of external loads, while overlooking the impact of cable stiffness variations on the robot’s motion stability along the cable. This study first proposes a straddle-type live working robot and develops its control system, enabling safer and more reliable live operations directly on power cables. Subsequently, the robot’s tipping stability during cable traversal is analyzed, and dynamic models are established under three different cable stiffness conditions, leading to the identification of key factors influencing the robot’s motion stability. Finally, a simulated utility pole test bench is constructed to conduct experiments on motion control performance, tipping performance, and motion stability. Experimental results show that the robot’s control error remains below 6%, the minimum tipping angle ranges from 29° to 32°, and the average Jerk is less than 2 m/s³. These findings are expected to contribute to substantial advancements in the development of live working robots.
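The smoothness metric reported above is straightforward to reproduce from logged data: average jerk as the mean magnitude of the third numerical derivative of a uniformly sampled position trajectory. The 100 Hz sampling rate and the minimum-jerk-style test profile below are illustrative assumptions, not the paper's experimental data.

```python
import numpy as np

def average_jerk(position, dt):
    """Mean jerk magnitude (m/s^3) from positions of shape (N, D) sampled at dt."""
    jerk = np.diff(position, n=3, axis=0) / dt**3   # third finite difference
    return float(np.linalg.norm(jerk, axis=1).mean())

t = np.linspace(0.0, 2.0, 201)                      # 100 Hz samples over 2 s
tau = t / t[-1]
x = 1.0 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)  # smooth 1 m traverse along the cable
pos = np.stack([x, np.zeros_like(x)], axis=1)       # (N, 2): along-cable, lateral
print(f"average jerk: {average_jerk(pos, t[1] - t[0]):.2f} m/s^3")
```

Finite differencing amplifies sensor noise, so in practice the logged positions would be low-pass filtered before the third derivative is taken.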
Citations: 0