Pub Date: 2025-12-12 | DOI: 10.1016/j.robot.2025.105298
Xitong Gao, Luwen Yang, Zhijun Zhang
To solve the dual-end (end-effector and mobile platform) motion planning problem of mobile robot manipulators, a composite motion planning (CMP) scheme based on a time-varying recurrent neural network is proposed and analyzed. In traditional schemes, motion planning is performed for the end-effector, and the route of the platform is derived from the end-trajectory calculation. In realistic tasks, however, the end-effector trajectory and the platform route are often independent of each other. To handle both, kinematic models of the two ends are first derived in detail and formulated as equality constraints. Second, a posture constraint and a combined physical constraint are designed and formulated as an equality constraint and an inequality constraint, respectively. The CMP scheme is then proposed and formulated as a constrained quadratic programming problem, and the optimal solution of this problem is obtained by the designed time-varying recurrent neural network. Finally, experiments verify that the proposed CMP scheme can simultaneously plan both ends of the mobile manipulator.
{"title":"A composite motion planning scheme based on time-varying recurrent neural network for mobile robot manipulators","authors":"Xitong Gao , Luwen Yang , Zhijun Zhang","doi":"10.1016/j.robot.2025.105298","DOIUrl":"10.1016/j.robot.2025.105298","url":null,"abstract":"<div><div>In order to solve the double ended (end-effector and mobile platform) motion planning problem of mobile robot manipulator, a composite motion planning (CMP) scheme based on time-varying recurrent neural network is proposed and analyzed. In traditional schemes, motion planning of the end-effector is common, the route of the platform is generated from the end-trajectory calculation. However, in realistic tasks, the end trajectory and platform route are often independent to each other. To do so, kinematic models of the double ended are first derived in detail and formulated as equality constraints, respectively. Secondly, the posture constraint and combined physical constraint are designed and formulated as an equation and inequality constraint, respectively. Then, the CMP scheme is proposed and formulated as a constrained quadratic programming problem. Thirdly, the optimal solution of the quadratic programming problem is obtained by the designed time-varying recurrent neural network. Finally, experiments verify that the proposed CMP scheme can simultaneously plan the double ended of the mobile manipulator.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"197 ","pages":"Article 105298"},"PeriodicalIF":5.2,"publicationDate":"2025-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-11 | DOI: 10.1016/j.robot.2025.105291
Lorenzo Busellato, Federico Cunico, Diego Dall’Alba, Marco Emporio, Andrea Giachetti, Riccardo Muradore, Marco Cristani
To enable flexible, high-throughput automation in settings where people and robots share workspaces, collaborative robotic cells must reconcile stringent safety guarantees with the need for responsive and effective behavior. A central obstacle is the stochastic, task-dependent variability of human motion: when robots fall back on purely reactive or worst-case envelopes, they brake unnecessarily, stall task progress, and undermine the fluidity that true Human–Robot Interaction (HRI) demands. Learning-based human-motion prediction has advanced rapidly in recent years, but most approaches produce worst-case forecasts that do not treat prediction uncertainty in a well-structured way, resulting in over-conservative planning algorithms and limiting their flexibility. This paper introduces Uncertainty-Aware Predictive Control Barrier Functions (UA-PCBFs), a unified framework that fuses probabilistic human hand motion forecasting with the formal safety guarantees of Control Barrier Functions (CBFs). In contrast to standard CBFs and their variants, the framework dynamically adjusts the safety margin using the human-motion uncertainty estimate provided by the deep-learning forecasting module. By accounting for prediction uncertainty, UA-PCBFs give collaborative robots a deeper understanding of future human states, facilitating more fluid and intelligent interactions through informed motion planning. The key contribution is the first integration of epistemic prediction uncertainty directly into predictive CBFs, dynamically adjusting safety margins based on forecast confidence without assumptions about how the uncertainty evolves. UA-PCBFs are validated through comprehensive real-world experiments with an increasing level of realism, including automated setups with a robotic hand (to perform exactly repeatable motions) and direct human–robot interactions (to validate promptness, usability, and human confidence). Relative to state-of-the-art HRI architectures, UA-PCBFs perform better on task-critical metrics, significantly reducing the number of violations of the robot’s safe space during interaction. Data and code will be released upon acceptance.
{"title":"Uncertainty Aware-Predictive Control Barrier Functions: Safer human–robot interaction through probabilistic motion forecasting","authors":"Lorenzo Busellato , Federico Cunico , Diego Dall’Alba , Marco Emporio , Andrea Giachetti , Riccardo Muradore , Marco Cristani","doi":"10.1016/j.robot.2025.105291","DOIUrl":"10.1016/j.robot.2025.105291","url":null,"abstract":"<div><div>To enable flexible, high-throughput automation in settings where people and robots share workspaces, collaborative robotic cells must reconcile stringent safety guarantees with the need for responsive and effective behavior. A dynamic obstacle is the stochastic, task-dependent variability of human motion: when robots fall back on purely reactive or worst-case envelopes, they brake unnecessarily, stall task progress, and tamper with the fluidity that true Human–Robot Interaction (HRI) demands. In recent years, learning-based human-motion prediction has rapidly advanced, although most approaches produce worst-case scenario forecasts that often do not treat prediction uncertainty in a well-structured way, resulting in over-conservative planning algorithms, limiting their flexibility. This paper introduces Uncertainty-Aware Predictive Control Barrier Functions (UA-PCBFs), a unified framework that fuses probabilistic human hand motion forecasting with the formal safety guarantees of Control Barrier Functions (CBFs). In contrast to CBFs and other variants, our framework allows for a dynamic adjustment of the safety margin thanks to the human motion uncertainty estimation provided by the deep-learning forecasting module. Thanks to the awareness of prediction uncertainty, UA-PCBFs empower collaborative robots with a deeper understanding of future human states, facilitating more fluid and intelligent interactions through informed motion planning. Our key contribution is the first integration of epistemic prediction uncertainty directly into predictive CBFs, dynamically adjusting safety margins based on forecast confidence without assumptions about uncertainty evolution. We validate UA-PCBFs through comprehensive real-world experiments with an increasing level of realism, including automated setups (to perform exactly repeatable motions) with a robotic hand and direct human–robot interactions (to validate promptness, usability, and human confidence). Relative to state-of-the-art HRI architectures, UA-PCBFs show better performance in task-critical metrics, significantly reducing the number of violations of the robot’s safe space during interaction with respect to the state-of-the-art. Data and code will be released upon acceptance.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"197 ","pages":"Article 105291"},"PeriodicalIF":5.2,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-11 | DOI: 10.1016/j.robot.2025.105292
Luca Geretti, Stefano Centomo, Michele Boldo, Enrico Martini, Nicola Bombieri, Davide Quaglia, Tiziano Villa
In mixed human–robot work cells, the emphasis is traditionally on collision avoidance to prevent injuries and production downtime. In this paper we discuss how long in advance a collision can be predicted, given the behavior of a robotic arm and the current occupancy of both the robot and the human. The behavior of the robot is a sequence of predefined operations that constitute its plan, each one with a given trajectory. However, we do not know the exact trajectory or the plan a priori. Under the assumption that the plan is cyclic, we propose an approach to learn it in real time from state samples and use the resulting model to estimate the time before a collision. The pose of the human is obtained by a multi-camera inference application based on neural networks at the edge, to preserve privacy and enforce scalability. The occupancy of the manipulator and of the human is modeled through the composition of segments, which overcomes the traditional “virtual cage” and can be adapted to different human beings and robots. The system has been implemented in a real factory scenario to demonstrate its readiness with respect to both industrial constraints and computational complexity.
{"title":"Collision prediction using plan learning in mixed human–robot work cells","authors":"Luca Geretti , Stefano Centomo , Michele Boldo , Enrico Martini , Nicola Bombieri , Davide Quaglia , Tiziano Villa","doi":"10.1016/j.robot.2025.105292","DOIUrl":"10.1016/j.robot.2025.105292","url":null,"abstract":"<div><div>In mixed human–robot work cells the emphasis is traditionally on collision avoidance to circumvent injuries and production down times. In this paper we discuss how long in advance a collision can be predicted given the behavior of a robotic arm and the current occupancy of both the robot and the human. The behavior of the robot is a sequence of predefined operations that constitute its plan, each one with a given trajectory. However, we do not know the exact trajectory or the plan a priori. Under the assumption that the plan has a cyclic character, we propose an approach to learn it in real time from state samples and use the resulting model to estimate the time before a collision. The pose of the human is obtained by a multi-camera inference application based on neural networks at the edge to preserve privacy and enforce scalability. The occupancy of the manipulator and of the human are modeled through the composition of segments which overcomes the traditional “virtual cage” and can be adapted to different human beings and robots. The system has been implemented in a real factory scenario to demonstrate its readiness regarding both industrial constraints and computational complexity.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"197 ","pages":"Article 105292"},"PeriodicalIF":5.2,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-11 | DOI: 10.1016/j.robot.2025.105294
Lixia Wang, Qing Tang, Haoyang Xing, Jiarui Dong
Since its proposal, transcranial magnetic stimulation (TMS) has found extensive applications in the treatment of brain disorders and other fields. Robot-assisted TMS therapy ensures the accuracy and durability of treatment positioning. However, the use of robot-assisted therapy also introduces potential collision and path-planning challenges. Due to its redundant degrees of freedom (DOF), the robot has infinitely many configurations for each given stimulation. The problem of visiting multiple target points and returning to the initial position in TMS therapy is a combination of a generalized traveling salesman problem with neighborhoods (GTSPN) and a collision-free path planning problem. A global method based on the probabilistic roadmap (PRM) and the A* algorithm is proposed. To visit target points safely, the proposed method samples uniformly on a sphere. A special nearest-neighbor definition and delayed collision queries for edges are introduced to accelerate roadmap construction. The proposed method then uses a modified A* algorithm with a multi-goal heuristic to rapidly search for a global path through all target points. Finally, a strategy combining local re-search and global re-search is proposed to obtain the final collision-free path. Experiments are conducted on both simulation and physical platforms using a typical model of the TMS therapy system. The results indicate that the proposed algorithm can effectively avoid collisions and produce optimal planning results for TMS therapy in a short time.
{"title":"Multi-goal path planning for robot-aided transcranial magnetic stimulation","authors":"Lixia Wang , Qing Tang , Haoyang Xing , Jiarui Dong","doi":"10.1016/j.robot.2025.105294","DOIUrl":"10.1016/j.robot.2025.105294","url":null,"abstract":"<div><div>Since the proposal of transcranial magnetic stimulation (TMS), it has found extensive applications in the treatment of brain disorders and other fields. Robot-assisted TMS therapy ensures the accuracy and durability of treatment positioning. However, the use of robot-assisted therapy has also raised potential collisions and path-planning challenges. Due to the redundant degree of freedom (DOF), the robot has infinite configurations while performing each given stimulation. The problem that involves visiting multiple target points and returning to the initial position in TMS therapy is a combination of a generalized traveling salesman problem with neighborhoods (GTSPN) and a collision-free path planning problem. A global method based on the probabilistic roadmap (PRM) and the A* algorithm is proposed. To safely visit target points, the proposed method samples uniformly on a sphere. A special nearest neighbor definition and delayed collision queries for edges are introduced to accelerate the roadmap construction. Then, the proposed method utilizes a modified A* algorithm with a multi-goal heuristic to rapidly search for a global path through all target points. Finally, a strategy combining local re-search and global re-search is proposed to get the final collision-free path. Experiments are conducted on both simulation and physical platforms using a typical model of the TMS therapy system. The results indicate that our proposed algorithm can effectively avoid collisions and produce optimal planning results for TMS therapy in a short time.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"197 ","pages":"Article 105294"},"PeriodicalIF":5.2,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-11 | DOI: 10.1016/j.robot.2025.105300
Mohammad Jabari, Carmen Visconte, Giuseppe Quaglia, Med Amine Laribi
Tendon-driven continuum robots (TDCRs) face a critical trade-off between energy efficiency and static performance when navigating constrained environments, a challenge in medical and industrial applications. This study proposes a bi-objective optimization framework to enhance tendon placement and dimensional synthesis in a two-segment TDCR featuring seven disks and four tendons per segment. Leveraging a kineto-static model based on piecewise constant curvature (PCC) theory and a multi-objective genetic algorithm (MOGA), the radial tendon distances and angular offsets are optimized. The resulting solutions achieve up to a 30 % reduction in mechanical work and a 3–5 % workspace expansion, validated through 100 randomized tendon-force samples. The results offer practical guidelines for improving TDCR performance in both minimally invasive surgery and industrial inspection.
{"title":"Multi-objective optimization for dimensional synthesis of tendon placement and structural design for energy-efficient and feasible static workspace in continuum robots","authors":"Mohammad Jabari , Carmen Visconte , Giuseppe Quaglia , Med Amine Laribi","doi":"10.1016/j.robot.2025.105300","DOIUrl":"10.1016/j.robot.2025.105300","url":null,"abstract":"<div><div>Tendon-driven continuum robots (TDCRs) face a critical trade-off between energy efficiency and static performance for navigating constrained environments, a challenge in medical and industrial applications. This study proposes a bi-objective optimization framework to enhance tendon placement and dimensional synthesis in a two-segment TDCR, featuring seven disks and four tendons per segment. Leveraging a kineto-static model based on piecewise constant curvature (PCC) theory and a multi-objective genetic algorithm (MOGA), radial tendon distances and angular offsets have been optimized. These solutions achieve up to 30 % reduction in mechanical work and a 3–5 % workspace expansion, validated through 100 randomized tendon force samples. The results offer practical guidelines for improving TDCR performance in both minimally invasive surgery and industrial inspection.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"197 ","pages":"Article 105300"},"PeriodicalIF":5.2,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-11 | DOI: 10.1016/j.robot.2025.105296
Jianjun Ni, Yonghao Zhao, Ziru Zhang, Chunyan Ke, Simon X. Yang
The multi-robot cooperative hunting problem has gained widespread attention due to its potential applications in fields such as surveillance, search and rescue, and military operations. Many strong solutions have been proposed in this field in recent years, and this paper provides a comprehensive review of the latest advancements and challenges in multi-robot cooperative hunting. The survey gives an overview of the key elements in this field, including robot types, team compositions, system architectures, communication mechanisms, and cooperative modes. In addition, the key issues in multi-robot cooperative hunting are analyzed, including perception and environmental understanding, task allocation, formation control, path planning, trajectory tracking, and decision-making. Representative theories and methods for these issues are then summarized. Finally, potential solutions are discussed and future research directions in this field are presented.
{"title":"A survey on theories and applications for multi-robot cooperative hunting","authors":"Jianjun Ni , Yonghao Zhao , Ziru Zhang , Chunyan Ke , Simon X. Yang","doi":"10.1016/j.robot.2025.105296","DOIUrl":"10.1016/j.robot.2025.105296","url":null,"abstract":"<div><div>Multi-robot cooperative hunting problem has gained widespread attention due to its potential applications in fields such as surveillance, search and rescue, and military. Recently, many excellent solutions have been proposed in this field. Thus, this paper provides a comprehensive review of the latest advancements and challenges in the field of multi-robot cooperative hunting. This survey provides an overview of the key elements in this field, such as robot types, team compositions, system architectures, communication mechanisms, cooperative modes, and so on. In addition, the key issues in multi-robot cooperative hunting are analyzed, including perception and environmental understanding, task allocation, formation control, path planning, trajectory tracking, and decision-making. Then, some representative theories and methods for these issues are summarized. Finally, some potential solutions are discussed and future research directions in this field are presented.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"197 ","pages":"Article 105296"},"PeriodicalIF":5.2,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145737593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-10 | DOI: 10.1016/j.robot.2025.105297
Socratis Gkelios, Savvas D. Apostolidis, Pavlos Ch. Kapoutsis, Elias B. Kosmatopoulos, Athanasios Ch. Kapoutsis
Unmanned Aerial Vehicles (UAVs) have revolutionized inspection tasks by offering a safer, more efficient, and flexible alternative to traditional methods. However, battery limitations often constrain their effectiveness, necessitating optimized flight paths and data collection techniques. While existing approaches like coverage path planning (CPP) ensure comprehensive data collection, they can be inefficient, especially when inspecting multiple non-connected Regions of Interest (ROIs). This paper introduces the Fast Inspection of Scattered Regions (FISR) problem and proposes a novel solution, the multi-UAV Disjoint Areas Inspection (mUDAI) method. The introduced approach implements a two-fold optimization procedure for calculating the best image-capturing positions and the most efficient UAV trajectories, balancing data resolution and operational time while minimizing redundant data collection and resource consumption. The mUDAI method is designed to enable rapid, efficient inspections of scattered ROIs, making it ideal for applications such as security infrastructure assessments, agricultural inspections, and emergency site evaluations. A combination of simulated evaluations and real-world deployments is used to validate and quantify the method’s ability to improve operational efficiency while preserving high-quality data capture, demonstrating its effectiveness in real-world operations. An open-source Python implementation of the mUDAI method is available on GitHub, and the collected and processed data from the real-world experiments are hosted on Zenodo. Finally, an online platform allows interested readers to interact with the mUDAI method and generate their own multi-UAV FISR missions.
{"title":"Beyond coverage path planning: Can UAV swarms perfect scattered regions inspections?","authors":"Socratis Gkelios , Savvas D. Apostolidis , Pavlos Ch. Kapoutsis , Elias B. Kosmatopoulos , Athanasios Ch. Kapoutsis","doi":"10.1016/j.robot.2025.105297","DOIUrl":"10.1016/j.robot.2025.105297","url":null,"abstract":"<div><div>Unmanned Aerial Vehicles (UAVs) have revolutionized inspection tasks by offering a safer, more efficient, and flexible alternative to traditional methods. However, battery limitations often constrain their effectiveness, necessitating the development of optimized flight paths and data collection techniques. While existing approaches like coverage path planning (CPP) ensure comprehensive data collection, they can be inefficient, especially when inspecting multiple non-connected Regions of Interest (ROIs). This paper introduces the Fast Inspection of Scattered Regions (FISR) problem and proposes a novel solution, the multi-UAV Disjoint Areas Inspection (mUDAI) method. The introduced approach implements a two-fold optimization procedure, for calculating the best image capturing positions and the most efficient UAV trajectories, balancing data resolution and operational time, minimizing redundant data collection and resource consumption. The mUDAI method is designed to enable rapid, efficient inspections of scattered ROIs, making it ideal for applications such as security infrastructure assessments, agricultural inspections, and emergency site evaluations. A combination of simulated evaluations and real-world deployments is used to validate and quantify the method’s ability to improve operational efficiency while preserving high-quality data capture, demonstrating its effectiveness in real-world operations. An open-source Python implementation of the mUDAI method can be found on GitHub<span><span><sup>1</sup></span></span> and the collected and processed data from the real-world experiments are all hosted on Zenodo<span><span><sup>2</sup></span></span>. Finally, this on-line platform<span><span><sup>3</sup></span></span> allows the interested readers to interact with the mUDAI method and generate their own multi-UAV FISR missions.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"197 ","pages":"Article 105297"},"PeriodicalIF":5.2,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-09 | DOI: 10.1016/j.robot.2025.105289
Dianfan Zhang, Lei Zhou, Yu Rong, Xiaodi Wang, Yingfeng Wang, Junbo Wang
The rapid advancement of intelligent driving technology has enabled functions such as autonomous parking, adaptive cruise control, and lane-keeping, all of which rely on precise vehicle localization. However, traditional satellite-based positioning systems fail to provide accurate localization in complex environments such as tunnels and urban canyons. To overcome these limitations, Simultaneous Localization and Mapping (SLAM) combined with sensor technologies has been widely explored for high-precision localization. Yet conventional SLAM methods in outdoor environments are often susceptible to interference from dynamic objects and sensor noise, leading to degraded localization accuracy. To address these challenges, this study proposes an improved localization framework that integrates YOLOv8 with ORB-SLAM2, utilizing detected road signs as robust feature points for enhanced vehicle localization. To mitigate false positives and missed detections in YOLOv8, we optimize the dataset, enhance the feature fusion network, and refine the loss function to improve object detection accuracy. Furthermore, an edge-feature-based point-matching algorithm is introduced to reduce feature point mismatches and improve localization precision. Experimental results on the ApolloScape dataset and in real-world scenarios demonstrate that the proposed approach significantly enhances localization accuracy in dynamic environments with abundant road signs, outperforming conventional SLAM methods.
{"title":"Research on VSLAM algorithm based on landmark assistance","authors":"Dianfan Zhang , Lei Zhou , Yu Rong , Xiaodi Wang , Yingfeng Wang , Junbo Wang","doi":"10.1016/j.robot.2025.105289","DOIUrl":"10.1016/j.robot.2025.105289","url":null,"abstract":"<div><div>The rapid advancement of intelligent driving technology has enabled functions such as autonomous parking, adaptive cruise control, and lane-keeping, which rely on precise vehicle localization. However, traditional satellite-based positioning systems fail to provide accurate localization in complex environments such as tunnels and urban canyons. To overcome these limitations, Simultaneous Localization and Mapping (SLAM) combined with sensor technologies has been widely explored for high-precision localization. However, conventional SLAM methods in outdoor environments are often susceptible to interference from dynamic objects and sensor noise, leading to degraded localization accuracy. To address these challenges, this study proposes an improved localization framework that integrates YOLOv8 with ORB-SLAM2, utilizing detected road signs as robust feature points for enhanced vehicle localization. To mitigate false positives and missed detections in YOLOv8, we optimize the dataset, enhance the feature fusion network, and refine the loss function to improve object detection accuracy. Furthermore, an edge-feature-based point-matching algorithm is introduced to reduce feature point mismatches and improve localization precision. Experimental results on the Apollo Scape dataset and real-world scenarios demonstrate that the proposed approach significantly enhances localization accuracy in dynamic environments with abundant road signs, outperforming conventional SLAM methods.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"198 ","pages":"Article 105289"},"PeriodicalIF":5.2,"publicationDate":"2025-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-08 | DOI: 10.1016/j.robot.2025.105287
Sofia Hustiu, Joaquín Ezpeleta, Cristian Mahulea, Marius Kloetzer
This paper focuses on designing motion plans for a heterogeneous team of robots that must cooperate to fulfill a global mission. Robots move in an environment that contains regions of interest, while the specification for the entire team can include avoidance, visits, or sequencing of these regions. The mission is expressed in terms of a Petri net corresponding to an automaton, while each robot is also modeled by a state-machine Petri net. The current work makes the following contributions with respect to existing solutions for related problems. First, we propose a novel model, denoted the High-Level robot team Petri Net (HLrtPN) system, to incorporate the specification and robot models into the Nets-within-Nets paradigm. A guard function, named the Global Enabling Function, is designed to synchronize the firing of transitions so that robot motions do not violate the specification. Then, the solution is found by simulating the HLrtPN system in a specific software tool that accommodates Nets-within-Nets. Illustrative examples based on Linear Temporal Logic missions support the computational feasibility of the proposed framework.
{"title":"Multi-robot motion planning based on Nets-within-Nets modeling and simulation","authors":"Sofia Hustiu , Joaquín Ezpeleta , Cristian Mahulea , Marius Kloetzer","doi":"10.1016/j.robot.2025.105287","DOIUrl":"10.1016/j.robot.2025.105287","url":null,"abstract":"<div><div>This paper focuses on designing motion plans for a heterogeneous team of robots that must cooperate to fulfill a global mission. Robots move in an environment that contains some regions of interest, while the specification for the entire team can include avoidance, visits, or sequencing of these regions of interest. The mission is expressed in terms of a Petri net corresponding to an automaton, while each robot is also modeled by a state machine Petri net. The current work brings about the following contributions with respect to existing solutions for related problems. First, we propose a novel model, denoted <em>High-Level robot team Petri Net</em> (HLrtPN) system, to incorporate the specification and robot models into the Nets-within-Nets paradigm. A guard function, named <em>Global Enabling Function</em>, is designed to synchronize the firing of transitions so that robot motions do not violate the specification. Then, the solution is found by simulating the HLrtPN system in a specific software tool that accommodates Nets-within-Nets. Illustrative examples based on Linear Temporal Logic missions support the computational feasibility of the proposed framework.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"197 ","pages":"Article 105287"},"PeriodicalIF":5.2,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145737594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-08 | DOI: 10.1016/j.robot.2025.105288
Baptiste Toussaint, Maxime Raison
Table tennis with collaborative robots has been an open problem in robotics for decades due to its unique challenges, especially high-speed movements and real-time ball-trajectory prediction for responsive and accurate gameplay. Over the years, several table tennis robots have been developed, showing progressively enhanced abilities for returning balls, hitting specific targets, rallying with collaborative human users, and playing amateur-level games. However, these robotic systems remain costly for individuals, often relying on industrial components or specialized designs. Emerging AI-integrated personal desktop robotic arms could help bridge the performance gap between affordable personal robotic systems and traditional industrial robots, particularly in terms of dexterity, speed, and precision. Despite this potential, desktop robotic arms have not yet been used for table tennis, and existing table tennis algorithms require specific adaptations to accommodate the constraints of desktop robots. This paper develops a dedicated algorithm for a collaborative table tennis system using a desktop robotic arm to demonstrate the achievable performance of AI-integrated desktop robots. The proposed system uses a 5-degree-of-freedom (DOF) serial robot, integrating advanced algorithms and machine learning models to improve performance. The system enables short collaborative rallies, returning 71.3% of balls overall and 81.4% after fine-tuning system parameters, approaching the best result from the literature (88%) obtained with a 7-DOF industrial robotic arm. This underscores the potential of affordable, AI-integrated desktop robotic arms for high-speed human–robot collaboration. Future work will focus on adapting the algorithm to specialized desktop hardware, expanding desktop robots to other applications, and further enhancing their performance.
{"title":"Real-time algorithm for table tennis with a desktop robotic arm","authors":"Baptiste Toussaint, Maxime Raison","doi":"10.1016/j.robot.2025.105288","DOIUrl":"10.1016/j.robot.2025.105288","url":null,"abstract":"<div><div>Table tennis with collaborative robots has been a challenge in robotics for decades, due to its unique challenges, especially high-speed movements and real-time ball trajectory predictions for responsive and accurate gameplay. Over the years, several table tennis robots have been developed, showing progressively enhanced abilities for returning balls, hitting specific targets, rallying with collaborative human users, and playing amateur-level games. However, these robotic systems remain costly for individuals, often relying on industrial components, or specialized designs. Emerging AI-integrated personal desktop robotic arms could help bridge the performance gap between affordable personal robotic systems and traditional industrial robots, particularly in terms of dexterity, speed, and precision. Despite this potential, desktop robotic arms have not yet been used for table tennis. However, existing table tennis algorithms require specific adaptations to accommodate the constraints of desktop robots. This paper aims to develop a dedicated algorithm for a collaborative table tennis system using a desktop robotic arm to demonstrate the achievable performance of AI-integrated desktop robots. The proposed system utilizes a 5-degree-of-freedom (DOF) serial robot, integrating advanced algorithms and machine learning models to improve performance. This system enables short collaborative rallies, returning 71.3% of balls overall, improving to 81.4% after fine-tuning system parameters — approaching the best one from the literature (88%) using a 7-DOF industrial robotic arm. This underscores the potential of affordable, AI-integrated desktop robotic arms for high-speed human–robot collaboration. Future works will focus on adapting the algorithm for specialized desktop hardware, expanding desktop robots to other applications, and further enhancing their performance.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"197 ","pages":"Article 105288"},"PeriodicalIF":5.2,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}