A Visual Odometry Artificial Intelligence-Based Method for Trajectory Learning and Tracking Applied to Mobile Robots
Bernardo Manuel Pirozzo, Mariano De Paula, Sebastián Aldo Villar, Carola de Benito, Gerardo Gabriel Acosta, Rodrigo Picos
IET Cybersystems and Robotics, vol. 7, no. 1 (2025). DOI: 10.1049/csy2.70028
Autonomous systems have demonstrated high performance in several applications. Among the most important are localisation systems, which are necessary for the safe navigation of autonomous cars and mobile robots. However, despite significant advances in this field, there are still open areas for research and improvement. Two of the most important challenges are the precise traversal of a bounded route and emergencies arising from the breakdown or failure of one or more sensors, which can lead to malfunction or failure of the localisation system. Addressing these issues requires auxiliary assistance systems that provide localisation for a safe return to the starting point, for completing the trajectory, or for an emergency stop in a designated area. Motivated by the application of artificial intelligence to pose estimation in navigation systems, this article introduces a monocular visual odometry method that, through teach and repeat, learns and autonomously replicates trajectories. Our proposal can serve as either a primary localisation system or an auxiliary assistance system. In the first case, our approach is applicable in scenarios where the route to be traversed remains unchanged. In the second case, the goal is a safe return to the starting point or reaching the end point of the trajectory. To validate our proposal, we first used a publicly available dataset to demonstrate its learning capability and robustness under different visibility conditions. We then compared our approach with other well-known methods on standard performance metrics. Finally, we evaluated real-time trajectory replication on a ground robot, both simulated and real, across multiple trajectories of increasing complexity.
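The teach-and-repeat scheme outlined in this abstract can be illustrated with a minimal sketch: a teach pass stores a descriptor of each camera view together with the command executed there, and a repeat pass recalls the command of the closest stored view. This is only an illustration of the idea, not the authors' AI-based pose estimator; the histogram descriptor, the `TeachAndRepeat` class, and the recorded-steering recall are all illustrative assumptions.

```python
# Minimal teach-and-repeat sketch (illustrative only, not the paper's method).
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Placeholder descriptor: mean intensity per image column."""
    return image.mean(axis=0)

class TeachAndRepeat:
    def __init__(self):
        self.keyframes = []  # list of (descriptor, recorded_steering) pairs

    def teach(self, image: np.ndarray, steering: float) -> None:
        """Teach pass: store a descriptor of the view and the command used there."""
        self.keyframes.append((extract_features(image), steering))

    def repeat(self, image: np.ndarray) -> float:
        """Repeat pass: recall the command of the closest stored view."""
        desc = extract_features(image)
        dists = [np.linalg.norm(desc - kf) for kf, _ in self.keyframes]
        return self.keyframes[int(np.argmin(dists))][1]

# Usage: record a short traversal, then replay from a new view.
tr = TeachAndRepeat()
rng = np.random.default_rng(0)
for step in range(5):
    tr.teach(rng.random((48, 64)), steering=0.1 * step)
print(tr.repeat(rng.random((48, 64))))  # steering recalled from nearest keyframe
```

A learned system would replace the placeholder descriptor with a trained feature extractor and regress a pose correction rather than replaying a stored command, but the store-then-recall structure is the same.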
{"title":"A Visual Odometry Artificial Intelligence-Based Method for Trajectory Learning and Tracking Applied to Mobile Robots","authors":"Bernardo Manuel Pirozzo, Mariano De Paula, Sebastián Aldo Villar, Carola de Benito, Gerardo Gabriel Acosta, Rodrigo Picos","doi":"10.1049/csy2.70028","DOIUrl":"10.1049/csy2.70028","url":null,"abstract":"<p>Autonomous systems have demonstrated high performance in several applications. One of the most important is localisation systems, which are necessary for the safe navigation of autonomous cars or mobile robots. However, despite significant advances in this field, there are still areas open for research and improvement. Two of the most important challenges include the precise traversal of a bounded route and emergencies arising from the breakdown or failure of one or more sensors, which can lead to malfunction or system localisation failure. To address these issues, auxiliary assistance systems are necessary, enabling localisation for a safe return to the starting point, completing the trajectory, or facilitating an emergency stop in a designated area for such situations. Motivated by the exploration of applying artificial intelligence to pose estimation in a navigation system, this article introduces a monocular visual odometry method that, through teach and repeat, learns and autonomously replicates trajectories. Our proposal can serve as either a primary localisation system or an auxiliary assistance system. In the first case, our approach is applicable in scenarios where the traversing route remains unchanged. In the second case, the goal is to achieve a safe return to the starting point or to reach the end point of the trajectory. We initially utilised a publicly available dataset to showcase the learning capability and robustness under different visibility conditions to validate our proposal. Subsequently, we compared our approach with other well-known methods to assess performance metrics. Finally, we evaluated real-time trajectory replication on a ground robot, both simulated and real, across multiple trajectories of increasing complexity.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"7 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70028","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145038123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clustering Path Optimisation-Based 2-Opt Rapid Wax-Drawing Trajectory Planning for Industrial 3D Wax-Drawing Robots
Qiyuan Fu, Ping Liu, Qinglang Xie, Shidong Zhai, Mingjie Liu
IET Cybersystems and Robotics, vol. 7, no. 1 (2025). DOI: 10.1049/csy2.70025
The tool path trajectory serves as a cornerstone of three-dimensional (3D) printing robot technology, and path optimisation algorithms are instrumental in enabling faster, more precise and higher-quality prints. This work proposes a clustering path optimisation-based 2-opt rapid wax-drawing trajectory planning method for 3D drawing robots. Firstly, the input wax-drawing image is preprocessed to extract contour information, which is then simplified into polygons. Next, spiral and filling trajectory algorithms convert the polygons into corresponding spiral and filling paths, which are modelled as nodes of a travelling salesman problem (TSP). An improved k-means++ clustering algorithm is then designed to adaptively divide the nodes into multiple clusters, and each cluster is planned with an improved ant colony optimisation (ACO) algorithm to find its shortest path. Afterwards, the nearest-neighbour algorithm connects the shortest paths of the clusters into an initial tool path. Finally, the 2-opt optimisation algorithm refines this preliminary path, yielding the optimal motion trajectory for the wax-drawing tool. Verification tests show that the proposed method reduces path length by 30.75% on average compared with the parallel scanning method, traditional ant colony optimisation, and the Christofides-with-2-opt algorithm. Meanwhile, 3D robot wax-drawing experiments demonstrate a 17.9% reduction in drawing time, significantly improving the efficiency of large-scale production and highlighting the practical value of 3D drawing robots.
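The ordering stages of this pipeline (cluster the TSP nodes, build a tour per cluster, refine each tour with 2-opt) can be sketched as below. This is a simplified stand-in, assuming plain k-means and a nearest-neighbour initial tour in place of the paper's improved k-means++ and improved ACO; all function names are illustrative.

```python
# Sketch of the clustered-TSP ordering stages (simplified stand-ins throughout).
import numpy as np

def kmeans(points, k, iters=30, seed=0):
    """Plain k-means (the paper uses an improved k-means++)."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((points[:, None] - centres) ** 2).sum(axis=2), axis=1)
        centres = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return labels

def nearest_neighbour_tour(pts):
    """Greedy initial tour: repeatedly visit the closest unvisited node."""
    tour, remaining = [0], set(range(1, len(pts)))
    while remaining:
        last = tour[-1]
        nxt = min(remaining, key=lambda j: np.linalg.norm(pts[last] - pts[j]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

def two_opt(pts, tour):
    """2-opt refinement: reverse any segment whose reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 2):
            for j in range(i + 1, len(tour) - 1):
                a, b = pts[tour[i - 1]], pts[tour[i]]
                c, d = pts[tour[j]], pts[tour[j + 1]]
                if (np.linalg.norm(a - c) + np.linalg.norm(b - d)
                        < np.linalg.norm(a - b) + np.linalg.norm(c - d)):
                    tour[i:j + 1] = tour[i:j + 1][::-1]
                    improved = True
    return tour

# Toy usage: 40 random path nodes, 4 clusters, each tour refined with 2-opt.
pts = np.random.default_rng(1).random((40, 2))
labels = kmeans(pts, 4)
for j in range(4):
    idx = np.flatnonzero(labels == j)
    order = two_opt(pts[idx], nearest_neighbour_tour(pts[idx]))
    print(f"cluster {j}: visit nodes {idx[order]}")
```

In the full method the per-cluster tours would come from ACO rather than the greedy heuristic, and a final nearest-neighbour pass stitches the cluster tours together before the closing 2-opt refinement.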
{"title":"Clustering Path Optimisation-Based 2-Opt Rapid Wax-Drawing Trajectory Planning for Industrial 3D Wax-Drawing Robots","authors":"Qiyuan Fu, Ping Liu, Qinglang Xie, Shidong Zhai, Mingjie Liu","doi":"10.1049/csy2.70025","DOIUrl":"10.1049/csy2.70025","url":null,"abstract":"<p>The tool path trajectory serves as a cornerstone of three-dimensional (3D) printing robot technology, and path optimisation algorithms are instrumental in enabling faster, more precise and higher-quality prints. This work proposes a clustering path optimisation-based 2-opt rapid wax-drawing trajectory planning method for 3D drawing robots. Firstly, the input wax-drawing image is preprocessed to extract contour information, which is then simplified into polygons. Next, the spiral and filling trajectory algorithms are used to convert the polygons into corresponding spiral and filling paths, which are modelled as nodes in the travelling salesman problem (TSP). An improved k-means++ clustering algorithm is then designed to adaptively divide the nodes into multiple clusters. Each cluster is subsequently planned using the improved ant colony optimisation (ACO) algorithm to find the shortest path. Afterwards, the nearest-neighbour algorithm is employed to connect the shortest paths of each cluster, forming an initial tool path. Finally, the 2-opt optimisation algorithm is incorporated to optimise the preliminary path, resulting in the optimal motion trajectory for the wax-drawing tool. The verification tests show that the proposed method achieves an average reduction in path length of 30.75% compared with the parallel scanning method, traditional ant colony optimisation, Christofides with 2-opt algorithm. Meanwhile, the 3D robot wax-drawing experiments demonstrate a 17.9% reduction in drawing time, significantly improving the efficiency of large-scale production and highlighting the practical value of 3D drawing robots.</p>","PeriodicalId":34110,"journal":{"name":"IET Cybersystems and Robotics","volume":"7 1","pages":""},"PeriodicalIF":1.2,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.70025","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145037520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bird's Eye View (BEV) perception has become a widely adopted approach in 3D object detection due to its spatial and dimensional consistency. However, the increasing complexity of neural network architectures has driven up training memory consumption, limiting the scalability of model training. To address these challenges, we propose a novel model, RevFB-BEV, based on the Reversible Swin Transformer (RevSwin) with Forward-Backward View Transformation (FBVT) and LiDAR Guided Back Projection (LGBP). The RevSwin backbone employs a reversible architecture that minimises training memory by recomputing intermediate activations rather than storing them. The FBVT module refines BEV features extracted from forward projection, yielding denser and more precise camera BEV representations, and the LGBP module further uses LiDAR BEV guidance for back projection to obtain more accurate camera BEV features. Extensive experiments on the nuScenes dataset demonstrate notable performance improvements, with our model achieving over a
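The memory-saving mechanism attributed to the reversible backbone can be shown in a few lines: in a RevNet-style coupling, block inputs are reconstructed exactly from block outputs, so intermediate activations need not be cached for the backward pass. Below is a minimal numpy sketch under that assumption; `F` and `G` stand in for the real attention/MLP sub-blocks, and the coupling equations are the generic reversible ones, not necessarily the paper's exact formulation.

```python
# Minimal sketch of reversible (RevNet-style) coupling: inputs are exactly
# recoverable from outputs, so activations need not be stored during training.
import numpy as np

rng = np.random.default_rng(0)
W_f = rng.standard_normal((8, 8))
W_g = rng.standard_normal((8, 8))
F = lambda x: np.tanh(x @ W_f)  # stand-in for one sub-block
G = lambda x: np.tanh(x @ W_g)  # stand-in for the other sub-block

def rev_forward(x1, x2):
    """Forward coupling: y1 = x1 + F(x2), y2 = x2 + G(y1)."""
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def rev_inverse(y1, y2):
    """Recompute the inputs from the outputs; nothing had to be cached."""
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1, x2 = rng.standard_normal((4, 8)), rng.standard_normal((4, 8))
y1, y2 = rev_forward(x1, x2)
r1, r2 = rev_inverse(y1, y2)
print(np.allclose(x1, r1) and np.allclose(x2, r2))  # True: exact reconstruction
```

Because the inverse is exact, a training framework can discard each block's activations after the forward pass and regenerate them on demand during backpropagation, trading a modest amount of recomputation for a large reduction in peak memory.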