Pub Date: 2024-11-09 | DOI: 10.1016/j.robot.2024.104855
Giovanni Boschetti, Riccardo Minto
Cable-driven parallel robots (CDPRs) are a particular class of parallel robots whose advantages make them attractive for industrial applications. However, the risk of damage due to cable failure is not negligible, so procedures that move the end-effector to a safe pose after a failure are required. This work provides a sensorless failure detection and identification strategy that recognizes a cable failure event without additional devices. The approach is paired with an end-effector recovery strategy that moves the end-effector towards a safe position, yielding a complete cable failure recovery strategy that detects the failure event and controls the end-effector accordingly. The proposed strategy is tested on a suspended cable-driven parallel robot prototype built from industrial-grade components. The experimental results show the feasibility of the proposed approach.
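The abstract does not spell out the detection rule, but a sensorless scheme of this kind typically infers cable tension from quantities the drives already measure. The sketch below is a hypothetical illustration, not the paper's algorithm: it flags a cable whose tension, inferred from motor current, stays below a floor for several consecutive samples. All names, constants, and the tension model are assumptions.

```python
# Hypothetical sensorless cable-failure detector: a sustained drop in the
# tension inferred from motor current flags a broken cable. The constants
# (torque_const, drum_radius, tension_floor) are illustrative, not the paper's.

def detect_failed_cable(currents, torque_const=0.1, drum_radius=0.05,
                        tension_floor=5.0, window=3):
    """currents: per-cable lists of motor current samples (A).
    Returns the index of the first cable whose inferred tension stays below
    tension_floor for `window` consecutive samples, or None."""
    for idx, samples in enumerate(currents):
        below = 0
        for i_motor in samples:
            # tau = k_t * i, and tension F = tau / drum radius
            tension = torque_const * i_motor / drum_radius
            below = below + 1 if tension < tension_floor else 0
            if below >= window:
                return idx
    return None
```

A real implementation would have to account for tension variation during motion, for instance by comparing against model-predicted tensions rather than a fixed floor.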
Title: A sensorless approach for cable failure detection and identification in cable-driven parallel robots (Robotics and Autonomous Systems, vol. 183, Article 104855)
Pub Date: 2024-11-08 | DOI: 10.1016/j.robot.2024.104854
Alessandro Navone, Mauro Martini, Marco Ambrosio, Andrea Ostuni, Simone Angarano, Marcello Chiaberge
Segmentation-based autonomous navigation has recently been presented as an appealing approach for guiding robotic platforms through crop rows without requiring precise GPS localization. Nevertheless, current techniques are restricted to situations where a distinct separation between the plants and the sky allows the row's center to be identified, even though tall, dense vegetation, such as high tree rows and orchards, is the primary cause of GPS signal blockage. In this study, we increase the overall robustness and adaptability of the control algorithm by extending segmentation-based robotic guidance to cases where canopies and branches occlude the sky and prevent the use of GPS and earlier approaches. An efficient deep neural network architecture addresses semantic segmentation and is trained with synthetic data only. The solution was extensively tested in numerous vineyards and tree fields, both in simulation and in the real world, to show its competitive benefits. The system achieved previously unattained results in orchards, with a mean average error smaller than 9% of the maximum width of each row, improving on state-of-the-art algorithms by opening up new scenarios such as close-canopy crops. The official code can be found at: https://github.com/PIC4SeR/SegMinNavigation.git.
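The row-following idea can be illustrated with a toy version of segmentation minimization: steer toward the image column containing the least segmented vegetation. This is a simplified sketch under assumed conventions (binary mask, proportional steering with a hypothetical gain), not the paper's controller.

```python
import numpy as np

def row_center_column(mask):
    """mask: HxW binary array, 1 = vegetation pixel. The column whose sum of
    vegetation pixels is minimal approximates the free corridor between rows
    (a toy version of the segmentation-minimization idea)."""
    col_load = mask.sum(axis=0)
    return int(np.argmin(col_load))

def steering_command(mask, gain=0.01):
    """Proportional steering toward the lowest-vegetation column.
    The gain is a hypothetical tuning parameter."""
    w = mask.shape[1]
    err = row_center_column(mask) - (w - 1) / 2.0
    return gain * err
```

With the corridor centered in the image, the error and hence the steering command are zero; vegetation bias to one side produces a corrective command toward the gap.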
Title: GPS-free autonomous navigation in cluttered tree rows with deep semantic segmentation (Robotics and Autonomous Systems, vol. 183, Article 104854)
Pub Date: 2024-10-31 | DOI: 10.1016/j.robot.2024.104842
Mario Ramírez-Neria, Rafal Madonski, Eduardo Gamaliel Hernández-Martínez, Norma Lozada-Castillo, Guillermo Fernández-Anaya, Alberto Luviano-Juárez
This article presents a linear active disturbance rejection control (ADRC) scheme for robust trajectory tracking of an omnidirectional robot. A saturation-input strategy is included in the extended state observer design to improve the transient closed-loop response and mitigate the peaking phenomenon that may otherwise arise. The controller is implemented on the kinematic model of the robotic system: the only available information is the position and orientation measurement, and the only known structural properties are the order of the system and the control gain matrix. A wide set of laboratory experiments, including comparisons with a standard ADRC (i.e., without the proposed anti-peaking mechanism) and with a PI-based controller that includes an anti-peaking mechanism, is carried out in the presence of smooth and abrupt terrain disturbances. The results validate the practical advantages of the proposal for robust trajectory tracking of this kind of robot.
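The anti-peaking idea, feeding the extended state observer the saturated input, can be sketched on a first-order toy plant. The gains, saturation limit, and plant are illustrative assumptions; the paper's design for the omnidirectional kinematics is more involved.

```python
def sat(u, limit):
    """Symmetric input saturation."""
    return max(-limit, min(limit, u))

def run_eso(d=2.0, dt=0.001, steps=5000, l1=200.0, l2=10000.0, u_limit=1.0):
    """First-order toy plant y' = u + d with unknown constant disturbance d.
    The linear ESO estimates (y, d) as (z1, z2) and, per the anti-peaking
    idea, is driven by the *saturated* input. Gains place the observer
    poles at -100 (s^2 + l1*s + l2 = (s + 100)^2); all values illustrative."""
    y, z1, z2 = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = 0.0                      # open loop here; full ADRC would set u from z1, z2
        y += dt * (u + d)            # plant, explicit Euler step
        e = y - z1                   # observation error
        z1 += dt * (z2 + sat(u, u_limit) + l1 * e)
        z2 += dt * l2 * e
    return z2                        # total-disturbance estimate, approaches d
```

After five seconds of simulated time the disturbance estimate has converged to the true value, which is what lets the outer control loop cancel the disturbance.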
Title: Robust trajectory tracking for omnidirectional robots by means of anti-peaking linear active disturbance rejection (Robotics and Autonomous Systems, vol. 183, Article 104842)
Pub Date: 2024-10-30 | DOI: 10.1016/j.robot.2024.104844
Fenglei Zheng, Aijun Yin, Chuande Zhou
Object detection is a key part of intelligent assembly tasks: accurate and fast detection of different targets allows positioning and assembly to be completed more automatically and efficiently. In this paper, a feature-enhancement object detection model based on YOLO is proposed. First, the expressiveness of the feature layers is enhanced through a Recursive Feature Pyramid (RFP) structure. An Atrous Residual Spatial Pyramid Pooling (ARSPP) module is proposed to further enhance the feature layers output by the backbone network; it improves the model's recognition of multi-scale targets by using dilated convolutions of different sizes together with residual connections. Finally, the contiguous pyramid features are fused and enhanced through an attention mechanism, and the results are used as the input of the next recursion or the predictive output. The proposed model effectively improves the detection accuracy of YOLO, with a 3% mAP improvement on the PASCAL VOC dataset. The validity and accuracy of the model are verified in a robot intelligent-assembly recognition task.
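The ARSPP idea, parallel dilated convolutions plus a residual connection, can be sketched in one dimension with NumPy. The kernel, dilation rates, and averaging fusion are illustrative assumptions, not the paper's exact module.

```python
import numpy as np

def dilated_conv1d_same(x, w, d):
    """'Same'-padded 1-D convolution of signal x with kernel w at dilation d.
    The effective receptive field is (len(w) - 1) * d + 1 samples."""
    k = len(w)
    pad = (k - 1) * d // 2
    xp = np.pad(x, pad)
    span = (k - 1) * d + 1
    return np.array([np.dot(xp[i:i + span:d], w) for i in range(len(x))])

def arspp_like(x, w, dilations=(1, 2, 4)):
    """ARSPP-style block (sketch): parallel dilated branches fused by
    averaging, plus a residual connection back to the input."""
    branches = [dilated_conv1d_same(x, w, d) for d in dilations]
    return x + np.mean(branches, axis=0)
```

Increasing the dilation widens the receptive field without adding parameters, which is the multi-scale property the abstract attributes to ARSPP; the residual term keeps the original features flowing through.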
Title: YOLO with feature enhancement and its application in intelligent assembly (Robotics and Autonomous Systems, vol. 183, Article 104844)
Pub Date: 2024-10-28 | DOI: 10.1016/j.robot.2024.104843
Yongqiang Zhu, Junru Zhu, Pingxia Zhang
Due to its long body, large number of wheels, and control complexity, it is difficult for a multi-axle wheeled robot to avoid obstacles autonomously in narrow spaces. To solve this problem, this article presents window-zone division and gap-seeking strategies for local obstacle avoidance of a multi-axle, multi-steering-mode, all-wheel-steering wheeled robot. First, a window-zone division strategy is proposed according to the degree to which lidar points affect the robot, combined with how human drivers avoid obstacles. The lidar points are selected and divided according to the degree of emergency; by eliminating irrelevant points, the obstacle avoidance computation is reduced, which increases the response speed. Based on this, the robot uses a multi-steering mode to avoid obstacles in emergencies. Second, a gap-seeking method for normal obstacle avoidance is proposed: it seeks a passable gap among the surrounding lidar points by predicting the robot's driving trajectory for different steering angles. Third, the on-board control system and the upper-computer program of the robot were designed, followed by a multi-steering-mode algorithm based on the front and rear wheel steering angles and speed, as well as a travel-trajectory forecast-drawing module. Finally, the proposed methods were implemented on a five-axle all-wheel-steering wheeled robot, and obstacle avoidance experiments were carried out with S-shaped, Z-shaped, U-shaped, and random obstacle distributions. The results show that the proposed strategy completes all obstacle avoidance tasks successfully.
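The gap-seeking step can be illustrated with a minimal scan-processing sketch: discard beams closer than a safety distance and steer toward the center of the widest free run. The beam geometry, thresholds, and selection rule are assumptions; the paper additionally predicts a trajectory per steering angle before committing to a gap.

```python
def widest_gap(ranges, angle_min=-1.57, angle_inc=0.01, safe_dist=2.0):
    """ranges: list of lidar ranges, one per beam, sweeping from angle_min in
    steps of angle_inc (radians). Returns the center angle of the widest
    contiguous run of beams whose range exceeds safe_dist, or None if every
    beam is blocked. A much-simplified stand-in for gap seeking."""
    best = (0, None)               # (run length, start index)
    run_start, run_len = None, 0
    for i, r in enumerate(ranges + [0.0]):  # sentinel closes the final run
        if r > safe_dist:
            if run_start is None:
                run_start = i
            run_len += 1
        else:
            if run_start is not None and run_len > best[0]:
                best = (run_len, run_start)
            run_start, run_len = None, 0
    if best[1] is None:
        return None
    center = best[1] + (best[0] - 1) / 2.0
    return angle_min + angle_inc * center
```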
Title: Local obstacle avoidance control for multi-axle and multi-steering-mode wheeled robot based on window-zone division strategy (Robotics and Autonomous Systems, vol. 183, Article 104843)
Pub Date: 2024-10-28 | DOI: 10.1016/j.robot.2024.104840
Jinghui Pan
A fractional-order sliding mode control (FSMC) method for a manipulator based on disturbance and state observers is proposed. First, a state estimator is designed that estimates velocity and acceleration, requiring only joint position feedback and the mathematical model of the manipulator; the estimator converges in finite time. Then, a disturbance observer is designed: by constructing the nominal system model of the manipulator, a disturbance observation error is introduced into the closed-loop control so that the manipulator's performance tracks the nominal system. Finally, a sliding mode controller based on fractional differential operator theory is designed. The sliding mode variable in the controller derivation is composed of the fractional derivative of the manipulator's trajectory tracking error; since the fractional differentiation operation is realized through integration, which acts as a low-pass filter, high-frequency noise is suppressed. In the experimental section, the designed method is compared with conventional sliding mode control, which further demonstrates the speed and control accuracy of FSMC.
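The fractional derivative in the sliding variable is commonly approximated with the Grünwald-Letnikov expansion; a minimal version (a standard textbook discretization, not necessarily the paper's implementation) is:

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_j = (-1)^j * C(alpha, j), computed with
    the recurrence w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

def frac_derivative(samples, alpha, h):
    """Approximate D^alpha of the newest sample from a uniformly sampled
    history (oldest first, step h): h^(-alpha) * sum_j w_j * x[t - j*h]."""
    w = gl_weights(alpha, len(samples))
    return sum(wj * samples[-1 - j] for j, wj in enumerate(w)) / h ** alpha
```

For alpha = 1 the weights collapse to (1, -1, 0, ...) and the formula reduces to the ordinary backward difference, which is a quick sanity check on the recurrence.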
Title: Fractional-order sliding mode control of manipulator combined with disturbance and state observer (Robotics and Autonomous Systems, vol. 183, Article 104840)
Pub Date: 2024-10-25 | DOI: 10.1016/j.robot.2024.104841
Ferhat Sadak
The rapid advancement of untethered microrobots offers exciting opportunities in fields such as targeted drug delivery and minimally invasive surgical procedures. However, several challenges remain, especially in achieving precise localization and classification of microrobots within living organisms using ultrasound (US) imaging. Current US-based detection algorithms often suffer from inaccurate visual feedback, causing positioning errors. This paper presents a novel explainable deep learning model for the localization and classification of eight different types of microrobots using US images. We introduce the Attention-Fused Bottleneck Module (AFBM), which enhances feature extraction and improves the performance of microrobot classification and localization tasks. Our model consistently outperforms baseline models such as YOLOR, YOLOv5-C3HB, YOLOv5-TBH, YOLOv5m, and YOLOv7. The proposed model achieved a mean Average Precision (mAP) of 0.861 in training and 0.909 in testing at an IoU threshold of 0.95, which is 2% and 1.5% higher than the YOLOv5m model, respectively. Multi-threshold IoU analysis at thresholds of 0.6, 0.75, and 0.95 demonstrated that our model's microrobot localization accuracy is superior. A robustness analysis based on high and low frequencies, gain, and speckle in our test data set showed that our model achieves higher overall accuracy. Using Score-CAM in our framework enhances interpretability, allowing transparent insights into the model's decision-making process. Our work signifies a notable advancement in microrobot classification and detection, with potential applications in real-world scenarios using the newly available USMicroMagset dataset for benchmarking.
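The mAP figures above are computed against IoU thresholds; a minimal IoU helper makes those thresholds concrete (the corner-coordinate box format and the threshold list are the only assumptions):

```python
def iou(a, b):
    """a, b: axis-aligned boxes as (x1, y1, x2, y2). Intersection-over-Union,
    the overlap measure behind mAP@IoU thresholds."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def matches_at(a, b, thresholds=(0.6, 0.75, 0.95)):
    """Which of the quoted IoU thresholds this detection-to-ground-truth
    pairing would count as a true positive for."""
    v = iou(a, b)
    return [t for t in thresholds if v >= t]
```

A threshold of 0.95 demands near-pixel-perfect boxes, which is why mAP@0.95 is the most demanding of the three figures reported.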
Title: An explainable deep learning model for automated classification and localization of microrobots by functionality using ultrasound images (Robotics and Autonomous Systems, vol. 183, Article 104841)
Pub Date: 2024-10-23 | DOI: 10.1016/j.robot.2024.104837
Ivan Moskalenko, Anastasiia Kornilova, Gonzalo Ferrer
Aerial imagery and its direct application to visual localization is an essential problem for many robotics and computer vision tasks. While Global Navigation Satellite Systems (GNSS) are the default solution to the aerial localization problem, they are subject to a number of limitations, such as signal instability and solution unreliability, that make this option less desirable. Consequently, visual geolocalization is emerging as a viable alternative. However, adapting the Visual Place Recognition (VPR) task to aerial imagery presents significant challenges, including weather variations and repetitive patterns. Current VPR reviews largely neglect the specific context of aerial data. This paper introduces a methodology tailored for evaluating VPR techniques specifically in the domain of aerial imagery, providing a comprehensive assessment of various methods and their performance. We not only compare various VPR methods but also demonstrate the importance of selecting appropriate zoom and overlap levels when constructing map tiles to achieve maximum efficiency of VPR algorithms on aerial imagery. The code is available on our GitHub repository: https://github.com/prime-slam/aero-vloc.
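VPR evaluation typically reduces to a recall@N computation over descriptor matches. The sketch below shows a generic version; the descriptor metric, tolerance radius, and all names are assumptions, and the survey's exact protocol may differ.

```python
import numpy as np

def recall_at_n(query_desc, map_desc, map_pos, query_pos, n=1, tol=25.0):
    """Fraction of queries whose n nearest map descriptors (L2 distance)
    include at least one map tile within `tol` meters of the query's true
    position. query_desc/map_desc: (Q,D)/(M,D) arrays; *_pos: (Q,2)/(M,2)."""
    hits = 0
    for q, qp in zip(query_desc, query_pos):
        d = np.linalg.norm(map_desc - q, axis=1)   # distance to every tile
        nearest = np.argsort(d)[:n]                # indices of n best matches
        if any(np.linalg.norm(map_pos[i] - qp) <= tol for i in nearest):
            hits += 1
    return hits / len(query_desc)
```

Tile zoom and overlap enter through `map_desc`/`map_pos`: denser, more overlapping tiles give each query a closer candidate at the cost of a larger map index.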
Title: Visual place recognition for aerial imagery: A survey (Robotics and Autonomous Systems, vol. 183, Article 104837)
Pub Date: 2024-10-22 | DOI: 10.1016/j.robot.2024.104839
Mingxuan Ding, Qinyun Tang, Kaixin Liu, Xi Chen, Dake Lu, Changda Tian, Liquan Wang, Yingxuan Li, Gang Wang
The advancement and safeguarding of the water-land interface region is of paramount importance, and amphibious robots capable of autonomous operation can play a pivotal role in this domain. However, the inability of most reliable navigation sensors to adapt to the water-land interface environment presents a significant challenge for amphibious robots, as positional information is crucial for autonomous operation. To address this issue, we propose a positioning and navigation framework, designated NAWR (Navigation Algorithm for Amphibious Wheeled Robots), to enhance the navigation capabilities of amphibious robots. First, a method for representing the odometer's confidence based on a simplified wheel-terrain interaction model is developed; it quantitatively assesses the reliability of each odometer by estimating the slip rate. Second, we introduce an improved split covariance intersection filter (I-SCIF), which maximizes the utilization of navigation information sources to improve the accuracy of position estimation. Finally, we integrate these two methods into the NAWR framework and validate their effectiveness through multiple robot field trials. The results of both field trials and ablation tests collectively demonstrate that the modules and the overall approach within the NAWR framework effectively enhance the navigation capabilities of amphibious robots.
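Two ingredients of the framework have standard closed forms: the longitudinal slip ratio used to judge a wheel odometer's reliability, and the covariance intersection fusion rule. The sketch below shows plain versions of both; the paper's improved split-CI filter refines the second by treating correlated and independent error parts separately, and the names and fixed weight here are assumptions.

```python
import numpy as np

def slip_ratio(wheel_omega, wheel_radius, body_speed):
    """Longitudinal slip: how much the wheel's rim speed exceeds actual
    travel. Near 0 = good traction; near 1 = spinning in place."""
    rim = wheel_omega * wheel_radius
    return (rim - body_speed) / rim if rim else 0.0

def covariance_intersection(x1, P1, x2, P2, w=0.5):
    """Fuse two estimates with unknown cross-correlation:
    P^-1 = w*P1^-1 + (1-w)*P2^-1, x = P (w*P1^-1 x1 + (1-w)*P2^-1 x2).
    A plain CI step with a fixed weight; w is normally optimized."""
    P1i, P2i = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(w * P1i + (1 - w) * P2i)
    x = P @ (w * P1i @ x1 + (1 - w) * P2i @ x2)
    return x, P
```

A confidence scheme like the paper's could, for instance, inflate an odometer's covariance as its estimated slip ratio grows, so that CI automatically leans on the more trustworthy source.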
{"title":"Advancements in amphibious robot navigation through wheeled odometer uncertainty extension and distributed information fusion","authors":"Mingxuan Ding , Qinyun Tang , Kaixin Liu , Xi Chen , Dake Lu , Changda Tian , Liquan Wang , Yingxuan Li , Gang Wang","doi":"10.1016/j.robot.2024.104839","DOIUrl":"10.1016/j.robot.2024.104839","url":null,"abstract":"<div><div>The advancement and safeguarding of the water-land interface region are of paramount importance, and amphibious robots with the capacity for autonomous operation can play a pivotal role in this domain. However, the inability of the majority of reliable navigation sensors to adapt to the water-land interface environment presents a significant challenge for amphibious robots, as obtaining positional information is crucial for autonomous operation. To address this issue, we have proposed a positioning and navigation framework, designated as NAWR (Navigation Algorithm for Amphibious Wheeled Robots), with the objective of enhancing the navigation capabilities of amphibious robots. Firstly, a method for representing the odometer's confidence based on a simplified wheel-terrain interaction model has been developed. This method quantitatively assesses the reliability of each odometer by estimating the slip rate. Secondly, we have introduced an improved split covariance intersection filter (I-SCIF), which maximizes the utilization of navigation information sources to enhance the accuracy of positional estimation. Finally, we integrate these two methods to form the NAWR framework and validate the effectiveness of the proposed methods through multiple robot field trials. The results from both field trials and ablation tests collectively demonstrate that the modules and overall approach within the NAWR framework effectively enhance the navigation capabilities of amphibious robots.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"183 ","pages":"Article 104839"},"PeriodicalIF":4.3,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142586363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
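The two building blocks named in the abstract above — slip-rate-based odometer confidence and covariance-intersection fusion — can be sketched in outline. The snippet below is a minimal Python illustration of a longitudinal slip-ratio estimate and of plain covariance intersection, the baseline on which the authors' improved split variant (I-SCIF) builds; the function names, the grid search over the mixing weight, and all parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def slip_ratio(wheel_omega, wheel_radius, ground_speed):
    """Longitudinal slip ratio for a driven wheel: s = (r*w - v) / (r*w).
    s near 0 means good traction; s near 1 means the wheel spins in place."""
    rim_speed = wheel_radius * wheel_omega
    if abs(rim_speed) < 1e-9:
        return 0.0
    return (rim_speed - ground_speed) / rim_speed

def covariance_intersection(x_a, P_a, x_b, P_b, n_grid=99):
    """Fuse two estimates whose cross-correlation is unknown.
    Searches the mixing weight omega on a grid and keeps the fusion
    with the smallest fused-covariance trace (a common CI criterion)."""
    Pa_inv, Pb_inv = np.linalg.inv(P_a), np.linalg.inv(P_b)
    best = None
    for omega in np.linspace(0.01, 0.99, n_grid):
        # Fused information is a convex combination of the two inputs,
        # which keeps the result consistent for any cross-correlation.
        P = np.linalg.inv(omega * Pa_inv + (1.0 - omega) * Pb_inv)
        x = P @ (omega * Pa_inv @ x_a + (1.0 - omega) * Pb_inv @ x_b)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), x, P)
    return best[1], best[2]
```

In a split variant such as the one the paper describes, each covariance would additionally be decomposed into independent and correlated parts, with only the correlated part scaled by the mixing weight; the same trace-minimization search then runs over that decomposition.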
Pub Date : 2024-10-21DOI: 10.1016/j.robot.2024.104835
Michele Perrelli, Francesco Lago, Salvatore Garofalo, Luigi Bruno, Domenico Mundo, Giuseppe Carbone
This paper conducts a thorough literature review and assessment of prevailing upper-limb rehabilitation devices, scrutinizing their strengths and limitations. The focus of this work is mainly on soft exosuit devices, but some rigid and hybrid exoskeleton devices are also discussed as a means of comparison. Subsequently, this manuscript delineates explicit design guidelines with the intent of fostering a systematic approach toward innovation in the realm of upper-limb rehabilitation technology. Through an examination of current concepts and technological paradigms, this study seeks to contribute nuanced insights aimed at optimizing both efficacy and user experience in rehabilitation device design. The culmination of this critical analysis results in the proposal of a systematic design procedure to inform and influence the trajectory of specific user-tailored innovations within the domain of upper-limb rehabilitation devices. The proposed approach enables the identification of features and weaknesses in existing devices, also facilitating the design of innovative solutions for unsolved issues in the field of wearable robotics. A design example is presented to clarify the proposed design procedure.
{"title":"A critical review and systematic design approach for innovative upper-limb rehabilitation devices","authors":"Michele Perrelli, Francesco Lago, Salvatore Garofalo, Luigi Bruno, Domenico Mundo, Giuseppe Carbone","doi":"10.1016/j.robot.2024.104835","DOIUrl":"10.1016/j.robot.2024.104835","url":null,"abstract":"<div><div>This paper conducts a thorough literature review and assessment of prevailing upper-limb rehabilitation devices, scrutinizing their strengths and limitations. The focus of this work is mainly on soft exosuit devices, but some rigid and hybrid exoskeleton devices are also discussed as a means of comparison. Subsequently, this manuscript delineates explicit design guidelines with the intent of fostering a systematic approach toward innovation in the realm of upper-limb rehabilitation technology. Through an examination of current concepts and technological paradigms, this study seeks to contribute nuanced insights aimed at optimizing both efficacy and user experience in rehabilitation device design. The culmination of this critical analysis results in the proposal of a systematic design procedure to inform and influence the trajectory of specific user-tailored innovations within the domain of upper-limb rehabilitation devices. The proposed approach enables the identification of features and weaknesses in existing devices, also facilitating the design of innovative solutions for unsolved issues in the field of wearable robotics. 
A design example is presented to clarify the proposed design procedure.</div></div>","PeriodicalId":49592,"journal":{"name":"Robotics and Autonomous Systems","volume":"183 ","pages":"Article 104835"},"PeriodicalIF":4.3,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142527644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}