
Latest Publications in Robotics

Evaluation of a Voice-Enabled Autonomous Camera Control System for the da Vinci Surgical Robot
IF 3.7 Q2 ROBOTICS Pub Date : 2024-01-01 DOI: 10.3390/robotics13010010
Reenu Arikkat Paul, Luay Jawad, Abhishek Shankar, Maitreyee Majumdar, Troy Herrick-Thomason, Abhilash Pandya
Robotic surgery involves significant task switching between tool control and camera control, which can be a source of distraction and error. This study evaluated the performance of a voice-enabled autonomous camera control system compared to a human-operated camera for the da Vinci surgical robot. Twenty subjects performed a series of tasks that required them to instruct the camera to move to specific locations to complete the tasks. The subjects performed the tasks (1) using an automated camera system that could be tailored based on keywords; and (2) directing a human camera operator using voice commands. The data were analyzed using task completion measures and the NASA Task Load Index (TLX) human performance metrics. The human-operated camera control method outperformed the automated algorithm in terms of task completion (6.96 vs. 7.71 correct insertions; p-value = 0.044). However, subjective feedback suggests that a voice-enabled autonomous camera control system is comparable to a human-operated camera control system. Based on the subjects’ feedback, thirteen of the twenty subjects, including the surgeon, preferred the voice-enabled autonomous camera control system. This study is a step towards a more natural language interface for surgical robotics as these systems become better partners during surgery.
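The keyword-tailored camera behavior described in the abstract can be pictured as a simple dispatch from recognized keywords to camera motions. The sketch below is purely illustrative; the keywords, step size, and coordinate convention are assumptions, not the paper's actual command set.

```python
# Minimal sketch of keyword-based voice command dispatch for a camera.
# Keywords and step size are hypothetical illustrations.

CAMERA_STEP = 0.01  # meters per command (assumed)

KEYWORD_OFFSETS = {
    "left":  (-CAMERA_STEP, 0.0, 0.0),
    "right": (CAMERA_STEP, 0.0, 0.0),
    "up":    (0.0, CAMERA_STEP, 0.0),
    "down":  (0.0, -CAMERA_STEP, 0.0),
    "in":    (0.0, 0.0, CAMERA_STEP),   # move toward the scene
    "out":   (0.0, 0.0, -CAMERA_STEP),  # move away from the scene
}

def interpret(utterance: str, position: tuple) -> tuple:
    """Apply each recognized keyword in the utterance to the camera position."""
    for word in utterance.lower().split():
        if word in KEYWORD_OFFSETS:
            dx, dy, dz = KEYWORD_OFFSETS[word]
            x, y, z = position
            position = (x + dx, y + dy, z + dz)
    return position

print(interpret("move left", (0.0, 0.0, 0.0)))  # x shifted by -0.01
```

Unrecognized words are simply ignored, which is one plausible way such a system stays robust to filler speech.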
Citations: 0
NU-Biped-4.5: A Lightweight and Low-Prototyping-Cost Full-Size Bipedal Robot
IF 3.7 Q2 ROBOTICS Pub Date : 2023-12-31 DOI: 10.3390/robotics13010009
Michele Folgheraiter, Sharafatdin Yessirkepov, T. Umurzakov
This paper presents the design of a new lightweight, full-size bipedal robot developed in the Humanoid Robotics Laboratory at Nazarbayev University. The robot, equipped with 12 degrees of freedom (DOFs), stands 1.1 m tall and weighs only 15 kg (excluding the battery). Through the implementation of a simple mechanical design and the utilization of off-the-shelf components, the overall prototype cost remained under USD 5000. The incorporation of high-performance in-house-developed servomotors enables the robot’s actuation system to generate up to 2400 W of mechanical power, resulting in a power-to-weight ratio of 160 W/kg. The details of the mechanical and electrical design are presented alongside the formalization of the forward kinematic model using the successive screw displacement method and the solution of the inverse kinematics. Tests conducted in both a simulation environment and on the real prototype demonstrate that the robot is capable of accurately following the reference joint trajectories to execute a quasi-static gait, achieving an average power consumption of 496 W.
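The reported power-to-weight ratio follows directly from the figures in the abstract, as this quick check confirms:

```python
# Sanity check of the reported actuation figures from the abstract.
mechanical_power_w = 2400  # total mechanical power of the actuation system (W)
mass_kg = 15               # robot mass excluding the battery (kg)

power_to_weight = mechanical_power_w / mass_kg
print(power_to_weight)  # 160.0 W/kg, matching the reported ratio
```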
Citations: 0
Probability-Based Strategy for a Football Multi-Agent Autonomous Robot System
IF 3.7 Q2 ROBOTICS Pub Date : 2023-12-23 DOI: 10.3390/robotics13010005
António Fernando Alcântara Ribeiro, Ana Carolina Coelho Lopes, Tiago Alcântara Ribeiro, Nino Sancho Sampaio Martins Pereira, Gil Teixeira Lopes, António Fernando Alcântara Ribeiro
The strategies of multi-autonomous cooperative robots in a football game can be solved in multiple ways. Still, the most common is the “Skills, Tactics and Plays (STP)” architecture, developed so that robots could easily cooperate based on a group of predefined plays, called the playbook. The development of the new strategy algorithm presented in this paper, used by the RoboCup Middle Size League LAR@MSL team, took a completely different approach from most other teams for multiple reasons. Contrary to the typical STP architecture, this strategy, called the Probability-Based Strategy (PBS), uses only skills and decides the outcome of the tactics and plays in real time based on probability values assigned to the possible actions in each situation. The action probability values also affect the robot’s positioning in a way that optimizes the overall probability of scoring a goal. The strategy uses centralized decision making rather than the robots’ self-control. Each robot remains fully autonomous in the skills assigned to it and uses a communication system with the main computer to synchronize all robots. Also, calibration and any strategy improvements are independent of the robots themselves: the robots’ performance affects the results but does not interfere with the strategy outcome. Moreover, the strategy outcome depends primarily on the opponent team and the probability calibration for each action. The strategy presented has been fully implemented on the team and tested in multiple scenarios, such as simulators, a controlled environment, against humans in a simulator, and in the RoboCup competition.
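The core selection step of a probability-based strategy can be sketched in a few lines: each candidate action carries a value reflecting its estimated chance of success, and the strategy picks the highest-valued one. The action names and values below are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of probability-based action selection:
# each candidate action is assigned a probability value, and the
# centralized strategy picks the action with the highest value.

def select_action(action_values: dict) -> str:
    """Return the action whose assigned probability value is highest."""
    return max(action_values, key=action_values.get)

# Example situation with invented calibration values.
situation = {"pass": 0.55, "dribble": 0.30, "shoot": 0.15}
print(select_action(situation))  # pass
```

In the full system, these values would also feed into positioning, since the paper states that action probabilities affect where robots place themselves.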
Citations: 0
An Enhanced Multi-Sensor Simultaneous Localization and Mapping (SLAM) Framework with Coarse-to-Fine Loop Closure Detection Based on a Tightly Coupled Error State Iterative Kalman Filter
IF 3.7 Q2 ROBOTICS Pub Date : 2023-12-21 DOI: 10.3390/robotics13010002
Changhao Yu, Zichen Chao, Haoran Xie, Yue Hua, Weitao Wu
In order to attain precise and robust transformation estimation in simultaneous localization and mapping (SLAM) tasks, the integration of multiple sensors has demonstrated effectiveness and significant potential in robotics applications. Our work emerges as a rapid tightly coupled LIDAR-inertial-visual SLAM system, comprising three tightly coupled components: the LIO module, the VIO module, and the loop closure detection module. The LIO module directly constructs raw scanning point increments into a point cloud map for matching. The VIO component performs image alignment by aligning the observed points, and the loop closure detection module imparts real-time cumulative error correction through factor graph optimization using the iSAM2 optimizer. The three components are integrated via an error state iterative Kalman filter (ESIKF). To alleviate computational efforts in loop closure detection, a coarse-to-fine point cloud matching approach is employed, leveraging Quatro for deriving a priori state for keyframe point clouds and NanoGICP for detailed transformation computation. Experimental evaluations conducted on both open and private datasets substantiate the superior performance of the proposed method compared to similar approaches. The results indicate the adaptability of this method to various challenging situations.
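The coarse-to-fine idea is to gate expensive fine registration behind a cheap coarse test. The sketch below illustrates only that control flow: `coarse_align` and `fine_align` are stand-ins for Quatro and NanoGICP, whose real APIs differ entirely from this toy version.

```python
# Schematic sketch of coarse-to-fine loop-closure matching.
# coarse_align / fine_align are hypothetical placeholders, NOT the
# Quatro or NanoGICP APIs; the point is the two-stage gating pattern.

def coarse_align(query, keyframe):
    """Cheap global check: distance between point-cloud centroids."""
    cq = [sum(axis) / len(axis) for axis in zip(*query)]
    ck = [sum(axis) / len(axis) for axis in zip(*keyframe)]
    return sum((a - b) ** 2 for a, b in zip(cq, ck)) ** 0.5

def fine_align(query, keyframe):
    """Placeholder for an ICP-style refinement (e.g. GICP)."""
    return coarse_align(query, keyframe) * 0.5

def detect_loop(query, keyframes, coarse_thresh=1.0, fine_thresh=0.2):
    """Run the expensive fine stage only on keyframes passing the coarse gate."""
    best = None
    for idx, kf in enumerate(keyframes):
        if coarse_align(query, kf) > coarse_thresh:
            continue  # rejected cheaply; no fine matching needed
        err = fine_align(query, kf)
        if err < fine_thresh and (best is None or err < best[1]):
            best = (idx, err)
    return best  # (keyframe index, fine error) or None

query = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
print(detect_loop(query, [query, [(9.0, 9.0, 9.0)]]))  # matches keyframe 0
```

The saving comes from the coarse gate: most keyframes are discarded before the costly registration step ever runs.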
Citations: 0
Playing Checkers with an Intelligent and Collaborative Robotic System
IF 3.7 Q2 ROBOTICS Pub Date : 2023-12-21 DOI: 10.3390/robotics13010004
Giuliano Fabris, Lorenzo Scalera, Alessandro Gasparetto
Collaborative robotics represents a modern and efficient framework in which machines can safely interact with humans. Coupled with artificial intelligence (AI) systems, collaborative robots can solve problems that require a certain degree of intelligence not only in industry but also in the entertainment and educational fields. Board games like chess or checkers are a good example. When playing these games, a robotic system has to recognize the board and pieces and estimate their position in the robot reference frame, decide autonomously which is the best move to make (respecting the game rules), and physically execute it. In this paper, an intelligent and collaborative robotic system is presented to play Italian checkers. The system is able to acquire the game state using a camera, select the best move among all the possible ones through a decision-making algorithm, and physically manipulate the game pieces on the board, performing pick-and-place operations. Minimum-time trajectories are optimized online for each pick-and-place operation of the robot so as to make the game more fluent and interactive while meeting the kinematic constraints of the manipulator. The developed system is tested in a real-world setup using a Franka Emika arm with seven degrees of freedom. The experimental results demonstrate the feasibility and performance of the proposed approach.
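The "select the best move" step can be illustrated with a one-ply greedy search. The board encoding, move encoding, and material heuristic below are all invented for illustration; the abstract does not specify the paper's actual decision-making algorithm.

```python
# Illustrative one-ply move selection for a checkers-playing system.
# Board/move encodings and the heuristic are hypothetical.

def evaluate(board: str) -> int:
    """Material balance: white pieces minus black pieces."""
    return board.count("w") - board.count("b")

def best_move(board, legal_moves, apply_move):
    """Greedy search: pick the move whose resulting board scores best."""
    return max(legal_moves, key=lambda m: evaluate(apply_move(board, m)))

# Toy setup: each "move" m captures m black pieces.
apply_move = lambda b, m: b.replace("b", "", m)
print(best_move("wwbbb", [1, 2], apply_move))  # 2 (the double capture)
```

A real checkers engine would search deeper (e.g. minimax with alpha-beta pruning), but the structure, enumerate legal moves and rank their outcomes, stays the same.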
Citations: 0
A Review of Trajectory Prediction Methods for the Vulnerable Road User
IF 3.7 Q2 ROBOTICS Pub Date : 2023-12-19 DOI: 10.3390/robotics13010001
Erik Schuetz, Fabian B. Flohr
Predicting the trajectory of other road users, especially vulnerable road users (VRUs), is an important aspect of safety and planning efficiency for autonomous vehicles. With recent advances in Deep-Learning-based approaches in this field, physics-based and classical Machine-Learning-based methods no longer achieve results competitive with the former. Hence, this paper provides an extensive review of recent Deep-Learning-based methods in trajectory prediction for VRUs and autonomous driving in general. We review the state and context representations and architectural insights of selected methods, divided into categories according to their primary prediction scheme. Additionally, we summarize reported results on popular datasets for all methods presented in this review. The results show that conditional variational autoencoders achieve the best overall results on both pedestrian and autonomous driving datasets. Finally, we outline possible future research directions for the field of trajectory prediction in autonomous driving.
Citations: 0
A Novel Control Architecture Based on Behavior Trees for an Omni-Directional Mobile Robot
IF 3.7 Q2 ROBOTICS Pub Date : 2023-12-16 DOI: 10.3390/robotics12060170
Rodrigo Bernardo, João M. C. Sousa, M. Botto, Paulo J. S. Gonçalves
Robotic systems are increasingly present in dynamic environments. This paper proposes a hierarchical control structure wherein a behavior tree (BT) is used to improve the flexibility and adaptability of an omni-directional mobile robot for point stabilization. Flexibility and adaptability are crucial at each level of the sense–plan–act loop to implement robust and effective robotic solutions in dynamic environments. The proposed BT combines high-level decision making and continuous execution monitoring while applying non-linear model predictive control (NMPC) for the point stabilization of an omni-directional mobile robot. The proposed control architecture can guide the mobile robot to any configuration within the workspace while satisfying state constraints (e.g., obstacle avoidance) and input constraints (e.g., motor limits). The effectiveness of the controller was validated through a set of realistic simulation scenarios and experiments in a real environment, where an industrial omni-directional mobile robot performed a point stabilization task with obstacle avoidance in a workspace.
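The behavior-tree layer can be pictured with standard BT semantics: a Sequence node ticks its children in order and succeeds only if all of them do. The sketch below shows generic BT mechanics with invented leaf behaviors, not the paper's actual tree or its NMPC implementation.

```python
# Minimal behavior-tree sketch using generic BT semantics.
# Leaf behaviors are hypothetical stand-ins for the paper's nodes.

SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Ticks children left to right; fails as soon as one child fails."""
    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child() == FAILURE:
                return FAILURE
        return SUCCESS

def obstacle_free():
    return SUCCESS  # e.g. sensors report a clear path (assumed leaf)

def drive_to_goal():
    return SUCCESS  # e.g. one NMPC step toward the target pose (assumed leaf)

tree = Sequence([obstacle_free, drive_to_goal])
print(tree.tick())  # success
```

Re-ticking the tree every control cycle is what gives the architecture its continuous execution monitoring: a leaf that starts failing (say, an obstacle appears) immediately redirects the flow.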
Citations: 0
Emotional Experience in Human–Robot Collaboration: Suitability of Virtual Reality Scenarios to Study Interactions beyond Safety Restrictions
IF 3.7 Q2 ROBOTICS Pub Date : 2023-12-08 DOI: 10.3390/robotics12060168
Franziska Legler, Jonas Trezl, Dorothea Langer, Max Bernhagen, A. Dettmann, A. Bullinger
Today’s research on fenceless human–robot collaboration (HRC) is challenged by the relatively slow development of safety features. Simultaneously, industry is requesting design recommendations for HRC. To simulate HRC scenarios in advance, virtual reality (VR) technology can be utilized while ensuring safety. VR also allows researchers to study the effects of safety-restricted features, such as close proximity during movements and robotic malfunction events. In this paper, we present a VR experiment with 40 participants collaborating with a heavy-load robot and compare the results to a similar real-world experiment to study transferability and validity. The participants’ proximity to the robot, the interaction level, and the occurrence of system failures were varied. State anxiety, trust, and intention to use were used as dependent variables, and valence and arousal values were assessed over time. Overall, state anxiety was low, and trust and intention to use were high. Only simulated failures significantly increased state anxiety, reduced trust, and resulted in reduced valence and increased arousal. In comparison with the real-world experiment, non-significant differences in all dependent variables and similar progressions of valence and arousal were found during scenarios without system failures. Therefore, the suitability of applying VR in HRC research to study safety-restricted features can be supported; however, further research should examine transferability for high-intensity emotional experiences.
Citations: 0
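The abstract above compares dependent measures such as state anxiety between a VR group and a real-world group and reports non-significant differences. As a hedged illustration of that kind of two-group comparison — not the authors' analysis code, and with made-up data — a minimal pure-Python sketch of Welch's t statistic for two independent samples with possibly unequal variances:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent samples (unequal variances allowed)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    se2 = va / na + vb / nb                          # squared standard error
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    # Welch–Satterthwaite approximation of the degrees of freedom.
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Fed per-participant scores from two groups, a small |t| relative to the t distribution with df degrees of freedom indicates a non-significant group difference, which is the pattern the abstract reports for scenarios without system failures.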
Evaluating the Performance of Mobile-Convolutional Neural Networks for Spatial and Temporal Human Action Recognition Analysis
IF 3.7 Q2 ROBOTICS Pub Date : 2023-12-08 DOI: 10.3390/robotics12060167
Stavros N. Moutsis, Konstantinos A. Tsintotas, Ioannis Kansizoglou, Antonios Gasteratos
Human action recognition is a computer vision task that identifies how a person or a group acts in a video sequence. Various methods that rely on deep-learning techniques, such as two- or three-dimensional convolutional neural networks (2D-CNNs, 3D-CNNs), recurrent neural networks (RNNs), and vision transformers (ViT), have been proposed to address this problem over the years. Motivated by the high complexity of most CNNs used in human action recognition, and by the need for implementations on mobile platforms with restricted computational resources, in this article we conduct an extensive evaluation protocol over the performance metrics of five lightweight architectures. In particular, we examine how these mobile-oriented CNNs (viz., ShuffleNet-v2, EfficientNet-b0, MobileNet-v3, and GhostNet) perform in spatial analysis compared to a recent tiny ViT, namely EVA-02-Ti, and a higher-capacity model, ResNet-50. Our models, previously trained on ImageNet and BU101, are measured for their classification accuracy on HMDB51, UCF101, and six classes of the NTU dataset. The average and max scores, as well as the voting approaches, are generated from three and fifteen RGB frames of each video, while two different rates for the dropout layers were assessed during training. Last, a temporal analysis via multiple types of RNNs that employ features extracted by the trained networks is examined. Our results reveal that EfficientNet-b0 and EVA-02-Ti surpass the other mobile CNNs, achieving comparable or superior performance to ResNet-50.
Citations: 0
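The abstract above mentions combining per-frame classifier outputs into a video-level prediction via average and max scores as well as a voting approach, over three or fifteen RGB frames per video. A minimal sketch of those three aggregation schemes (hypothetical scores, not the paper's code):

```python
def aggregate_predictions(frame_scores):
    """Combine per-frame class scores into a video-level prediction.

    frame_scores: one inner list per sampled frame, each holding one
    score per class (e.g. softmax outputs). Returns the predicted
    class index under three schemes: average score across frames,
    max score across frames, and majority vote over per-frame argmaxes.
    """
    n_classes = len(frame_scores[0])

    # Average scheme: mean score per class across all frames.
    avg = [sum(f[c] for f in frame_scores) / len(frame_scores)
           for c in range(n_classes)]

    # Max scheme: highest score per class seen in any frame.
    mx = [max(f[c] for f in frame_scores) for c in range(n_classes)]

    # Voting scheme: each frame casts one vote for its argmax class.
    votes = [0] * n_classes
    for f in frame_scores:
        votes[f.index(max(f))] += 1

    argmax = lambda xs: xs.index(max(xs))
    return {"average": argmax(avg), "max": argmax(mx), "vote": argmax(votes)}
```

The schemes can disagree: a single very confident frame dominates the max scheme, while the voting scheme follows whichever class wins the most frames.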
NOHAS: A Novel Orthotic Hand Actuated by Servo Motors and Mobile App for Stroke Rehabilitation
IF 3.7 Q2 ROBOTICS Pub Date : 2023-12-08 DOI: 10.3390/robotics12060169
Ebenezer Raj Selvaraj Mercyshalinie, A. Ghadge, N. Ifejika, Yonas T. Tadesse
The rehabilitation process after the onset of a stroke primarily deals with assisting in regaining mobility, communication skills, swallowing function, and activities of daily living (ADLs). This entirely depends on the specific regions of the brain that have been affected by the stroke. Patients can learn how to utilize adaptive equipment, regain movement, and reduce muscle spasticity through certain repetitive exercises and therapeutic interventions. These exercises can be performed by wearing soft robotic gloves on the impaired extremity. For post-stroke rehabilitation, we have designed and characterized an interactive hand orthosis with tendon-driven finger actuation mechanisms actuated by servo motors, which consists of a fabric glove and force-sensitive resistors (FSRs) at the tip. The robotic device moves the user’s hand when operated by mobile phone to replicate normal gripping behavior. In this paper, the characterization of finger movements in response to step input commands from a mobile app was carried out for each finger at the proximal interphalangeal (PIP), distal interphalangeal (DIP), and metacarpophalangeal (MCP) joints. In general, servo motor-based hand orthoses are energy-efficient; however, they generate noise during actuation. Here, we quantified the noise generated by servo motor actuation for each finger as well as when a group of fingers is simultaneously activated. To test ADL ability, we evaluated the device’s effectiveness in holding different objects from the Action Research Arm Test (ARAT) kit. Our device, novel hand orthosis actuated by servo motors (NOHAS), was tested on ten healthy human subjects and showed an average of 90% success rate in grasping tasks. Our orthotic hand shows promise for aiding post-stroke subjects recover because of its simplicity of use, lightweight construction, and carefully designed components.
Citations: 0
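The abstract above describes tendon-driven fingers actuated by servo motors with a force-sensitive resistor (FSR) at each fingertip. As a hedged sketch of how such a closed-loop grip could work — a hypothetical interface, not the NOHAS firmware — the loop below ramps a tendon-driving servo until the fingertip FSR reports sufficient contact force:

```python
def close_finger(read_fsr, set_servo_angle, force_threshold=0.5,
                 start_angle=0, max_angle=120, step=2):
    """Close one tendon-driven finger until contact is sensed.

    read_fsr() is assumed to return a normalized fingertip force in
    [0, 1]; set_servo_angle(deg) commands the servo. The servo angle is
    ramped in small increments (keeping the grip gentle) until either
    the FSR crosses force_threshold or the mechanical limit max_angle
    is reached. Returns the final commanded angle.
    """
    angle = start_angle
    set_servo_angle(angle)
    while angle < max_angle and read_fsr() < force_threshold:
        angle += step
        set_servo_angle(angle)
    return angle
```

In simulation, the hardware callbacks can be stubbed out: a fake FSR that reports contact once the commanded angle passes some value lets the stopping behavior be checked without a physical glove.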