
2019 International Conference on Robotics and Automation (ICRA): Latest Publications

Design and Analysis of A Miniature Two-Wheg Climbing Robot with Robust Internal and External Transitioning Capabilities
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793910
Darren C. Y. Koh, A. G. Dharmawan, H. Hariri, G. Soh, S. Foong, Roland Bouffanais, H. Low, K. Wood
Plane-to-plane transitioning has been a significant challenge for climbing robots. To accomplish it, an additional actuator or robot module is usually required, which significantly increases both the size and weight of the robot. This paper presents a two-wheg miniature climbing robot with a novel passive vertical tail component that results in robust transitioning capabilities. The design decision was derived from an in-depth force analysis of the climbing robot while performing the transition. The theoretical analysis is verified through a working prototype with robust transitioning capabilities whose performance closely follows the analytical prediction. The climbing robot is able to climb slopes of any angle and perform 4-way internal and 4-way external transitions. This work contributes to the understanding and advancement of transitioning capabilities and the design of a simple climbing robot, which expands the possibilities of scaling down miniature climbing robots further.
Citations: 8
Accounting for Part Pose Estimation Uncertainties during Trajectory Generation for Part Pick-Up Using Mobile Manipulators
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793501
Shantanu Thakar, P. Rajendran, Vivek Annem, A. Kabir, Satyandra K. Gupta
To minimize operation time, mobile manipulators need to pick up parts while the mobile base and the gripper are moving. The gripper speed needs to be selected to ensure that the pick-up operation does not fail due to uncertainties in part pose estimation. This, in turn, affects the mobile base trajectory. This paper presents an active-learning-based approach to construct a meta-model that estimates the probability of successful part pick-up for a given level of uncertainty in the part pose estimate. Using this model, we present an optimization-based framework to generate time-optimal trajectories that satisfy a given success-probability threshold for picking up the part.
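For readers who want to see the shape of such a meta-model, the following is a minimal sketch, not the authors' implementation: a logistic-regression surrogate trained on synthetic trials maps gripper speed and part-pose uncertainty to a success probability, and a hypothetical helper fastest_feasible_speed picks the largest speed that still meets a required threshold. All data, thresholds, and names here are illustrative assumptions.

```python
# Sketch only: synthetic surrogate for pick-up success probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic trials: success is less likely at high speed and high pose uncertainty.
speed = rng.uniform(0.05, 1.0, 2000)          # gripper speed [m/s]
sigma = rng.uniform(0.0, 0.03, 2000)          # part-pose std dev [m]
logit = 6.0 - 4.0 * speed - 150.0 * sigma
success = rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([speed, sigma])
meta_model = LogisticRegression(max_iter=1000).fit(X, success)

def fastest_feasible_speed(pose_sigma, p_min=0.95):
    """Largest gripper speed whose predicted success probability >= p_min."""
    candidates = np.linspace(0.05, 1.0, 100)
    probs = meta_model.predict_proba(
        np.column_stack([candidates, np.full_like(candidates, pose_sigma)]))[:, 1]
    ok = candidates[probs >= p_min]
    return ok.max() if ok.size else None

print(fastest_feasible_speed(pose_sigma=0.01))
```

In a trajectory optimizer, such a query would act as a constraint linking the gripper speed at the pick-up instant to the required success probability.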
Citations: 20
Robotic Forceps without Position Sensors using Visual SLAM
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794321
Takuya Iwai, T. Kanno, Tetsuro Miyazaki, Toshihiro Kawase, K. Kawashima
In this study, a robotic forceps with a wrist joint that uses visual SLAM for joint angle sensing was developed. The forceps has a flexible joint connected to the wrist joint at its rear end, and the motion of the rear joint is driven by a parallel linkage. A monocular camera attached to the rear of the parallel linkage is in charge of position sensing, and the joint angles are estimated from the pose of the camera, which is obtained by visual SLAM. The visual servo system enables a simple attachment mechanism. Static and dynamic positioning experiments were conducted, and we confirmed that the visual servoing system controls the forceps tip within an error of 3 deg over a motion range of 50 deg.
Citations: 0
Integrated UWB-Vision Approach for Autonomous Docking of UAVs in GPS-denied Environments
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793851
Thien-Minh Nguyen, T. Nguyen, Muqing Cao, Zhirong Qiu, Lihua Xie
Though vision-based techniques have become quite popular for autonomous docking of Unmanned Aerial Vehicles (UAVs), due to the limited field of view (FOV), the UAV must rely on other methods to detect and approach the target before vision can be used. In this paper we propose a method combining an Ultra-wideband (UWB) ranging sensor with vision-based techniques to achieve both autonomous approaching and landing capabilities in GPS-denied environments. In the approaching phase, a robust and efficient recursive least-squares optimization algorithm is proposed to estimate the position of the UAV relative to the target from distance and relative displacement measurements. Using this estimate, the UAV is able to approach the target until the landing pad is detected by an onboard vision system; then UWB measurements and vision-derived poses are fused with the UAV's onboard sensors to facilitate an accurate landing maneuver. Real-world experiments are conducted to demonstrate the efficiency of our method.
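As a rough illustration of the approaching phase, the sketch below (my simplification, not the paper's algorithm) differences squared UWB ranges against the first measurement so the unknown relative target position appears linearly, and then refines it online with a standard recursive least-squares update. The RangeRLS class, trajectory, and noise levels are assumptions made for the example.

```python
# Sketch only: online estimation of a static target position from UWB ranges
# plus known relative displacement of the UAV.
import numpy as np

class RangeRLS:
    def __init__(self, dim=3, forgetting=1.0):
        self.theta = np.zeros(dim)          # estimated target position
        self.P = 1e3 * np.eye(dim)          # RLS "covariance" matrix
        self.lam = forgetting
        self.x0 = None                      # first UAV position and range
        self.d0 = None

    def update(self, x, d):
        """x: UAV position from odometry (3,), d: UWB range to the target."""
        if self.x0 is None:
            self.x0, self.d0 = np.asarray(x, float), float(d)
            return self.theta
        # Differencing squared ranges: 2*(x0 - x) . p = d^2 - d0^2 - |x|^2 + |x0|^2
        a = 2.0 * (self.x0 - x)                                  # regressor row
        y = d**2 - self.d0**2 - x @ x + self.x0 @ self.x0        # linear measurement
        Pa = self.P @ a
        k = Pa / (self.lam + a @ Pa)                             # RLS gain
        self.theta = self.theta + k * (y - a @ self.theta)
        self.P = (self.P - np.outer(k, Pa)) / self.lam
        return self.theta

# Usage with noisy synthetic data.
gen = np.random.default_rng(1)
target = np.array([4.0, -2.0, 1.5])
rls = RangeRLS()
for t in np.linspace(0, 10, 200):
    uav = np.array([np.cos(t), np.sin(t), 0.3 * t])             # known displacement
    d_meas = np.linalg.norm(target - uav) + 0.05 * gen.standard_normal()
    est = rls.update(uav, d_meas)
print(est)       # should approach `target`
```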
Citations: 41
Control and Configuration Planning of an Aerial Cable Towed System
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794396
Julian Erskine, A. Chriette, S. Caro
This paper investigates the effect of the robot configuration on the performance of an aerial cable towed system (ACTS) composed of three quadrotors manipulating a point-mass payload. The kinematic and dynamic models of the ACTS are derived in a minimal set of geometric coordinates, and a centralized feedback linearization controller is developed. Independently of the payload trajectory, the configuration of the ACTS is controlled and evaluated using a robustness index named the capacity margin. Experiments are performed with optimal, suboptimal, and wrench-infeasible configurations. It is shown that configurations near the point of zero capacity margin allow the ACTS to hover but not to follow dynamic trajectories, and that the ACTS cannot fly with a negative capacity margin. Dynamic tests of the ACTS show the effects of the configuration on the achievable accelerations.
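To make the wrench-feasibility idea behind a capacity-margin-style index concrete, here is a hedged sketch (not the paper's formulation): a linear program checks whether bounded cable tensions can reproduce a required payload force, and a bisection estimates how much extra vertical load stays feasible. The geometry, tension bounds, and the helper names cable_matrix and tensions_feasible are assumptions for illustration.

```python
# Sketch only: wrench feasibility for a 3-cable point-mass payload.
import numpy as np
from scipy.optimize import linprog

def cable_matrix(anchor_points, payload_pos):
    """Columns are unit vectors from the payload towards each quadrotor."""
    u = np.asarray(anchor_points, float) - np.asarray(payload_pos, float)
    return (u / np.linalg.norm(u, axis=1, keepdims=True)).T   # 3 x n

def tensions_feasible(W, f_required, t_min=1.0, t_max=30.0):
    """True if some tension vector within bounds produces the required force."""
    n = W.shape[1]
    res = linprog(c=np.zeros(n), A_eq=W, b_eq=f_required,
                  bounds=[(t_min, t_max)] * n, method="highs")
    return res.success

# Payload at the origin, quadrotors arranged above it.
quads = [[1.5, 0.0, 2.0], [-0.8, 1.2, 2.0], [-0.8, -1.2, 2.0]]
W = cable_matrix(quads, [0.0, 0.0, 0.0])
m, g = 0.5, 9.81
f_req = np.array([0.0, 0.0, m * g])          # hover: cables must carry the weight
print("hover feasible:", tensions_feasible(W, f_req))

# Crude margin: bisect the largest additional vertical load that stays feasible.
lo, hi = 0.0, 200.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if tensions_feasible(W, f_req + np.array([0.0, 0.0, mid])):
        lo = mid
    else:
        hi = mid
print("approx. vertical load margin [N]:", lo)
```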
Citations: 7
ModQuad-Vi: A Vision-Based Self-Assembling Modular Quadrotor
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794056
Guanrui Li, Bruno Gabrich, David Saldaña, J. Das, Vijay R. Kumar, Mark H. Yim
Flying modular robots have the potential to rapidly form temporary structures. In the literature, docking actions rely on external systems and indoor infrastructure for relative pose estimation. In contrast to related work, we provide local estimation during the self-assembly process to avoid dependency on external systems. In this paper, we introduce ModQuad-Vi, a flying modular robot that is intended to operate in outdoor environments. We propose a new robot design and a vision-based docking method. Our design is based on a quadrotor platform with onboard computation and visual perception. Our control method is able to accurately align modules for docking actions. Additionally, we present the dynamics and a geometric controller for the aerial modular system. Experiments validate the vision-based docking method with successful results.
Citations: 26
Intent-Uncertainty-Aware Grasp Planning for Robust Robot Assistance in Telemanipulation
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8793819
Michael Bowman, Songpo Li, Xiaoli Zhang
Promoting a robot agent's autonomy level, which allows it to understand the human operator's intent and provide motion assistance to achieve it, has demonstrated great advantages in teleoperation. However, the research has been limited to the target-approaching process. We advance the shared control technique one step further to deal with the more challenging object manipulation task. Appropriately manipulating an object is challenging because it requires fine motion constraints for a certain manipulation task. Although these motion constraints are critical for task success, they are subtle to observe from ambiguous human motion. The disembodiment problem and the physical discrepancy between the human and robot hands bring additional uncertainty and make the object manipulation task more challenging. Moreover, there is a lack of modeling and planning techniques that can effectively combine the human motion input and the robot agent's motion input while accounting for the ambiguity of the human intent. To overcome this challenge, we built a multi-task robot grasping model and developed an intent-uncertainty-aware grasp planner to generate robust grasp poses given ambiguous human intent inference inputs. With these validated modeling and planning techniques, we expect to extend teleoperated robots' functionality and adoption in practical telemanipulation scenarios.
Citations: 12
Visual Guidance and Automatic Control for Robotic Personalized Stent Graft Manufacturing
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794123
Yu Guo, Miao Sun, F. P. Lo, Benny P. L. Lo
Personalized stent grafts are designed to treat Abdominal Aortic Aneurysms (AAA). Due to individual differences in arterial structures, a stent graft has to be custom made for each AAA patient. Robotic platforms for autonomous personalized stent graft manufacturing have recently been proposed which rely upon stereo vision systems for coordinating multiple robots fabricating customized stent grafts. This paper proposes a novel hybrid vision system for real-time visual servoing for personalized stent graft manufacturing. To coordinate the robotic arms, this system projects a dynamic stereo microscope coordinate system onto a static wide-angle-view stereo webcam coordinate system. The multiple-stereo-camera configuration enables accurate localization of the needle in 3D during the sewing process. The scale-invariant feature transform (SIFT) method and color filtering are implemented for stereo matching and feature identification for object localization. To maintain a clear view of the sewing process, a visual servoing system is developed to guide the stereo microscopes in tracking the needle movements. The deep deterministic policy gradient (DDPG) reinforcement learning algorithm is developed for real-time intelligent robotic control. Experimental results have shown that the robotic arm can learn to reach the desired targets autonomously.
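The SIFT-based stereo matching step can be illustrated with a short OpenCV sketch (not the paper's pipeline): detect SIFT keypoints in both views, keep matches passing Lowe's ratio test, and report horizontal disparities that a calibrated rig would convert to 3D needle positions. The image paths are placeholders.

```python
# Sketch only: SIFT stereo matching with a ratio test.
import cv2

left = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)     # placeholder image paths
right = cv2.imread("right_view.png", cv2.IMREAD_GRAYSCALE)
if left is None or right is None:
    raise SystemExit("provide a rectified stereo image pair to run this sketch")

sift = cv2.SIFT_create()
kp_l, des_l = sift.detectAndCompute(left, None)
kp_r, des_r = sift.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
pairs = matcher.knnMatch(des_l, des_r, k=2)

# Lowe's ratio test keeps only distinctive matches.
good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]

for m in good[:10]:
    xl, yl = kp_l[m.queryIdx].pt
    xr, _ = kp_r[m.trainIdx].pt
    disparity = xl - xr          # with focal length f and baseline B: depth = f * B / disparity
    print(f"match at ({xl:.1f}, {yl:.1f}) px, disparity {disparity:.1f} px")
```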
Citations: 0
Improving the Performance of Auxiliary Null Space Tasks via Time Scaling-Based Relaxation of the Primary Task
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794225
Nico Mansfeld, Youssef Michel, T. Bruckmann, S. Haddadin
Kinematic redundancy enhances the dexterity and flexibility of robot manipulators. By exploiting the redundant degrees of freedom, auxiliary null space tasks can be carried out in addition to the primary task. Such auxiliary tasks are often formulated in terms of a performance or safety criterion that is to be minimized. If the optimization criterion, however, is defined in global terms, then it is directly affected by the primary task. As a consequence, the achievement of the auxiliary task may be unnecessarily degraded by the main task. In addition to modifying the primary task via constraint relaxation, a possible solution for improving the performance of the auxiliary task is to relax the primary task temporarily via time scaling. This gives the null space task more time to achieve its objective. In this paper, we propose several such time scaling schemes and verify their performance for a DLR/KUKA Lightweight Robot with one redundant degree of freedom. Finally, we extend the concept to multiple prioritized tasks and provide a simulation example.
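A minimal sketch of the time-scaling idea, assuming a planar 3R arm and my own scaling rule rather than the schemes proposed in the paper: the end-effector velocity command is scaled down by a factor alpha whenever the combined task plus null-space joint velocities would exceed the joint speed limits, so the auxiliary posture task keeps its share of the available joint motion.

```python
# Sketch only: primary-task time scaling on a redundant planar 3R arm.
import numpy as np

L = np.array([0.4, 0.3, 0.2])                 # link lengths [m]
QDOT_MAX = 1.0                                # joint speed limit [rad/s]

def jacobian(q):
    """2x3 position Jacobian of the planar 3R arm."""
    c = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(c[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(c[i:]))
    return J

def command(q, xdot_des, q_pref, k_null=1.0):
    J = jacobian(q)
    J_pinv = np.linalg.pinv(J)
    N = np.eye(3) - J_pinv @ J                     # null-space projector
    qdot_null = N @ (k_null * (q_pref - q))        # auxiliary posture task
    qdot_task = J_pinv @ xdot_des

    # Time scaling: shrink only the primary-task share until the joint speed
    # limits are met (or a small floor on alpha is reached).
    alpha = 1.0
    while alpha > 1e-3 and np.max(np.abs(alpha * qdot_task + qdot_null)) > QDOT_MAX:
        alpha *= 0.9
    return alpha * qdot_task + qdot_null, alpha

q = np.array([0.3, 0.6, -0.4])
qdot, alpha = command(q, xdot_des=np.array([0.8, 0.0]), q_pref=np.zeros(3))
print("joint velocity command:", qdot, "task time-scaling factor:", alpha)
```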
Citations: 1
Surfel-Based Dense RGB-D Reconstruction With Global And Local Consistency
Pub Date : 2019-05-20 DOI: 10.1109/ICRA.2019.8794355
Yi Yang, W. Dong, M. Kaess
Achieving high surface reconstruction accuracy in dense mapping has been a desirable target for both the robotics and vision communities. In the robotics literature, simultaneous localization and mapping (SLAM) systems use RGB-D cameras to reconstruct a dense map of the environment. They leverage the depth input to provide accurate local pose estimation and a locally consistent model. However, drift in the pose tracking over time leads to misalignments and artifacts. On the other hand, offline computer vision methods, such as the pipeline that combines structure-from-motion (SfM) and multi-view stereo (MVS), estimate the camera poses by performing batch optimization. These methods achieve global consistency, but suffer from heavy computational loads. We propose a novel approach that integrates both methods to achieve locally and globally consistent reconstruction. First, we estimate poses of keyframes in the offline SfM pipeline to provide strong global constraints at relatively low cost. Afterwards, we compute odometry between frames driven by off-the-shelf SLAM systems with high local accuracy. We fuse the two pose estimates using factor graph optimization to generate accurate camera poses for dense reconstruction. Experiments on real-world and synthetic datasets demonstrate that our approach produces more accurate models compared to existing dense SLAM systems, while achieving significant speedup with respect to state-of-the-art SfM-MVS pipelines.
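The keyframe-prior plus odometry fusion can be illustrated with a heavily simplified sketch, using 1-D positions and dense normal equations instead of SE(3) poses and a sparse factor-graph solver: sparse keyframe priors pin down global drift while frame-to-frame odometry factors keep the estimate locally smooth. The noise levels and synthetic trajectory are assumptions.

```python
# Sketch only: fusing odometry factors with sparse keyframe priors.
import numpy as np

n = 50                                           # number of frames
gen = np.random.default_rng(2)
truth = np.cumsum(gen.uniform(0.5, 1.5, n))      # ground-truth 1-D trajectory

odom = np.diff(truth, prepend=0.0) + 0.05 * gen.standard_normal(n)   # drifty odometry
keyframes = np.arange(0, n, 10)                                      # every 10th frame
priors = truth[keyframes] + 0.01 * gen.standard_normal(len(keyframes))

rows, rhs, weights = [], [], []
for i in range(1, n):                            # odometry (between) factors
    r = np.zeros(n); r[i], r[i - 1] = 1.0, -1.0
    rows.append(r); rhs.append(odom[i]); weights.append(1.0 / 0.05**2)
for k, z in zip(keyframes, priors):              # keyframe prior (unary) factors
    r = np.zeros(n); r[k] = 1.0
    rows.append(r); rhs.append(z); weights.append(1.0 / 0.01**2)

A = np.array(rows); b = np.array(rhs); W = np.diag(weights)
x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)    # weighted least squares estimate

print("RMS error, raw odometry:", np.sqrt(np.mean((np.cumsum(odom) - truth) ** 2)))
print("RMS error, fused       :", np.sqrt(np.mean((x - truth) ** 2)))
```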
Citations: 5