
Latest publications — 2020 17th International Conference on Ubiquitous Robots (UR)

Virtual Reality for Offline Programming of Robotic Applications with Online Teaching Methods
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144806
Gabriele Bolano, A. Roennau, R. Dillmann, Albert Groz
Robotic systems are complex and commonly require experts to program the motions and interactions between all the different components. Operators with programming skills are usually needed to make the robot perform a new task, or even to apply small changes to its current behavior. For this reason, many tools have been developed to ease the programming of robotic systems. Online programming methods rely on using the robot itself to move it into the desired configurations. Simulation-based methods, on the other hand, enable offline teaching of the needed program without involving the actual hardware setup. Virtual Reality (VR) allows the user to program a robot safely and effortlessly, without the need to move the real manipulator. However, online programming methods are still needed for on-site adjustments, and a common interface between these two approaches is usually not available. In this work we propose a VR-based framework for programming robotic tasks. The deployed system architecture allows the defined programs to be integrated into existing tools for online teaching and execution on the real hardware. The proposed virtual environment enables intuitive definition of the entire task workflow without involving the real setup. Bilateral communication between this component and the robotic hardware lets the user introduce changes in the virtual environment as well as in the real system. In this way, both can be kept up to date with the latest changes and used interchangeably, exploiting the advantages of both methods in a flexible manner.
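The bilateral update idea in this abstract — whichever side (VR or real robot) last edits the task program pushes the change to the other — can be sketched minimally. All class and method names here are ours, purely illustrative, not from the paper:

```python
class ProgramSync:
    """Minimal sketch of bilateral program synchronization between a VR
    environment and the real robot system. The last edit on either side
    becomes the shared, up-to-date task program."""

    def __init__(self):
        self.vr_program = []      # task steps as seen in the virtual environment
        self.robot_program = []   # task steps deployed on the real hardware

    def update_from_vr(self, steps):
        # A change made in VR is mirrored to the real system.
        self.vr_program = list(steps)
        self.robot_program = list(steps)

    def update_from_robot(self, steps):
        # An on-site adjustment on the robot is mirrored back to VR.
        self.robot_program = list(steps)
        self.vr_program = list(steps)
```

In a real deployment the two sides would communicate over a middleware rather than share memory, but the invariant is the same: after every edit, both representations hold the latest program.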
Citations: 7
A 6-DOF hybrid actuation system for a medical robot under MRI environment
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144986
M. Farooq, S. Ko
Medical robotic systems are being widely developed for efficient operation. Robotics has offered many viable solutions for applications ranging from marine and industrial to domestic and medical. One of the key challenges is the design and control of actuation systems. This paper presents a 6-DOF actuation system for a medical robot deployed in a magnetic resonance imaging system. It is a hybrid system designed to actuate two distinct mechanisms: concentric-tube and tendon-actuated robots. The actuation system is designed to fit inside the bore of a commercially available Siemens® 3T MR scanner and to follow predefined anatomical constraints. As a preliminary analysis, the stroke of the developed actuation system was measured to characterize the workspace. Further experiments will be performed in the future to validate the effectiveness of the presented system.
Citations: 2
A Reliable Low-Cost Foot Contact Sensor for Legged Robots
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144878
Hyunwoo Nam, Qing Xu, D. Hong
In an unstructured environment, fast walking legged robots can easily damage itself or the crowd due to slips or missing desired contacts. Therefore, it is important to sense ground contacts for legged robots. This paper presents a low-cost, lightweight, simple and robust foot contact sensor designed for legged robots with point feet. First, the mechanical design of the foot is proposed. The foot detects contact as it presses against the ground through the deformation of a layer of polyurethane rubber, which allows the compressive displacement of the contact foot pad to trigger the enclosed sensor. This sensor is a binary contact sensor using pushbutton switches. The total weight of the foot contact sensor is 82g, and the cost of manufacturing one is less than $10 USD. Next, the effectiveness of the developed foot is confirmed through several experiments. The angle between the center axis of the foot and the ground is referred to as the contact angle in this paper. The foot contact sensor can reliably detect ground contact over contact angles between 30° to 150°. This prototype sensor can also withstand contact forces of over 80N for more than 10,000 steps.
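The detection logic of such a binary sensor reduces to two conditions: the pad must compress far enough to close the pushbutton, and the contact angle must fall inside the 30°–150° range reported in the abstract. A minimal sketch, where the displacement threshold is a hypothetical value of ours (the paper specifies the mechanism, not this number):

```python
def contact_detected(displacement_mm, contact_angle_deg, threshold_mm=1.0):
    """Binary ground-contact decision for a point-foot sensor.

    displacement_mm   -- compressive displacement of the foot pad
    contact_angle_deg -- angle between the foot's center axis and the ground
    threshold_mm      -- hypothetical pad travel needed to close the switch
    """
    pressed = displacement_mm >= threshold_mm
    angle_ok = 30.0 <= contact_angle_deg <= 150.0
    return pressed and angle_ok
```

A downstream gait controller would typically also debounce this signal over a few control cycles before trusting a contact event.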
Citations: 1
I-LOAM: Intensity Enhanced LiDAR Odometry and Mapping
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144987
Yeong-Sang Park, Hyesu Jang, Ayoung Kim
In this paper, we introduce an extension to the existing LiDAR Odometry and Mapping (LOAM) [1] that additionally considers LiDAR intensity. In an urban environment, planar structures from buildings and roads often introduce ambiguity in a certain direction. Incorporating the intensity value into the cost function prevents divergence caused by this structural ambiguity, thereby yielding more accurate odometry and mapping. Specifically, we update the edge- and plane-point correspondence search to include intensity. This simple but effective strategy shows meaningful improvement over the existing LOAM. The proposed method is validated on the KITTI dataset.
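The core idea — augmenting LOAM's geometric residual with an intensity term — can be illustrated for an edge correspondence. The weight `lam` and the exact form of the photometric term are our assumptions for illustration; the paper defines its own cost function:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from 3-D point p to the line through edge
    points a and b (LOAM's geometric edge residual)."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    # cross product ap x ab; its norm over |ab| is the point-line distance
    cx = ap[1] * ab[2] - ap[2] * ab[1]
    cy = ap[2] * ab[0] - ap[0] * ab[2]
    cz = ap[0] * ab[1] - ap[1] * ab[0]
    cross_norm = math.sqrt(cx * cx + cy * cy + cz * cz)
    ab_norm = math.sqrt(sum(v * v for v in ab))
    return cross_norm / ab_norm

def intensity_aware_cost(p, p_int, a, a_int, b, b_int, lam=0.1):
    """Combined residual: geometric distance plus a weighted intensity
    mismatch against the mean intensity of the matched edge points.
    lam is a hypothetical trade-off weight."""
    geo = point_line_distance(p, a, b)
    photometric = abs(p_int - 0.5 * (a_int + b_int))
    return geo + lam * photometric
```

Correspondences that are geometrically plausible but photometrically inconsistent now incur a higher cost, which is what disambiguates directions along large planar structures.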
Citations: 9
Development of Seabed Walking Mechanism for Underwater Amphibious Robot
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144940
Taesik Kim, Seokyong Song, Son-cheol Yu
In this paper, we propose an underwater walking mechanism for an underwater amphibious robot that uses one-degree-of-freedom (DOF) actuators. For this walking mechanism, we developed a unique spring-hinge type paddle that enables the amphibious robot to walk on the seabed. We propose a simplified 2-D model of the robot and analyze the rough-terrain capability of the mechanism in terms of the paddle length, the hinge length, the distance to the obstacle, and the maximum sweep angle. We built an experimental robot to test the feasibility of the proposed walking mechanism and performed ground and water-tank experiments with it. As a result, we confirmed that the robot walked stably with the proposed mechanism.
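To show how the listed parameters interact, here is a deliberately crude 2-D estimate of the tallest obstacle the paddle tip could reach when swept to its maximum angle. This simplification is ours, not the paper's model, and it ignores the spring-hinge compliance entirely:

```python
import math

def max_step_height(paddle_len_m, hinge_len_m, max_sweep_deg):
    """Illustrative upper bound on reachable obstacle height: the tip of a
    rigid paddle-plus-hinge linkage swept to the maximum sweep angle.
    All parameter values passed in are hypothetical."""
    reach = paddle_len_m + hinge_len_m
    return reach * math.sin(math.radians(max_sweep_deg))
```

Shorter paddles or smaller sweep angles shrink this bound, which is the qualitative trade-off the rough-terrain analysis explores.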
Citations: 3
Attention-model Guided Image Enhancement for Robotic Vision Applications
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144966
Ming Yi, Wanxiang Li, A. Elibol, N. Chong
Optical data is one of the crucial information resources that robotic platforms use to sense and interact with their environment. Image quality is the main factor in the successful application of sophisticated methods such as object detection and recognition. In this paper, a method is proposed to improve image quality by enhancing lighting and denoising. The proposed method is based on a generative adversarial network (GAN) structure. It uses an attention model both to guide the enhancement process and to apply denoising simultaneously, thanks to a step that adds noise to the input of the discriminator network. Detailed experimental and comparative results on real datasets are presented to demonstrate the performance of the proposed method.
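The noise-injection step mentioned above — perturbing the discriminator's input so the generator is pushed to denoise — can be sketched in isolation. The noise level `sigma` and the flat-list image representation are our assumptions for illustration:

```python
import random

def noisy_discriminator_input(image, sigma=0.05, seed=None):
    """Add zero-mean Gaussian noise to an image (a flat list of pixel
    intensities in [0, 1]) before feeding it to the discriminator,
    clamping the result back into the valid range.
    sigma is a hypothetical noise standard deviation."""
    rng = random.Random(seed)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in image]
```

In a full training loop this would be applied per batch to both real and generated samples, so the discriminator cannot win by keying on residual noise alone.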
Citations: 3
Hands-Free: a robot augmented reality teleoperation system
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144841
Cristina Nuzzi, S. Ghidini, R. Pagani, S. Pasinetti, Gabriele Coffetti, G. Sansoni
In this paper the novel teleoperation method "Hands-Free" is presented. Hands-Free is a vision-based augmented reality system that allows users to teleoperate a robot end-effector with their hands in real time. The system leverages the OpenPose neural network to detect the human operator's hand in a given workspace, achieving an average inference time of 0.15 s. The position of the user's index finger is extracted from the image and converted into real-world coordinates to move the robot end-effector in a different workspace. The user's hand skeleton is visualized moving in real time in the actual robot workspace, allowing the user to teleoperate the robot intuitively regardless of the differences between the user workspace and the robot workspace. Since a set of calibration procedures is involved in converting the index position to the robot end-effector position, we designed three experiments to determine the different errors introduced by the conversion. A detailed explanation of the mathematical principles adopted in this work is provided in the paper. Finally, the proposed system was developed using ROS and is publicly available at the following GitHub repository: https://github.com/Krissy93/hands-free-project.
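The pixel-to-workspace conversion at the heart of such a system is, for a planar workspace, a projective map. A minimal sketch using a 3×3 homography applied in homogeneous coordinates — the matrix values below are a hypothetical calibration result, not the paper's:

```python
def pixel_to_robot(u, v, H):
    """Map an image pixel (u, v) to planar robot-workspace coordinates
    (x, y) via a 3x3 homography H, obtained beforehand by calibration."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # dehomogenize

# Hypothetical calibration: 1 pixel ~ 1 mm, no rotation or perspective skew.
H_example = [[0.001, 0.0, 0.0],
             [0.0, 0.001, 0.0],
             [0.0, 0.0, 1.0]]
```

The detected index-fingertip pixel from the hand skeleton would be pushed through this map each frame, and the result sent as the end-effector target.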
Citations: 9
Accurate On-line Extrinsic Calibration for a Multi-camera SLAM System
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144877
O. F. Ince, Jun-Sik Kim
Simultaneous localization and mapping (SLAM) systems play an important role in providing an accurate and comprehensive solution for situational awareness in unknown environments. To maximize situational awareness, a wider field of view is required, which can be achieved with an omnidirectional lens or multiple perspective cameras. However, calibrating such systems is sensitive and difficult. For this reason, we present a practical solution for a multi-camera SLAM system. The goal of this study is to obtain robust localization and mapping for a multi-camera setup without requiring pre-calibration of the camera system. To this end, we associate measurements from the cameras with their relative poses and propose an iterative optimization method that refines the map, the keyframe poses, and the relative poses between cameras simultaneously. We evaluated our method on a dataset consisting of three cameras with small overlapping regions, and on the KITTI odometry dataset in its stereo configuration. The experiments demonstrated that the proposed method provides not only a practical but also a robust SLAM solution for multi-camera systems.
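The reason the extrinsics can enter the optimization at all is that every camera observation depends on them through a simple pose composition: the camera's world pose is the keyframe (body) pose composed with the body-to-camera extrinsic. A sketch with plain 4×4 homogeneous transforms (the matrices are illustrative):

```python
def mat_mul(A, B):
    """Multiply two 4x4 homogeneous transform matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def camera_pose(T_world_body, T_body_cam):
    """World pose of one camera: keyframe pose composed with its extrinsic.
    In joint optimization, residuals of this camera's measurements are
    differentiated w.r.t. both factors, so both get refined."""
    return mat_mul(T_world_body, T_body_cam)
```

Because every keyframe reuses the same `T_body_cam`, observations from all keyframes jointly constrain the extrinsic, which is what makes on-line refinement possible even with small inter-camera overlap.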
Citations: 4
Contact States Estimation Algorithm Using Fuzzy Logic in Peg-in-hole Assembly*
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144946
Haeseong Lee, Jaeheung Park
Peg-in-hole assembly is regarded as one of the essential tasks in robotic assembly. To complete the task, it is necessary to estimate the Contact State (CS) of the peg relative to the hole and to control the motions of the peg in the assembly environment. In this paper, we propose an estimation algorithm using fuzzy logic to satisfy these requirements. First, we describe a peg-in-hole environment, which has holes of several sizes on the surface within a fine area, and classify the CS of the peg in this environment. Second, we explain and analyze the proposed algorithm and a motion control method. Using the proposed algorithm, we can estimate all the CS; after estimating the current CS, appropriate actions are commanded for the peg-in-hole assembly. To validate the proposed algorithm, we conducted an experiment using a 7-DOF torque-controlled manipulator and prefabricated furniture.
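Fuzzy CS estimation of this kind assigns each candidate state a membership degree from sensed quantities and picks the strongest. A minimal sketch using triangular membership functions over the axial contact force — the state names, the single input, and all breakpoints are hypothetical, not the paper's:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def estimate_contact_state(fz):
    """Fuzzy membership degrees for three illustrative contact states,
    computed from the axial force fz (N). Returns the winning state
    and the full membership dictionary."""
    memberships = {
        "no_contact":   tri(fz, -1.0, 0.0, 2.0),
        "edge_contact": tri(fz,  1.0, 4.0, 7.0),
        "full_contact": tri(fz,  6.0, 10.0, 14.0),
    }
    best = max(memberships, key=memberships.get)
    return best, memberships
```

A real estimator would fuzzify several inputs (forces, torques, displacements) and combine them with a rule base, but the classify-by-maximum-membership step is the same.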
Citations: 6
Fusion Drive: End-to-End Multi Modal Sensor Fusion for Guided Low-Cost Autonomous Vehicle
Pub Date : 2020-06-01 DOI: 10.1109/UR49135.2020.9144707
Ikhyun Kang, Reinis Cimurs, Jin Han Lee, I. Suh
In this paper, we present a supervised-learning-based mixed-input sensor-fusion neural network for autonomous navigation on a designed track, referred to as Fusion Drive. The proposed method combines RGB images and LiDAR laser-sensor data for guided navigation along the track and for avoiding both learned and previously unobserved obstacles in a low-cost embedded navigation system. The proposed network combines separate CNN-based sensor-processing branches into a fully combined network that learns throttle and steering-angle labels end-to-end, outputting navigation commands with behavior similar to the human demonstrations. Experiments performed with a validation dataset and in a real environment exhibit the desired behavior, and the recorded performance shows improvement over similar approaches.
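The fusion step of such a network — after each CNN branch has reduced its modality to a feature vector — is typically a concatenation followed by a learned head that regresses the control labels. A minimal sketch of that head with a single linear layer; the feature sizes, weights, and bias are hypothetical placeholders for learned parameters:

```python
def fuse_and_predict(img_feat, lidar_feat, W, b):
    """Late-fusion control head: concatenate the camera-branch and
    LiDAR-branch feature vectors, then apply one linear layer producing
    [steering_angle, throttle]. W is a list of weight rows, b the biases."""
    x = img_feat + lidar_feat  # list concatenation = feature fusion
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]
```

Training end-to-end means the gradients from the steering/throttle loss flow through this head back into both CNN branches, so each modality learns features useful for the shared driving objective.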
Citations: 4