Virtual Reality for Offline Programming of Robotic Applications with Online Teaching Methods
Gabriele Bolano, A. Roennau, R. Dillmann, Albert Groz
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144806
Robotic systems are complex and commonly require experts to program the motions and interactions between all of their components. Operators with programming skills are usually needed to make a robot perform a new task, or even to apply small changes to its current behavior. For this reason, many tools have been developed to ease the programming of robotic systems. Online programming methods rely on the robot itself, moving it to the desired configurations. Simulation-based methods, on the other hand, enable offline teaching of the needed program without involving the actual hardware setup. Virtual Reality (VR) allows the user to program a robot safely and effortlessly, without the need to move the real manipulator. However, online programming methods are still needed for on-site adjustments, and a common interface between the two approaches is usually not available. In this work we propose a VR-based framework for programming robotic tasks. The deployed system architecture allows the integration of the defined programs into existing tools for online teaching and execution on the real hardware. The proposed virtual environment enables intuitive definition of the entire task workflow without involving the real setup. Bilateral communication between this component and the robotic hardware allows the user to introduce changes in the virtual environment as well as in the real system. In this way, both can be kept up to date with the latest changes and used interchangeably, exploiting the advantages of both methods in a flexible manner.
{"title":"Virtual Reality for Offline Programming of Robotic Applications with Online Teaching Methods","authors":"Gabriele Bolano, A. Roennau, R. Dillmann, Albert Groz","doi":"10.1109/UR49135.2020.9144806","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144806","url":null,"abstract":"Robotic systems are complex and commonly require experts to program the motions and interactions between all the different components. Operators with programming skills are usually needed to make the robot perform a new task or even to apply small changes in its current behavior. For this reason many tools have been developed to ease the programming of robotic systems. Online programming methods rely on the use of the robot in order to move it to the desired configurations. On the other hand, simulation-based methods enable the offline teaching of the needed program without involving the actual hardware setup. Virtual Reality (VR) allows the user to program a robot safely and effortlessly, without the need to move the real manipulator. However, online programming methods are needed for on-site adjustments, but a common interface between these two methods is usually not available. In this work we propose a VR-based framework for programming robotic tasks. The system architecture deployed allows the integration of the defined programs into existing tools for online teaching and execution on the real hardware. The proposed virtual environment enables the intuitive definition of the entire task workflow, without the need to involve the real setup. The bilateral communication between this component and the robotic hardware allows the user to introduce changes in the virtual environment, as well into the real system. In this way, they can both be updated with the latest changes and used in a interchangeable way, exploiting the advantages of both methods in a flexible manner.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129657845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A 6-DOF hybrid actuation system for a medical robot under MRI environment
M. Farooq, S. Ko
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144986
Medical robotic systems are being widely developed for efficient operation. Robotics has offered many viable solutions for applications ranging from marine and industrial to domestic and medical. One of the key challenges is the design and control of actuation systems. This paper presents a 6-DOF actuation system for a medical robot deployed in a magnetic resonance imaging (MRI) system. It is a hybrid system designed to actuate two distinct mechanisms: concentric-tube and tendon-actuated robots. The actuation system is designed to fit inside the bore of a commercially available Siemens® 3T MR scanner and is set to follow the predefined anatomical constraints. As a preliminary analysis, the stroke of the developed actuation system was measured to analyze the workspace. Further experimentation will be performed in the future to validate the effectiveness of the presented system.
{"title":"A 6-DOF hybrid actuation system for a medical robot under MRI environment","authors":"M. Farooq, S. Ko","doi":"10.1109/UR49135.2020.9144986","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144986","url":null,"abstract":"Medical robotic systems are widely developing for the efficient operation. Robotics has offered many viable solutions for applications ranging from marine, industry, domestic and medical. One of the key challenges is the design and control of actuation systems. This paper presents a 6-DOF actuation system for a medical robot deployed in magnetic resonance imaging system. It is a hybrid system designed to actuate two distinct mechanisms; concentric tube and tendon actuated robots. The actuation system is designed to fit inside the bore of commercially available Siemens® 3T MR scanner and set to follow the predefined anatomical constraints. As a preliminary analysis, the stroke of the developed actuation system was measured to analyze the workspace. Further experimentation will be performed in the future to validate the effectiveness of the presented system.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128591513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Reliable Low-Cost Foot Contact Sensor for Legged Robots
Hyunwoo Nam, Qing Xu, D. Hong
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144878
In an unstructured environment, fast-walking legged robots can easily damage themselves or injure people nearby due to slips or missed desired contacts. It is therefore important for legged robots to sense ground contact. This paper presents a low-cost, lightweight, simple, and robust foot contact sensor designed for legged robots with point feet. First, the mechanical design of the foot is proposed. The foot detects contact as it presses against the ground through the deformation of a layer of polyurethane rubber, which allows the compressive displacement of the contact foot pad to trigger the enclosed sensor. The sensor is a binary contact sensor using pushbutton switches. The total weight of the foot contact sensor is 82 g, and the cost of manufacturing one is less than 10 USD. Next, the effectiveness of the developed foot is confirmed through several experiments. The angle between the center axis of the foot and the ground is referred to as the contact angle in this paper. The foot contact sensor can reliably detect ground contact at contact angles between 30° and 150°. The prototype sensor can also withstand contact forces of over 80 N for more than 10,000 steps.
{"title":"A Reliable Low-Cost Foot Contact Sensor for Legged Robots","authors":"Hyunwoo Nam, Qing Xu, D. Hong","doi":"10.1109/UR49135.2020.9144878","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144878","url":null,"abstract":"In an unstructured environment, fast walking legged robots can easily damage itself or the crowd due to slips or missing desired contacts. Therefore, it is important to sense ground contacts for legged robots. This paper presents a low-cost, lightweight, simple and robust foot contact sensor designed for legged robots with point feet. First, the mechanical design of the foot is proposed. The foot detects contact as it presses against the ground through the deformation of a layer of polyurethane rubber, which allows the compressive displacement of the contact foot pad to trigger the enclosed sensor. This sensor is a binary contact sensor using pushbutton switches. The total weight of the foot contact sensor is 82g, and the cost of manufacturing one is less than $10 USD. Next, the effectiveness of the developed foot is confirmed through several experiments. The angle between the center axis of the foot and the ground is referred to as the contact angle in this paper. The foot contact sensor can reliably detect ground contact over contact angles between 30° to 150°. This prototype sensor can also withstand contact forces of over 80N for more than 10,000 steps.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128055114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
I-LOAM: Intensity Enhanced LiDAR Odometry and Mapping
Yeong-Sang Park, Hyesu Jang, Ayoung Kim
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144987
In this paper, we introduce an extension to the existing LiDAR Odometry and Mapping (LOAM) [1] that additionally considers LiDAR intensity. In an urban environment, planar structures from buildings and roads often introduce ambiguity in a certain direction. Incorporating the intensity value into the cost function prevents divergence caused by this structural ambiguity, thereby yielding more accurate odometry and mapping. Specifically, we have updated the edge and plane point correspondence search to include intensity. This simple but effective strategy shows meaningful improvement over the existing LOAM. The proposed method is validated using the KITTI dataset.
{"title":"I-LOAM: Intensity Enhanced LiDAR Odometry and Mapping","authors":"Yeong-Sang Park, Hyesu Jang, Ayoung Kim","doi":"10.1109/UR49135.2020.9144987","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144987","url":null,"abstract":"In this paper, we introduce an extension to the existing LiDAR Odometry and Mapping (LOAM) [1] by additionally considering LiDAR intensity. In an urban environment, planar structures from buildings and roads often introduce ambiguity in a certain direction. Incorporation of the intensity value to the cost function prevents divergence occurence from this structural ambiguity, thereby yielding better odometry and mapping in terms of accuracy. Specifically, we have updated the edge and plane point correspondence search to include intensity. This simple but effective strategy shows meaningful improvement over the existing LOAM. The proposed method is validated using the KITTI dataset.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"139 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131154366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of Seabed Walking Mechanism for Underwater Amphibious Robot
Taesik Kim, Seokyong Song, Son-cheol Yu
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144940
In this paper, we propose an underwater walking mechanism for an amphibious robot that uses one-degree-of-freedom (DOF) actuators. For this walking mechanism, we developed a unique spring-hinge type paddle that enables the amphibious robot to walk on the seabed. We propose a simplified 2-D model of the robot and analyze the rough-terrain capability of the mechanism in terms of the paddle length, the hinge length, the distance to the obstacle, and the maximum sweep angle. We developed an experimental robot for a feasibility test of the proposed walking mechanism and performed ground and water-tank experiments with it. As a result, we confirmed that the robot walked stably with the proposed mechanism.
{"title":"Development of Seabed Walking Mechanism for Underwater Amphibious Robot","authors":"Taesik Kim, Seokyong Song, Son-cheol Yu","doi":"10.1109/UR49135.2020.9144940","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144940","url":null,"abstract":"In this paper, we proposed an underwater walking mechanism for the underwater amphibious robot that uses one degree of freedom (DOF) actuators. For this walking mechanism, we developed a unique spring-hinge type paddle that enables the amphibious robot to walk on the seabed. We proposed a simplified 2-D model of the robot. Then, we analyzed rough-terrain capability of this mechanism by using following terms: the paddle-length, the hinge-length, the distance to the obstacle, and the maximum sweep angle. We developed an experimental robot for a feasibility test of the effectiveness of proposed walking mechanism, and we performed ground and water tank experiments with this robot. As a result, we confirmed that the robot walked stably with the proposed mechanism.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"28 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134066726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attention-model Guided Image Enhancement for Robotic Vision Applications
Ming Yi, Wanxiang Li, A. Elibol, N. Chong
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144966
Optical data is one of the crucial information resources for robotic platforms to sense and interact with the environment in which they are employed. The quality of the acquired images is the main factor in the successful application of sophisticated methods (e.g., object detection and recognition). In this paper, a method is proposed to improve image quality through lighting enhancement and denoising. The proposed method is based on a generative adversarial network (GAN) structure. It makes use of an attention model both to guide the enhancement process and to apply denoising simultaneously, thanks to a step that adds noise to the input of the discriminator network. Detailed experimental and comparative results using real datasets are presented in order to underline the performance of the proposed method.
{"title":"Attention-model Guided Image Enhancement for Robotic Vision Applications","authors":"Ming Yi, Wanxiang Li, A. Elibol, N. Chong","doi":"10.1109/UR49135.2020.9144966","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144966","url":null,"abstract":"Optical data is one of the crucial information resources for robotic platforms to sense and interact with the environment being employed. Obtained image quality is the main factor of having a successful application of sophisticated methods (e.g., object detection and recognition). In this paper, a method is proposed to improve the image quality by enhancing the lighting and denoising. The proposed method is based on a generative adversarial network (GAN) structure. It makes use of the attention model both to guide the enhancement process and to apply denoising simultaneously thanks to the step of adding noise on the input of discriminator networks. Detailed experimental and comparative results using real datasets were presented in order to underline the performance of the proposed method.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133088667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hands-Free: a robot augmented reality teleoperation system
Cristina Nuzzi, S. Ghidini, R. Pagani, S. Pasinetti, Gabriele Coffetti, G. Sansoni
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144841
In this paper the novel teleoperation method "Hands-Free" is presented. Hands-Free is a vision-based augmented reality system that allows users to teleoperate a robot end-effector with their hands in real time. The system leverages the OpenPose neural network to detect the human operator's hand in a given workspace, achieving an average inference time of 0.15 s. The position of the user's index finger is extracted from the image and converted into real-world coordinates to move the robot end-effector in a different workspace. The user's hand skeleton is visualized in real time moving in the actual robot workspace, allowing the user to teleoperate the robot intuitively, regardless of the differences between the user workspace and the robot workspace. Since a set of calibration procedures is involved in converting the index position to the robot end-effector position, we designed three experiments to determine the different errors introduced by the conversion. A detailed explanation of the mathematical principles adopted in this work is provided in the paper. Finally, the proposed system has been developed using ROS and is publicly available at the following GitHub repository: https://github.com/Krissy93/hands-free-project.
{"title":"Hands-Free: a robot augmented reality teleoperation system","authors":"Cristina Nuzzi, S. Ghidini, R. Pagani, S. Pasinetti, Gabriele Coffetti, G. Sansoni","doi":"10.1109/UR49135.2020.9144841","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144841","url":null,"abstract":"In this paper the novel teleoperation method \"Hands-Free\" is presented. Hands-Free is a vision-based augmented reality system that allows users to teleoperate a robot end-effector with their hands in real time. The system leverages OpenPose neural network to detect the human operator hand in a given workspace, achieving an average inference time of 0.15 s. The user index position is extracted from the image and converted in real world coordinates to move the robot end-effector in a different workspace.The user hand skeleton is visualized in real-time moving in the actual robot workspace, allowing the user to teleoperate the robot intuitively, regardless of the differences between the user workspace and the robot workspace.Since a set of calibration procedures is involved to convert the index position to the robot end-effector position, we designed three experiments to determine the different errors introduced by conversion. A detailed explanation of the mathematical principles adopted in this work is provided in the paper.Finally, the proposed system has been developed using ROS and is publicly available at the following GitHub repository: https://github.com/Krissy93/hands-free-project.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"482 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115950593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate On-line Extrinsic Calibration for a Multi-camera SLAM System
O. F. Ince, Jun-Sik Kim
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144877
Simultaneous localization and mapping (SLAM) systems play an important role in providing an accurate and comprehensive solution for situational awareness in unknown environments. To maximize situational awareness, a wider field of view is required. A wide field of view can be achieved with an omnidirectional lens or with multiple perspective cameras; however, calibration of such systems is sensitive and difficult. For this reason, we present a practical solution for a multi-camera SLAM system. The goal of this study is to obtain robust localization and mapping for a multi-camera setup without requiring pre-calibration of the camera system. To this end, we associate measurements from the cameras with their relative poses and propose an iterative optimization method that simultaneously refines the map, the keyframe poses, and the relative poses between cameras. We evaluated our method on a dataset consisting of three cameras with small overlapping regions, and on the KITTI odometry dataset, which uses a stereo configuration. The experiments demonstrate that the proposed method provides not only a practical but also a robust SLAM solution for multi-camera systems.
{"title":"Accurate On-line Extrinsic Calibration for a Multi-camera SLAM System","authors":"O. F. Ince, Jun-Sik Kim","doi":"10.1109/UR49135.2020.9144877","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144877","url":null,"abstract":"Simultaneous localization and mapping (SLAM) system has an important role in providing an accurate and comprehensive solution for situational awareness in unknown environments. In order to maximize the situational awareness, the wider field of view is required. It is possible to achieve a wide field of view with an omnidirectional lense or multiple perspective cameras. However, calibration of such systems is sensitive and difficult. For this reason, we present a practical solution to a multi-camera SLAM system. The goal of this study is to obtain robust localization and mapping for multi-camera setup without requiring pre-calibration of the camera system calibration. With this goal, we associate measurements from cameras with their relative poses and propose an iterative optimization method to refine the map, keyframe poses and relative poses between cameras simultaneously. We evaluated our method on a dataset which consists of three cameras with small overlapping regions, and on the KITTI odometry dataset which is set in stereo configuration. The experiments demonstrated that the proposed method provides not only a practical but also robust SLAM solution for multi-camera systems.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132568387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contact States Estimation Algorithm Using Fuzzy Logic in Peg-in-hole Assembly*
Haeseong Lee, Jaeheung Park
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144946
Peg-in-hole assembly is regarded as one of the essential tasks in robotic assembly. To complete the task, it is necessary to estimate the Contact State (CS) of the peg relative to the hole and to control the motions of the peg in the assembly environment. In this paper, we propose an estimation algorithm using fuzzy logic to satisfy these requirements. First, we describe a peg-in-hole environment that has holes of several sizes on a surface with a fine area, and we classify the CSs of the peg in this environment. Second, we explain and analyze the proposed algorithm and a motion control method. Using the proposed algorithm, we can estimate all of the CSs. After estimating the current CS, appropriate actions are commanded for the peg-in-hole assembly. To validate the proposed algorithm, we conducted an experiment using a 7-DOF torque-controlled manipulator and prefabricated furniture.
{"title":"Contact States Estimation Algorithm Using Fuzzy Logic in Peg-in-hole Assembly*","authors":"Haeseong Lee, Jaeheung Park","doi":"10.1109/UR49135.2020.9144946","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144946","url":null,"abstract":"Peg-in-hole assembly is regarded as one of the essential tasks in the robotic assembly. To complete the task, it is required to estimate the Contact State(CS) of a peg relative to a hole and control motions of the peg in the assembly environment. In this paper, we propose the estimation algorithm using fuzzy logic for the satisfaction of these requirements. Firstly, we describe a peg-in-hole environment, which has holes with several sizes on the surface with a fine area. Afterward, we classify the CS of the peg in the environment. Secondly, we explain and analyze the proposed algorithm and a motion control method. Using the proposed algorithm, we can estimate all the CS. After estimating the current CS, proper actions are commanded for the peg-in-hole assembly. To validate the proposed algorithm, we conducted an experiment using a 7 DOF torque-controlled manipulator and prefabricated furniture.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"54 43","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120839669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fusion Drive: End-to-End Multi Modal Sensor Fusion for Guided Low-Cost Autonomous Vehicle
Ikhyun Kang, Reinis Cimurs, Jin Han Lee, I. Suh
2020 17th International Conference on Ubiquitous Robots (UR) | Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144707
In this paper, we present a supervised-learning-based, mixed-input sensor fusion neural network for autonomous navigation on a designed track, referred to as Fusion Drive. The proposed method combines RGB image and LiDAR laser sensor data for guided navigation along the track and for avoidance of learned as well as previously unobserved obstacles, targeting a low-cost embedded navigation system. The proposed network combines separate CNN-based sensor processing into a fully combined network that learns throttle and steering-angle labels end-to-end. The network outputs navigational commands with behavior similar to that learned from human demonstrations. Experiments performed with a validation dataset and in a real environment exhibit the desired behavior, and the recorded performance shows improvement over similar approaches.
{"title":"Fusion Drive: End-to-End Multi Modal Sensor Fusion for Guided Low-Cost Autonomous Vehicle","authors":"Ikhyun Kang, Reinis Cimurs, Jin Han Lee, I. Suh","doi":"10.1109/UR49135.2020.9144707","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144707","url":null,"abstract":"In this paper, we present a supervised learning-based mixed-input sensor fusion neural network for autonomous navigation on a designed track referred to as Fusion Drive. The proposed method combines RGB image and LiDAR laser sensor data for guided navigation along the track and avoidance of learned as well as previously unobserved obstacles for a low-cost embedded navigation system. The proposed network combines separate CNN-based sensor processing into a fully combined network that learns throttle and steering angle labels end-to-end. The proposed network outputs navigational commands with similar learned behavior from the human demonstrations. Performed experiments with validation data-set and in real environment exhibit desired behavior. Recorded performance shows improvement over similar approaches.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125476633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}