Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144871
Hu Huang, Aibin Zhu, Jiyuan Song, Yao Tu, Xiaojun Shi, Zhifu Guo
Flexible hand exoskeleton robots are increasingly used in medical rehabilitation because they conform well to the hand, can deform continuously, and can apply force along the motion trajectory. This paper proposes a cable-actuated flexible hand exoskeleton. First, a motion model of a single finger is established, and a hand exoskeleton for rehabilitation is designed and built on it. The exoskeleton is remotely actuated by a motor, and force is transmitted through cables to drive the fingers bidirectionally. In addition, a pulley-block tensioning mechanism is designed to take up wire slack during transmission. Experimental results show that the exoskeleton can bend the three joints of the index finger (DIP, PIP, and MCP) to 57, 35, and 31 degrees, respectively, and can still flex the fingers under a load of 2.5 N per finger. These results verify that the exoskeleton is a feasible solution that meets the requirements of hand rehabilitation and can help patients recover and improve finger function in daily activities.
Title: Characterization and Evaluation of A Cable-Actuated Flexible Hand Exoskeleton
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
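As an illustrative aside, the reported joint flexion angles can be turned into a fingertip position with simple planar forward kinematics. The sketch below is not from the paper: the serial planar-chain model and the phalanx lengths are assumptions for illustration only.

```python
import math

def fingertip_position(joint_angles_deg, link_lengths):
    """Planar forward kinematics of a serial finger chain.

    joint_angles_deg: flexion at each joint from base to tip (MCP, PIP, DIP).
    link_lengths: phalanx lengths in the same order (arbitrary unit).
    """
    x, y, theta = 0.0, 0.0, 0.0
    for angle_deg, length in zip(joint_angles_deg, link_lengths):
        theta += math.radians(angle_deg)   # flexion angles accumulate along the chain
        x += length * math.cos(theta)
        y -= length * math.sin(theta)      # flexion curls the finger toward the palm
    return x, y

# Flexion angles reported in the abstract, ordered base to tip
# (MCP, PIP, DIP), with hypothetical phalanx lengths in millimetres:
tip_x, tip_y = fingertip_position((31, 35, 57), (45.0, 25.0, 20.0))
```

With all angles at zero the finger is straight, and the fingertip lies at the sum of the link lengths along the x axis.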
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144889
M. Faroni, R. Pagani, G. Legnani
Recent developments in industrial robotics use real-time trajectory modification to improve throughput and safety in automatic processes, and online trajectory scaling is often used for this purpose. In this paper, we propose a feedback trajectory scaling approach that recovers from the delay introduced by the speed modulation and improves path-following performance thanks to an additional inner control loop. Simulation and experimental results on an industrial 6-degree-of-freedom robot show the effectiveness of the proposed approach compared with standard algorithms.
Title: Real-time trajectory scaling for robot manipulators
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
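The core idea of feedback trajectory scaling, slowing the time parameterization of the path when tracking error grows and raising it again to recover the accumulated delay, can be sketched roughly as follows. This is a minimal illustration, not the authors' algorithm; the scaling law, gains and tolerances are hypothetical.

```python
def scaled_reference(trajectory, robot_pos, s, dt,
                     v_nominal=1.0, err_tol=0.05, k_recover=0.5):
    """One step of an online trajectory-scaling loop (illustrative sketch).

    trajectory: callable mapping a path parameter s in [0, 1] to the
        desired position along the path.
    robot_pos: measured position fed back from the robot.
    Returns (new path parameter, reference position to command).

    The path parameter normally advances at v_nominal. When the tracking
    error grows, the advance rate is scaled down so the reference waits
    for the robot; when the error is small, the rate is raised slightly
    to recover the delay accumulated while slowed down.
    """
    err = abs(trajectory(s) - robot_pos)
    if err > err_tol:
        rate = v_nominal * err_tol / err       # slow down under large error
    else:
        rate = v_nominal * (1.0 + k_recover)   # catch up when tracking well
    s = min(1.0, s + rate * dt)
    return s, trajectory(s)
```

Calling this once per control cycle yields a reference that never runs far ahead of the measured robot position.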
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144912
Sumin Hu, Seungwon Song, H. Myung
This paper proposes an area-wise method for building aesthetically pleasing RGB-D data by projecting camera images onto LiDAR point clouds corrected by Graph SLAM. In particular, the focus is on projecting images onto corresponding flat surfaces, extracted as plane equations by RANSAC. The newly created data offers a camera-like view even in 3D thanks to its dense yet smooth planar point clouds. However, since the method is limited to planar surfaces, 3D points that cannot be separated into planes still suffer from the sparse, rough quality of raw LiDAR point clouds.
Title: Image Projection onto Flat LiDAR Point Cloud Surfaces to Create Dense and Smooth 3D Color Maps
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
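The RANSAC plane extraction this method relies on can be sketched in a few lines of pure Python. This is generic textbook RANSAC, not the authors' implementation; the iteration count and inlier tolerance are hypothetical.

```python
import random

def fit_plane_ransac(points, n_iters=200, inlier_tol=0.02, seed=0):
    """Fit a plane a*x + b*y + c*z + d = 0 to 3D points with RANSAC.

    Repeatedly samples 3 points, forms the plane through them, and keeps
    the plane supported by the most inliers (points within inlier_tol of
    the plane).  Returns ((a, b, c, d), inlier_indices) with (a, b, c)
    unit-length, or (None, []) if no valid sample was found.
    """
    rng = random.Random(seed)
    best_plane, best_inliers = None, []
    for _ in range(n_iters):
        p1, p2, p3 = rng.sample(points, 3)
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],        # normal = u x v
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-12:
            continue                            # degenerate (collinear) sample
        a, b, c = n[0] / norm, n[1] / norm, n[2] / norm
        d = -(a * p1[0] + b * p1[1] + c * p1[2])
        inliers = [i for i, p in enumerate(points)
                   if abs(a * p[0] + b * p[1] + c * p[2] + d) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_plane, best_inliers = (a, b, c, d), inliers
    return best_plane, best_inliers
```

In the paper's pipeline, each plane found this way would become a surface onto which the camera image is projected.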
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144780
Jaegoo Choy, Kyungjae Lee, Songhwai Oh
For deep reinforcement learning (RL) algorithms to achieve high performance in complex continuous control tasks, they must exploit the goal while also exploring the environment. In this paper, we introduce a novel off-policy actor-critic reinforcement learning algorithm with a sparse Tsallis entropy regularizer, which maximizes the expected return while maximizing the sparse Tsallis entropy of the policy function. Maximizing the sparse Tsallis entropy lets the actor explore a large action and state space efficiently, which helps to find the optimal action at each state. We derive the iteration update rules and modify a policy iteration rule for the off-policy setting. In experiments on continuous reinforcement learning problems, the proposed method outperforms previous on-policy and off-policy RL algorithms in both convergence speed and final performance.
Title: Sparse Actor-Critic: Sparse Tsallis Entropy Regularized Reinforcement Learning in a Continuous Action Space
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
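For intuition, the sparse policies induced by a sparse Tsallis entropy (q = 2) regularizer correspond to the sparsemax projection rather than softmax. The sketch below shows standard sparsemax and the q = 2 sparse Tsallis entropy; it illustrates the regularizer only, not the full actor-critic algorithm from the paper.

```python
def sparsemax(scores):
    """Sparsemax: Euclidean projection of a score vector onto the
    probability simplex.

    Unlike softmax, it can assign exactly zero probability to poor
    actions, which is the kind of sparse policy the sparse Tsallis
    entropy regularizer induces.
    """
    z = sorted(scores, reverse=True)
    cumsum, tau = 0.0, 0.0
    for j, zj in enumerate(z, start=1):
        cumsum += zj
        t = (cumsum - 1.0) / j
        if zj > t:          # z_j is still inside the support
            tau = t
    return [max(s - tau, 0.0) for s in scores]

def sparse_tsallis_entropy(p):
    """Sparse Tsallis entropy for q = 2: 0.5 * (1 - sum_i p_i**2)."""
    return 0.5 * (1.0 - sum(pi * pi for pi in p))
```

Note how a sufficiently low score receives exactly zero probability, while softmax would always leave it strictly positive.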
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144921
Kenichi Ishida, T. Takubo, Daiki Kobayashi, A. Ueno
A tripod gait using a buffer area around the leg workspace is proposed to improve the mobility of a hexapod robot. The proposed method provides a buffer area for a support leg located near the border of its workspace, so that walking remains continuous when the walking direction suddenly changes. Such direction changes occur frequently in teleoperation as the operator adapts to the remote environment and the required tasks. With a fixed workspace, the support-leg group must stop at the workspace border until the swing-leg group reaches a new landing position corresponding to the commanded direction. With the buffer area, the support leg can keep moving until the swing leg lands at the new target position for the direction change. In this paper, the buffer area is defined around the workspace, and trajectory generation methods for the direction change of the support and swing legs using the buffer area are implemented. Smooth direction changes with the proposed method are demonstrated using the actual robot parameters.
Title: Tripod gait using buffer area around leg workspace for flexible direction change
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
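A minimal way to picture the buffer area is as an annulus around the nominal leg workspace, inside which a support leg may keep moving while the swing legs reach their new footholds after a direction change. The disc-shaped workspace model and the radii below are illustrative assumptions, not the paper's geometry.

```python
def leg_region(foot_xy, center_xy, r_work, r_buffer):
    """Classify a foot position for a buffer-area gait sketch.

    The leg workspace is modeled as a disc of radius r_work around the
    leg's neutral point, surrounded by a buffer annulus of width
    r_buffer.  A support leg in the buffer may continue its motion
    instead of stopping at the workspace border.
    """
    dx = foot_xy[0] - center_xy[0]
    dy = foot_xy[1] - center_xy[1]
    d = (dx * dx + dy * dy) ** 0.5
    if d <= r_work:
        return "workspace"
    if d <= r_work + r_buffer:
        return "buffer"
    return "outside"
```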
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144804
Jingwen Zhang, Junjie Shen, D. Hong
With a unique kinematic arrangement, a new type of quadruped robot with reduced degrees of freedom (DoF) requires only minimal-torque actuators to achieve high-payload locomotion. This paper focuses on the kinematic analysis and design optimization of robots of this type. To plan and control changes of posture, a strategy for finding feasible solutions of full-body inverse kinematics under additional kinematic constraints is introduced. A design method based on nonlinear programming (NLP) is then presented to optimize the link parameters with guarantees over a series of successive steps. The workspace is also investigated in preparation for further dynamic motion planning. We verify the feasibility of the proposed methods in software simulations and hardware implementations, e.g., omni-directional walking and in-situ rotation.
Title: Kinematic Analysis and Design Optimization for a Reduced-DoF Quadruped Robot with Minimal Torque Requirements
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
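The paper's constrained full-body inverse kinematics is not spelled out in the abstract. As a generic building block of such formulations, the closed-form inverse kinematics for a planar 2-link leg looks like this; the planar model, the elbow-down branch, and the link lengths are assumptions for illustration only.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics of a planar 2-link leg (elbow-down).

    Returns joint angles (q1, q2) that place the foot at (x, y), or
    None if the target lies outside the reachable annulus.
    """
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None                                    # target unreachable
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2
```

Solutions can be verified by substituting back into the forward kinematics x = l1*cos(q1) + l2*cos(q1 + q2), y = l1*sin(q1) + l2*sin(q1 + q2).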
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144890
Min-Woo Na, Jae-Bok Song
In 3D measurement inspection systems, precise registration between measured point clouds is required to obtain high-quality results. It is critical that the measurements overlap properly and that the overall shape is captured without blank areas: if the inspection system does not reflect the shape of the object, unmeasured areas may remain, causing registration to fail or deteriorate. To solve this problem, a robotic path-planning method that measures all areas of complex-shaped objects is proposed. First, segmentation-based view planning extracts viewpoints that properly reflect the object shape. Occlusions that may occur at the extracted viewpoints are then prevented, and path planning makes each viewpoint reachable by a measurement system comprising a robot and a rotary table. Finally, it is shown that a complex-shaped object can be measured without occlusions using the proposed method.
Title: Robotic Path Planning for Inspection of Complex-Shaped Objects
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144806
Gabriele Bolano, A. Roennau, R. Dillmann, Albert Groz
Robotic systems are complex and commonly require experts to program the motions and interactions between all of their components. Operators with programming skills are usually needed to make a robot perform a new task, or even to apply small changes to its current behavior. For this reason, many tools have been developed to ease the programming of robotic systems. Online programming methods rely on using the robot itself to move it to the desired configurations; simulation-based methods, on the other hand, enable offline teaching of the needed program without involving the actual hardware setup. Virtual Reality (VR) allows the user to program a robot safely and effortlessly, without moving the real manipulator. However, online programming methods are still needed for on-site adjustments, and a common interface between the two approaches is usually not available. In this work we propose a VR-based framework for programming robotic tasks. The deployed system architecture allows the defined programs to be integrated into existing tools for online teaching and execution on the real hardware. The proposed virtual environment enables intuitive definition of the entire task workflow without involving the real setup, and the bilateral communication between this component and the robotic hardware lets the user introduce changes in the virtual environment as well as in the real system.
In this way, both can be kept up to date with the latest changes and used interchangeably, exploiting the advantages of both methods in a flexible manner.
Title: Virtual Reality for Offline Programming of Robotic Applications with Online Teaching Methods
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
Visual perception is a fundamental capability that intelligent mobile robots need to interact properly and safely with humans in the real world. Recently, revolutionary advances in deep learning have led to incredible breakthroughs in vision technology. However, research integrating diverse visual perception methods into robotic systems is still in its infancy and lacks validation in real-world scenarios. In this paper, we present a visual perception framework for an intelligent mobile robot. Built on the Robot Operating System middleware, our framework integrates a broad set of advanced algorithms capable of recognising people, objects and human poses, as well as describing observed scenes. The performance and acceptability of the proposed framework are evaluated in several challenge scenarios of international robotics competitions using two mobile service robots.
Title: Visual Perception Framework for an Intelligent Mobile Robot
Authors: Chung-yeon Lee, Hyun-Dong Lee, Injune Hwang, Byoung-Tak Zhang
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144932
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144981
Yasuyuki Fujii, Kazuki Harada, H. Yamazoe, Joo-Ho Lee
Marine and lake monitoring applications have received a lot of attention for monitoring and studying changes in water environments. We are developing a novel sensing device that moves to an arbitrary position, or keeps a fixed position, on water for long-term environmental monitoring; its characteristics are low power, low cost, omni-directional movement and portability. In this paper, we present a long-term surface monitoring system for the robot, a prototype of the sensing device and a control system. The goals of the proposed device are to move in any arbitrary direction or to maintain its position autonomously on an ocean or lake. We executed multiple experiments which not only confirmed the feasibility of the concept but also identified some issues with the control system.
Title: Development and performance experiments in Lake Biwa of a small sensing device keeping fixed position on water
Published in: 2020 17th International Conference on Ubiquitous Robots (UR)
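A station-keeping behavior of the kind described can be sketched as a saturated proportional controller on the 2D position error. This is an illustrative sketch only: the gain and thrust limit below are hypothetical, and the actual device's control system is not described at this level of detail in the abstract.

```python
def station_keeping_thrust(pos, target, k_p=0.8, max_thrust=1.0):
    """Proportional station-keeping command for an omni-directional
    surface device (illustrative sketch; gains are hypothetical).

    Returns a 2D thrust vector pointing from the current position
    toward the target, saturated at the actuator limit.
    """
    ex, ey = target[0] - pos[0], target[1] - pos[1]
    tx, ty = k_p * ex, k_p * ey
    mag = (tx * tx + ty * ty) ** 0.5
    if mag > max_thrust:                    # saturate at the thruster limit
        tx, ty = tx * max_thrust / mag, ty * max_thrust / mag
    return tx, ty
```

Called in a loop against drift from wind and waves, this drives the device back toward its commanded holding point.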