Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144924
Venue: 2020 17th International Conference on Ubiquitous Robots (UR)
Title: Woody: Low-Cost, Open-Source Humanoid Torso Robot
Authors: D. Hayosh, Xiao Liu, Kiju Lee
Abstract: This paper presents a humanoid torso robot named Woody. It has two arms, each with five degrees of freedom (DoF), and a 2-DoF neck supporting a head with two movable eyebrows. Woody's hardware is made primarily from laser-cut plywood. Two cameras, a microphone, a speaker, and a microprocessor board are embedded in the robot. The processor board also contains a wireless communication module for remote control and access to networked or cloud-based resources. The interactive functions include face tracking, facial emotion recognition, and several pre-programmed default gestures. Woody's graphical user interface (GUI) provides users with instructions on hardware construction and initial setup. Through this GUI, the user can also program the robot and record new gestures via its motion-recording function. Woody is developed as an open-source hardware platform that can also utilize open-source software for educational purposes.
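The motion-recording workflow the abstract describes can be sketched as a timestamped record-and-replay loop. This is an illustrative stand-in, not Woody's actual GUI API; the class, method names, and joint names are hypothetical:

```python
class GestureRecorder:
    """Record timestamped joint-angle frames and replay them later."""

    def __init__(self):
        self.frames = []  # list of (timestamp_s, {joint_name: angle_deg})

    def record(self, t, joint_angles):
        """Store one frame; joint names are whatever the robot exposes."""
        self.frames.append((t, dict(joint_angles)))

    def replay_schedule(self, speed=1.0):
        """Yield (delay_s, joint_angles) pairs honoring the recorded timing,
        optionally sped up or slowed down by `speed`."""
        prev_t = None
        for t, angles in self.frames:
            delay = 0.0 if prev_t is None else (t - prev_t) / speed
            prev_t = t
            yield delay, angles
```

A caller would sleep for each yielded delay and then send the frame to the servos, which keeps the playback logic testable without real hardware.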
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144931
Title: Cooperative Multi-Robot Control for Monitoring an Expanding Flood Area
Authors: Yang Bai, Koki Asami, M. Svinin, E. Magid
Abstract: In this paper, a control strategy is developed for tracking the propagation of an expanding flood zone using a group of unmanned aerial vehicles (UAVs). The strategy consists of two stages: a caging stage and a covering stage. In the caging stage, a group of UAVs, referred to as boundary drones, is distributed evenly along the boundary of the flood zone, tracking its propagation. In the covering stage, another group of UAVs, referred to as inner drones, is allocated within the interior of the flood zone, covering the region as fully as possible with minimal overlap between the UAVs' fields of view. Corresponding control algorithms are proposed for both types of UAVs to implement the strategy. The feasibility of the control strategy is verified in simulation.
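The caging stage's "evenly distributed along the boundary" idea reduces, for the simplest case, to spacing N agents uniformly along the flood front. The sketch below assumes a circular front; the paper handles a general expanding boundary, which this does not:

```python
import math

def boundary_positions(center, radius, n_drones):
    """Evenly space n_drones along a circular flood boundary.

    Simplified caging-stage target assignment: each drone sits at an
    equal arc-length interval. Re-calling this as `radius` grows makes
    the formation track the expanding front.
    """
    cx, cy = center
    return [
        (cx + radius * math.cos(2 * math.pi * k / n_drones),
         cy + radius * math.sin(2 * math.pi * k / n_drones))
        for k in range(n_drones)
    ]
```

Each drone would then servo toward its assigned point, so the formation expands with the flood front without explicit inter-drone negotiation.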
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144799
Title: Optimized Jumping of an Articulated Robotic Leg
Authors: Junjie Shen, Yeting Liu, Xiaoguang Zhang, D. Hong
Abstract: This paper proposes a nonlinear programming (NLP) formulation for trajectory optimization of legged-robot jumping during the stance phase, taking into account a detailed robot model, actuator capability, and terrain conditions. The method is applicable to a wide class of jumping robots and was successfully implemented on an articulated robotic leg for jumping objectives including maximum reachable height, minimum energy consumption, and optimal energy efficiency. Simulation and experimental results demonstrate that the approach can not only plan a single jumping trajectory but also design a periodic jumping gait for legged robots.
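A useful sanity check on any stance-phase jump optimizer is the point-mass bound: constant force F over a stance stroke d gives takeoff speed v = sqrt(2(F/m - g)d) and apex height v^2/(2g). This is a back-of-the-envelope model, not the paper's NLP formulation, which handles the full articulated dynamics:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def jump_apex_height(force_max, mass, stance_stroke):
    """Apex height for a point mass pushed with constant force over a stroke.

    Upper-bound check for jump optimizers: net stance acceleration
    a = F/m - g over stroke d gives takeoff speed v = sqrt(2 a d),
    then ballistic apex h = v^2 / (2 g). Returns 0 if the leg cannot
    overcome gravity.
    """
    a = force_max / mass - G
    if a <= 0:
        return 0.0
    v_takeoff = math.sqrt(2 * a * stance_stroke)
    return v_takeoff ** 2 / (2 * G)
```

An NLP solution respecting joint-level torque limits should never exceed this bound for an equivalent force budget, which makes it a cheap regression test.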
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144774
Title: 3D Printing an Assembled Biomimetic Robotic Finger
Authors: Maryam Tebyani, Ash Robbins, William Asper, S. Kurniawan, M. Teodorescu, Zhongkui Wang, S. Hirai
Abstract: We present a novel approach for fabricating cable-driven robotic systems. In particular, we show that a biomimetic finger featuring accurate bone geometry, ligament structures, and viscoelastic tendons can be synthesized as a single part using a multi-material 3D printer. This fabrication method eliminates the need to engineer an interface between the rigid skeletal structure and the elastic tendon system. The artificial muscles required to drive the printed tendons of the finger can also be printed in place. MuJoCo, a physics simulation engine that can be used to generate control strategies, is used to develop a model of the nonlinear platform. A physical test bed is used to compare the simulation results against a printed prototype. This lays the groundwork for a new robotics design approach in which fabrication and assembly are automated.
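The basic kinematics behind any such cable-driven finger is the standard tendon-routing relation: cable excursion is the sum of pulley radius times joint rotation over every joint the tendon crosses. The radii below are illustrative, not measured from the printed prototype:

```python
def tendon_excursion(pulley_radii, joint_angles_rad):
    """Cable displacement for a tendon routed over several joints.

    Standard tendon kinematics: delta_l = sum_i r_i * theta_i, where
    r_i is the moment-arm (pulley) radius at joint i and theta_i the
    joint rotation in radians. The actuator must reel in exactly this
    much cable to produce the given finger curl.
    """
    return sum(r * th for r, th in zip(pulley_radii, joint_angles_rad))
```

In simulation (e.g. a MuJoCo tendon element) the same quantity falls out of the tendon-length sensor, so this gives a quick cross-check between the printed hardware and the model.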
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144980
Title: Jerk estimation for quadrotor based on differential flatness
Authors: Juan Medrano, Francisco Yumbla, SeungYeop Jeong, Iksu Choi, Yonghee Park, Eugene Auh, H. Moon
Abstract: In this work, we propose a method to estimate the inertial jerk of a multicopter vehicle from inertial measurement unit (IMU) measurements, without taking time derivatives or requiring motor RPM sensors. If an attitude estimate is not available, a jerk estimate expressed in the vehicle-fixed frame can still be obtained. Our main result follows from the differential flatness of the vehicle dynamics, which yields an expression relating jerk to acceleration, angular velocity, and thrust dynamics. Dynamic simulation shows that a reasonable jerk estimate is obtained at linear and angular speeds up to 2 m/s and 2 rad/s, respectively, degrading beyond those values. The proposed sensing model could be used to improve control performance or state-estimation pipelines for multicopter vehicles.
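For the standard flat quadrotor model with acceleration a = (T/m) R e3 - g e3, differentiating once gives the kind of algebraic jerk expression the abstract alludes to: j = (Tdot/m) R e3 + (T/m) R (omega x e3), with no numeric differentiation of the accelerometer signal. This sketch is the textbook relation, not necessarily the paper's exact formulation, and it assumes thrust T and its rate Tdot are available:

```python
def cross(a, b):
    """3-vector cross product."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def matvec(R, v):
    """3x3 matrix times 3-vector."""
    return tuple(sum(R[i][j] * v[j] for j in range(3)) for i in range(3))

def jerk_estimate(R, omega_body, thrust, thrust_rate, mass):
    """Inertial jerk from the multicopter flatness relation:

        j = (Tdot/m) R e3 + (T/m) R (omega x e3)

    R is the body-to-world rotation, omega_body the body angular rate.
    Dropping R (identity) yields the body-frame estimate usable when no
    attitude estimate is available, as the paper notes.
    """
    e3 = (0.0, 0.0, 1.0)
    term1 = matvec(R, e3)
    term2 = matvec(R, cross(omega_body, e3))
    return tuple(thrust_rate / mass * t1 + thrust / mass * t2
                 for t1, t2 in zip(term1, term2))
```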
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144771
Title: Trajectory Planning of Upper Limb Rehabilitation Robot Based on Human Pose Estimation
Authors: T. Tao, Xingyu Yang, Jiayu Xu, Wei Wang, Sicong Zhang, Ming Li, Guanghua Xu
Abstract: Stroke has become the second leading cause of death in the world, and timely rehabilitation can effectively help patients recover. Given the current shortage of rehabilitation doctors, using rehabilitation robots to help patients recover has become a feasible solution. To plan a bionic motion trajectory for an upper-limb rehabilitation robot more conveniently, this paper proposes a teaching trajectory-planning method based on human pose estimation. Teaching trajectories were collected with the Kinect depth camera, and human skeletal joints were tracked using the deep neural network OpenPose. The processed trajectories were verified in modeling simulation and robot motion. The planar trajectories were evaluated for bio-imitability using the minimum-jerk principle: the coefficient of determination is more than 0.99 for position, more than 0.94 for speed, and more than 0.88 for acceleration. Under occlusion, the joint-recognition success rate of the OpenPose-based method increased by more than 73.4% compared with Kinect's built-in skeletal tracking. The bio-imitability of the trajectories planned by this method can conveniently and quickly meet the needs of hospital rehabilitation doctors when planning rehabilitation-robot trajectories.
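The minimum-jerk principle the evaluation relies on has a closed form for point-to-point reaching between rest states: x(t) = x0 + (xf - x0)(10 s^3 - 15 s^4 + 6 s^5) with s = t/T. The taught trajectories are scored by how well they fit this profile; the function below is the classic 1-D textbook form, not the paper's fitting pipeline:

```python
def minimum_jerk(x0, xf, T, t):
    """Minimum-jerk position profile between rest states.

    x(t) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), s = t/T,
    which has zero velocity and acceleration at both endpoints and is
    the standard reference for "human-like" reaching motion.
    """
    s = min(max(t / T, 0.0), 1.0)  # clamp outside [0, T]
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
```

Sampling this profile and regressing a recorded trajectory against it yields exactly the position/speed/acceleration coefficients of determination the abstract reports.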
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144958
Title: Vehicle Control with Prediction Model Based Monte-Carlo Tree Search
Authors: Timothy Ha, Kyunghoon Cho, Geonho Cha, Kyungjae Lee, Songhwai Oh
Abstract: In this paper, we propose a model-based Monte-Carlo tree search (model-based MCTS) algorithm for the vehicle planning and control problem. While driving, we must predict the future states of other vehicles to avoid collisions. However, because vehicle movements are determined by the intentions of human drivers, the prediction model must capture the intent behind human behavior. In our model-based MCTS algorithm, we introduce a neural-network-based prediction model that predicts the behavior of human drivers. Unlike conventional MCTS algorithms, our method estimates rewards and Q-values from intention-aware future states rather than from pre-defined deterministic models or self-play. For evaluation, we use environments in which the other vehicles follow trajectories from pre-collected driver datasets. Our method improves collision avoidance and driving success rates compared with other reinforcement-learning and imitation-learning algorithms.
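The tree-search backbone of any MCTS variant is the UCB1 selection rule, which trades off exploiting high-value children against exploring rarely visited ones. The sketch below is generic MCTS selection only; the paper's contribution, replacing rollouts with a learned driver-intention model, is not reproduced here:

```python
import math

def ucb1_select(children, c=1.4):
    """Pick the child maximizing UCB1 = Q/N + c * sqrt(ln N_parent / N).

    `children` is a list of dicts with 'visits' (N) and 'value'
    (cumulative reward Q). Unvisited children score infinity so they
    are expanded before any exploitation happens.
    """
    total = sum(ch["visits"] for ch in children)

    def score(ch):
        if ch["visits"] == 0:
            return float("inf")
        exploit = ch["value"] / ch["visits"]
        explore = c * math.sqrt(math.log(total) / ch["visits"])
        return exploit + explore

    return max(children, key=score)
```

In the paper's setting, the value backed up through this rule would come from simulating ego actions against the neural prediction of the other drivers rather than from self-play.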
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144811
Title: Minimal Degree of Freedom Dual-Arm Manipulation Platform with Coupling Body Joint for Diverse Cooking Tasks
Authors: Donghun Noh, Yeting Liu, Fadi A. Rafeedi, Hyunwoo Nam, Kyle Gillespie, June-sup Yi, Taoyuanmin Zhu, Qing Xu, D. Hong
Abstract: This paper introduces the kinematic configuration, kinematic analysis, and workspace analysis of a dual-arm manipulation platform intended for varied cooking applications. Based on an analysis of essential cooking tasks, each arm was designed with 5 independent degrees of freedom (DOFs), plus a single additional DOF located at the center of the linkage connecting the two arms. The additional actuator expands both the reachable workspace and the common workspace between the two arms. Furthermore, the additional joint improves the arms' joint configuration for cooking tasks by giving each arm a redundant pitch joint, allowing the end of each arm to produce the linear planar trajectories that are important for many precise cooking actions. The system can also multitask, simultaneously performing potentially disparate tasks in different areas of its workspace. Beyond these advantages, we expect this dual-arm system to be more computationally and cost-efficient than similar systems using higher-DOF arms.
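The workspace-expansion argument can be illustrated with the simplest case: a planar 2-link arm reaches exactly the annulus |l1 - l2| <= r <= l1 + l2 about its shoulder, and a shared body joint shifts each arm's annulus, so their intersection (the common workspace) can be enlarged. The link lengths below are illustrative, not the platform's dimensions:

```python
import math

def reachable(l1, l2, x, y):
    """True iff a planar 2-link arm with link lengths l1, l2 (shoulder
    at the origin) can place its end effector at (x, y).

    The reachable set is the annulus |l1 - l2| <= r <= l1 + l2; moving
    the shoulder (e.g. via a shared body joint) translates this annulus.
    """
    r = math.hypot(x, y)
    return abs(l1 - l2) <= r <= l1 + l2
```

Sampling a grid of points through this predicate for each shoulder pose gives a quick numeric estimate of how much common workspace the coupling joint buys.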
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144983
Title: A Novel Motion-Onset N200P300 Brain-Computer Interface Paradigm
Authors: Tao Xue, Jun Xie, Guanghua Xu, Peng Fang, Guiling Cui, Guanglin Li, Guozhi Cao, Yanjun Zhang, T. Tao, Min Li, Xiaodong Zhang
Abstract: The event-related potential (ERP) components P300 and N200 are considered among the most valuable electrophysiological indicators of cognitive function. The traditional rare-event P300-BCI paradigm usually takes only the P300 component as the target feature and ignores the N200 component. In this paper, we propose a novel motion-onset N200P300 brain-computer interface (BCI) paradigm that evokes significant N200 and P300 responses simultaneously. To evaluate the practicality of the proposed paradigm and the robustness of the evoked N200P300 components, three classifiers with different algorithmic principles, namely linear discriminant analysis (LDA), stepwise linear discriminant analysis (SWLDA), and support vector machine (SVM), were used to analyze recognition accuracy. We also compared the motion-onset N200P300 data against an N200-free portion to evaluate the N200 component's contribution to BCI accuracy. Experimental results show that with the combined N200P300 feature, BCI accuracy increased significantly and the false-positive rate decreased significantly, indicating that the proposed motion-onset N200P300 BCI paradigm outperforms a traditional P300-BCI paradigm.
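Of the three classifiers benchmarked, LDA is the simplest to sketch: for two classes the Fisher discriminant direction is w proportional to S_w^{-1}(mu_target - mu_nontarget). The toy below assumes diagonal within-class scatter and tiny made-up feature vectors, whereas real ERP features span hundreds of time samples per channel:

```python
import statistics

def fisher_lda_direction(class_a, class_b):
    """Fisher discriminant direction for two classes of feature vectors.

    Assumes diagonal within-class scatter, so each component is
    w_i = (mu_a_i - mu_b_i) / (var_a_i + var_b_i). Projecting a new
    epoch onto w and thresholding gives the target/non-target decision
    an ERP-based BCI needs.
    """
    n_feat = len(class_a[0])
    mu_a = [statistics.fmean(x[i] for x in class_a) for i in range(n_feat)]
    mu_b = [statistics.fmean(x[i] for x in class_b) for i in range(n_feat)]
    var = [statistics.pvariance([x[i] for x in class_a]) +
           statistics.pvariance([x[i] for x in class_b])
           for i in range(n_feat)]
    # guard against zero variance on constant features
    return [(ma - mb) / (v if v > 0 else 1.0)
            for ma, mb, v in zip(mu_a, mu_b, var)]
```

Concatenating N200-window and P300-window features into one vector before fitting w is one plausible way the combined N200P300 feature could enter such a classifier.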
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144902
Title: An Integrated Teleoperation Assistance System for Collision Avoidance of High-speed UAVs in Complex Environments
Authors: Min Wang, H. Voos
Abstract: UAV teleoperation is a demanding task, especially for amateur operators who wish to accomplish their mission without collision. In this work we present an integrated 2D-LIDAR-based sense-and-avoid system that actively assists unskilled human operators with obstacle avoidance, so that the operator can focus on higher-level decisions and global objectives in UAV applications such as search and rescue or farming. Specifically, with our perception-assistive vehicle control design, a novel adaptive virtual cushion force field (AVCFF) based avoidance strategy, and an integrated sensing solution, the proposed UAV teleoperation assistance system is capable of obstacle detection and tracking, as well as automatic avoidance, in complex environments where both static and dynamic objects are present. The proposed system is built on the Hector Quadrotor open-source framework [1], and its effectiveness is demonstrated and validated on a realistic simulated UAV platform in Gazebo, with the UAV operated at high speed.
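A virtual cushion force field of this kind is typically built on the classic artificial-potential repulsion: zero outside an influence radius d0, growing sharply as an obstacle closes in. The sketch below is the standard Khatib-style magnitude as a stand-in; the paper's AVCFF adapts its parameters online (e.g. with vehicle speed), which is not reproduced here:

```python
def cushion_force(distance, d_influence, gain):
    """Repulsive force magnitude of a classic virtual potential field.

        F = gain * (1/d - 1/d0) / d^2   for 0 < d < d0, else 0

    The force vanishes smoothly at the influence radius d0 and blows up
    as the obstacle distance d approaches zero, pushing the UAV away
    before the operator's command can cause a collision.
    """
    if distance >= d_influence or distance <= 0:
        return 0.0
    return gain * (1.0 / distance - 1.0 / d_influence) / distance**2
```

Summing this magnitude along the obstacle-to-vehicle direction for each LIDAR return, and blending the result with the operator's velocity command, yields the assistive behavior the abstract describes.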