Robot Behavior Design Expressing Confidence/Unconfidence based on Human Behavior Analysis
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144862
Haruka Sekino, Erina Kasano, Wei-Fen Hsieh, E. Sato-Shimokawara, Toru Yamaguchi
Dialogue robots have been actively researched, but many of them rely solely on verbal information. Human intention, however, is conveyed through both verbal and nonverbal information, so a robot that is to convey intention as humans do must express it through both channels. This paper uses speech information and head-motion information to express confidence or unconfidence, because these features have proven useful for estimating a person's confidence. First, behavior expressing the presence or absence of confidence was collected from 8 participants and recorded with a microphone and a video camera. To select the most understandable behavior, 3 estimators rated the participants' behavior for confidence level, and the data of the participants whose behavior was rated most understandable were selected. The selected behavior was defined as the representative speech and motion features, and the robot behavior was designed from these representative features. Finally, an experiment was conducted in which 5 participants rated the designed robot behavior. The results show that 3 participants correctly estimated the confidence/unconfidence behavior based on the representative speech features, namely the time spent before answering, the effective value of sound pressure, and the utterance speed. Likewise, 3 participants correctly estimated the unconfidence behavior based on the representative motion features, namely a longer time before answering and a larger head rotation.
{"title":"Robot Behavior Design Expressing Confidence/Unconfidence based on Human Behavior Analysis","authors":"Haruka Sekino, Erina Kasano, Wei-Fen Hsieh, E. Sato-Shimokawara, Toru Yamaguchi","doi":"10.1109/UR49135.2020.9144862","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144862","url":null,"abstract":"Dialogue robots have been actively researched. Many of these robots rely on merely using verbal information. However, human intention is conveyed using verbal information and nonverbal information. In order to convey intention as humans do, robots are necessary to express intention using verbal information and nonverbal information. This paper use speech information and head motion information to express confidence/unconfidence because they were useful features to estimate one’s confidence. First, human behavior expressing the presence or absence of confidence was collected from 8 participants. Human behavior was recorded by a microphone and a video camera. In order to select the behavior which is more understandable, the participants’ behavior was estimated for the confidence level by 3 estimators. Then the data of participants whose behavior was estimated to be more understandable were selected. The selected behavior was defined as representative speech feature and motion feature. Robot behavior was designed based on representative behavior. Finally, the experiment was conducted to evaluate the designed robot behavior. The robot behavior was estimated by 5 participants. The experiment results show that 3 participants estimated correctly the confidence/unconfidence behavior based on the representative speech feature. The differences between confidence and unconfidence of behavior are s the spent time before answer, the effective value of sound pressure, and utterance speed. Also, 3 participants estimated correctly the unconfidence behavior based on the representative motion features which are the longer spent time before answer and the bigger head rotation.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123053038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FPGA Implementation of Visual Noise Optimized Online Steady-State Motion Visual Evoked Potential BCI System
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144933
Yanjun Zhang, Jun Xie, Guanghua Xu, Peng Fang, Guiling Cui, Guanglin Li, Guozhi Cao, Tao Xue, Xiaodong Zhang, Min Li, T. Tao
To improve the practicality of brain-computer interface (BCI) systems based on the steady-state visual evoked potential (SSVEP), it is necessary to design BCI equipment that is portable and low-cost. According to the principle of stochastic resonance (SR), the recognition accuracy of visual evoked potentials can be improved by full-screen visual noise. Based on these requirements, this paper proposes using a field-programmable gate array (FPGA) to drive the stimulator through a high-definition multimedia interface (HDMI) for displaying a steady-state motion visual evoked potential (SSMVEP) paradigm. By adding spatially localized visual noise to the motion-reversal checkerboard paradigm, the recognition accuracy is improved. Under different noise levels, the average recognition accuracies computed from the occipital electrodes O1, Oz, O2, PO3, POz, and PO4 are 77.2%, 87.5%, and 85.2% for noise standard deviations of 0, 24, and 40, respectively. To analyze the SR effect of the spatially localized visual noise on recognition accuracy, statistical analyses were carried out on the recognition accuracies under different noise intensities and channel combinations. The results show that the spatially localized visual noise significantly improves the recognition accuracy and stability of the proposed FPGA-based online SSMVEP BCI system.
{"title":"FPGA Implementation of Visual Noise Optimized Online Steady-State Motion Visual Evoked Potential BCI System*","authors":"Yanjun Zhang, Jun Xie, Guanghua Xu, Peng Fang, Guiling Cui, Guanglin Li, Guozhi Cao, Tao Xue, Xiaodong Zhang, Min Li, T. Tao","doi":"10.1109/UR49135.2020.9144933","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144933","url":null,"abstract":"In order to improve the practicability of brain computer interface (BCI) system based on steady-state visual evoked potential (SSVEP), it is necessary to design BCI equipment with portability and low-cost. According to the principle of stochastic resonance (SR), the recognition accuracy of visual evoked potential could be improved by full-screen visual noise. Based on the above requirements, this paper proposed the usage of field programmable gate array (FPGA) to control stimulator through high definition multimedia interface (HDMI) for the display of steady-state motion visual evoked potential (SSMVEP) paradigm. By adding spatially localized visual noise to the motion-reversal checkerboard paradigm, the recognition accuracy is improved. According to the experimental results under different noise levels, the average recognition accuracies calculated with occipital electrodes O1, Oz, O2, PO3, POz and PO4 are 77.2%, 87.5%, and 85.2% corresponding to noise standard deviations values of 0, 24, and 40, respectively. In order to analyze the SR effect on the recognition accuracy with utilization of spatially localized visual noise, statistical analyses on the recognition accuracies under different noise intensities and different channel combinations are carried out. Results showed that the spatially localized visual noise could significantly improve the recognition accuracy and the stability of the proposed FPGA based online SSMVEP BCI system.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122430154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-robot negotiation of intentions based on virtual fixtures for shared task execution
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144859
Dong Wei, Hua Zhou, Huayong Yang
Robots increasingly work side by side with humans, fusing their complementary capabilities to cooperate on tasks in a wide range of applications such as exoskeletons, industry, and health care. To promote natural interaction between humans and robots, the human ability to negotiate intentions through the haptic channel has inspired a number of studies aimed at improving human-robot interaction performance. In this work, we propose a novel human-robot negotiation policy that introduces adaptive virtual fixture technology into traditional mechanisms to integrate the intentions of both partners. In the policy, virtual fixtures are used to generate and adjust virtual paths during negotiation with the human partner, speeding up the person's perception of the robot's task and making negotiation more efficient. Moreover, the path adapts online to the estimated human intention, providing better solutions for both members of the dyad while ensuring performance. The proposed strategy is verified in collaborative obstacle-avoidance experiments.
{"title":"Human-robot negotiation of intentions based on virtual fixtures for shared task execution","authors":"Dong Wei, Hua Zhou, Huayong Yang","doi":"10.1109/UR49135.2020.9144859","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144859","url":null,"abstract":"Robots are increasingly working side-by-side with human to fuse their complementary capabilities in cooperating with them for tasks in a wide range of applications, such as exoskeleton and industry or health-care. In order to promote natural interaction between humans and robots, the ability of humans to negotiate intentions through haptic channels has inspired a number of studies aimed at improving human-robot interaction performance. In this work, we propose a novel human-robot negotiation policy and introduce adaptive virtual fixture technology into traditional mechanisms to integrate bilateral intentions. In the policy, virtual fixtures are used to generate and adjust virtual paths while negotiation with human partners, speeding up people’s perception of robot task, making negotiation more efficient. Moreover, the path will adapt online to the estimated human intention, providing better solutions for both dyads while ensuring performance. The proposed strategy is verified in collaborative obstacle avoidance experiments.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123313500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trajectory Tracking of Robotic Manipulators with Constraints Based on Model Predictive Control
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144943
Q. Tang, Zhugang Chu, Yu Qiang, Shun Wu, Zheng Zhou
This paper presents a model predictive control scheme for trajectory tracking of robotic manipulators in the presence of input constraints, which provides convergent tracking of reference trajectories and robustness to model mismatch. First, the dynamic model of an n-link robotic manipulator is linearized and discretized using a Taylor approximation, and the constrained optimization problem is converted into a quadratic programming problem. Then the future output of the system is predicted and the optimal control problem is solved online from the current state and previous input, with a terminal constraint included to reduce the tracking error. Finally, the convergence of the proposed control scheme is demonstrated in simulation with the UR5 model, and its robustness to model mismatch is verified by comparison with a classical predictive control method.
{"title":"Trajectory Tracking of Robotic Manipulators with Constraints Based on Model Predictive Control","authors":"Q. Tang, Zhugang Chu, Yu Qiang, Shun Wu, Zheng Zhou","doi":"10.1109/UR49135.2020.9144943","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144943","url":null,"abstract":"This paper presents a model predictive control scheme for robotic manipulator in trajectory tracking in the presence of input constraints, which provides convergent tracking of reference trajectories and robustness to model mismatch. Firstly, the dynamic model of n-link robotic manipulator is linearized and discretized using Taylor approximation, based on which the constrained optimization question is converted to a quadratic programming problem. Then future output of system is predicted and the optimum control problem is solved online according to current state and previous input, while terminal constraint is included to reduce the tracking error. Finally, the convergence of the proposed control scheme is proved in simulation with the UR5 model and its robustness to model mismatch is verified by comparison with classical predictive control method.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124249671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recognition of Assembly Instructions Based on Geometric Feature and Text Recognition
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144892
Jaewoo Park, Isaac Kang, Junhyeong Kwon, Eunji Lee, Yoonsik Kim, Sujeong You, S. Ji, N. Cho
Recent advances in machine learning have increased the performance of object detection and recognition systems. Accordingly, automatic understanding of assembly instructions in electronic or paper manuals has also become a topic of interest in the research community. The task is quite challenging because it requires automatic optical character recognition (OCR) as well as understanding of various mechanical parts and diverse assembly illustrations that are sometimes difficult even for humans. Although deep networks show high performance on many computer vision tasks, it is still difficult to solve this task with an end-to-end deep neural network because of the lack of training data and the diversity and ambiguity of illustrative instructions. Hence, in this paper, we propose to tackle the problem by combining conventional non-learning approaches with deep neural networks, considering the current state of the art. Specifically, we first extract components with strict geometric structure, such as characters and illustrations, using conventional non-learning algorithms, and then apply deep neural networks to recognize the extracted components. The main targets considered in this paper are the types and numbers of connectors, and behavioral indicators such as circles, rectangles, and arrows, for each cut in do-it-yourself (DIY) furniture assembly manuals. For these limited targets, we train a deep neural network to recognize them with high precision. Experiments show that our method works robustly on various types of furniture assembly instructions.
{"title":"Recognition of Assembly Instructions Based on Geometric Feature and Text Recognition","authors":"Jaewoo Park, Isaac Kang, Junhyeong Kwon, Eunji Lee, Yoonsik Kim, Sujeong You, S. Ji, N. Cho","doi":"10.1109/UR49135.2020.9144892","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144892","url":null,"abstract":"Recent advances in machine learning methods have increased the performances of object detection and recognition systems. Accordingly, automatic understanding of assembly instructions in manuals in the form of electronic or paper materials has also become an issue in the research community. This task is quite challenging because it requires the automatic optical character recognition (OCR) and also the understanding of various mechanical parts and diverse assembly illustrations that are sometimes difficult to understand even for humans. Although deep networks are showing high performance in many computer vision tasks, it is still difficult to perform this task by an end-to-end deep neural network due to the lack of training data, and also because of diversity and ambiguity of illustrative instructions. Hence, in this paper, we propose to tackle this problem by using both conventional non-learning approaches and deep neural networks, considering the current state-of-the-arts. Precisely, we first extract components having strict geometric structures, such as characters and illustrations, by conventional non-learning algorithms, and then apply deep neural networks to recognize the extracted components. The main targets considered in this paper are the types and the numbers of connectors, and behavioral indicators such as circles, rectangles, and arrows for each cut in do-it-yourself (DIY) furniture assembly manuals. For these limited targets, we train a deep neural network to recognize them with high precision. Experiments show that our method works robustly in various types of furniture assembly instructions.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125125944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of a unified simulation for robot arm control with object detection based on ROS and Gazebo
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144984
Hyeonchul Jung, Min-Soo Kim, Yeheng Chen, H. Min, Taejoon Park
In this paper, we present a method for implementing a robotic system with deep learning-based object detection in a simulation environment. The simulation environment is built in Gazebo and runs on the Robot Operating System (ROS). ROS is a set of open-source software libraries that aims to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms, and Gazebo is a convenient 3D simulator for use alongside ROS. The paper walks through the steps of creating a robot arm system controlled through ROS and an object detection system that uses camera images from the Gazebo environment.
{"title":"Implementation of a unified simulation for robot arm control with object detection based on ROS and Gazebo","authors":"Hyeonchul Jung, Min-Soo Kim, Yeheng Chen, H. Min, Taejoon Park","doi":"10.1109/UR49135.2020.9144984","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144984","url":null,"abstract":"In this paper, we present a method to implement a robotic system with deep learning-based object detection in a simulation environment. The simulation environment is developed in Gazebo and run on Robot Operating System(ROS). ROS is a set of open-source software libraries that aims to simplify the task of creating complex and robust robot behavior across a wide variety of robotic platforms. Gazebo is the convenient 3D simulator for use along with ROS. This paper introduces the steps to create a robot arm system controlled by ROS and object detection system using images from camera in Gazebo environment.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125497316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Bioinspired Airfoil Optimization Technique Using Nash Genetic Algorithm
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144868
Hamid Isakhani, C. Xiong, Shigang Yue, Wenbin Chen
Natural fliers glide and minimize wing articulation to conserve energy for enduring, long-range flights. Elucidating the physiology underlying this capability could potentially address numerous challenging problems in flight engineering. However, the primitive state of bioinspired research impedes such achievements, so to bypass these limitations this study introduces a bioinspired non-cooperative multi-objective optimization methodology based on a novel fusion of PARSEC parameterization, the Nash strategy, and genetic algorithms to achieve insect-level aerodynamic efficiency. The proposed technique is validated on a conventional airfoil as well as on the wing cross-section of a desert locust (Schistocerca gregaria) at low Reynolds number, where we record a 77% improvement in gliding ratio.
{"title":"A Bioinspired Airfoil Optimization Technique Using Nash Genetic Algorithm","authors":"Hamid Isakhani, C. Xiong, Shigang Yue, Wenbin Chen","doi":"10.1109/UR49135.2020.9144868","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144868","url":null,"abstract":"Natural fliers glide and minimize wing articulation to conserve energy for endured and long range flights. Elucidating the underlying physiology of such capability could potentially address numerous challenging problems in flight engineering. However, primitive nature of the bioinspired research impedes such achievements, hence to bypass these limitations, this study introduces a bioinspired non-cooperative multiple objective optimization methodology based on a novel fusion of PARSEC, Nash strategy, and genetic algorithms to achieve insect-level aerodynamic efficiencies. The proposed technique is validated on a conventional airfoil as well as the wing crosssection of a desert locust (Schistocerca gregaria) at low Reynolds number, and we have recorded a 77% improvement in its gliding ratio.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129735147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Design and Implementation of Human Motion Capture System Based on CAN Bus
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144858
Xian Yue, Aibin Zhu, Jiyuan Song, Guangzhong Cao, Delin An, Zhifu Guo
Existing exoskeletons have shortcomings in human motion capture and recognition, so the wearer can only move passively in the exoskeleton, which makes it difficult to efficiently rebuild the connection between human muscles and nerves. This paper describes a human motion capture system based on the CAN bus. The system adopts a distributed architecture that collects data including plantar pressure and exoskeleton joint angles, providing data support for subsequent motion recognition algorithms. The system uses a simple measurement method and has low dependence on the measurement environment. Its accuracy is verified by a comparison experiment against a Vicon 3D motion capture system. The experimental results indicate that the designed human motion capture system has high accuracy and meets the requirements of human motion perception when controlling a rehabilitation exoskeleton.
{"title":"The Design and Implementation of Human Motion Capture System Based on CAN Bus *","authors":"Xian Yue, Aibin Zhu, Jiyuan Song, Guangzhong Cao, Delin An, Zhifu Guo","doi":"10.1109/UR49135.2020.9144858","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144858","url":null,"abstract":"Aiming at the existing problems of exoskeletons in human motion capture and recognition, causing human can only move passively in the exoskeleton, which is difficult to efficiently rebuild the connection between human muscles and nerves. This paper describes a system for the analysis of human motion capture based on CAN bus. The system adopts a distributed system architecture, which can collect data including plantar pressure, exoskeleton joint angle, and provide data support for subsequent motion recognition algorithm. The system has a simple measurement method and low dependence on measurement environment. The accuracy of the system is verified by the contrast experiment of Vicon 3D motion capture system. The results of the experiment indicate that the designed human motion capture system has a high accuracy, and meets the requirements of human motion perception when controlling the Rehabilitation Exoskeleton.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126873139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Control of a Piezoelectric Actuated Prostate Intervention Robotic System
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144768
Yuyang Lin, Yunlai Shi, Jun Zhang, Fugang Wang, Wenbo Wu, Haichao Sun
Robot-assisted prostate intervention under magnetic resonance imaging (MRI) guidance is a promising way to improve clinical performance compared with the manual method. An MRI-guided 6-DOF serial prostate intervention robot, fully actuated by ultrasonic motors, is designed and its control strategy is proposed. The mechanical design of the robot is presented based on the design requirements of prostate intervention. Binocular vision is adopted for in-vitro needle-tip measurement, and the robotic system combined with the binocular cameras is illustrated. The ultrasonic motor driving controller is then designed. Finally, the position accuracy of the robot is evaluated: the position error is about 1.898 mm, which shows good accuracy. The position tracking characteristics of the ultrasonic motor are also presented, with a maximum tracking error under 7.5°, demonstrating the effectiveness of the driving controller design. The experiments indicate that the prostate intervention robot is feasible and performs well in the accuracy evaluation.
{"title":"Design and Control of a Piezoelectric Actuated Prostate Intervention Robotic System*","authors":"Yuyang Lin, Yunlai Shi, Jun Zhang, Fugang Wang, Wenbo Wu, Haichao Sun","doi":"10.1109/UR49135.2020.9144768","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144768","url":null,"abstract":"Robot-assisted prostate intervention under Magnetic resonance imaging (MRI) guidance is a promising method to improve the clinical performance compared with the manual method. An MRI-guided 6-DOF prostate intervention serial robot fully actuated by ultrasonic motors is designed and the control strategy is proposed. The mechanical design of the proposed robot is presented based on the design requirements of the prostate intervention robot. The binocular vision is adopted as the in-vitro needle tip measurement method and the robotic system combined with the binocular cameras are illustrated. Then the ultrasonic motor driving controller is designed. Finally, the position accuracy evaluation of the robot is carried out and the position error is about 1.898 mm which shows a good accuracy of the robot. The position tracking characteristics of the ultrasonic motor is presented where the maximum tracking error is under 7.5° which shows the efficiency of the driving controller design. The experiments indicate that the prostate intervention robot is feasible and shows good performance in the accuracy evaluation.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116609077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
sEMG-based Static Force Estimation for Human-Robot Interaction using Deep Learning
Pub Date: 2020-06-01 | DOI: 10.1109/UR49135.2020.9144869
Se Jin Kim, W. Chung, Keehoon Kim
Human-robot interaction (HRI) is a rapidly growing research area with many applications, including human-robot collaboration, human power augmentation, and rehabilitation robotics. Because the intended motion trajectory is hard to calculate exactly, interaction control is generally applied in HRI instead of pure motion control. Implementing interaction control requires force information, and force sensors are widely used for force feedback. However, force sensors have limitations: 1) they are subject to breakdown, 2) they add volume and weight to the system, and 3) the places where they can be mounted are constrained. In this situation, force estimation can be a good solution. However, if force must be measured in a static situation, position and velocity are not sufficient because they are no longer influenced by the exerted force. Therefore, we propose sEMG-based static force estimation using deep learning. sEMG provides useful information about the force a human exerts because it reflects human intention, and a deep learning approach is used to extract the complex relationship between sEMG and force. Experimental results show that when a force with a maximal value of 63.2 N is exerted, the average force estimation error is 3.67 N. The proposed method also shows that the onset timing of the estimated force is earlier than that of the force sensor signal, which is advantageous for faster recognition of human intention.
{"title":"sEMG-based Static Force Estimation for Human-Robot Interaction using Deep Learning","authors":"Se Jin Kim, W. Chung, Keehoon Kim","doi":"10.1109/UR49135.2020.9144869","DOIUrl":"https://doi.org/10.1109/UR49135.2020.9144869","url":null,"abstract":"Human-robot interaction (HRI) is a rapidly growing research area and it occurs in many applications including human-robot collaboration, human power augmentation, and rehabilitation robotics. As it is hard to exactly calculate intended motion trajectory, generally, interaction control is applied in HRI instead of pure motion control. To implement the interaction control, force information is necessary and force sensor is widely used in force feedback. However, force sensor has some limitations as 1) it is subject to breakdown, 2) it imposes additional volume and weight to the system, and 3) its applicable places are constrained. In this situation, force estimation can be a good solution. However, if force in static situation should be measured, using position and velocity is not sufficient because they are not influenced by the exerted force anymore. Therefore, we proposed sEMG-based static force estimation using deep learning. sEMG provides a useful information about human-exerting force because it reflects the human intention. Also, to extract the complex relationship between sEMG and force, deep learning approach is used. Experimental results show that when force with maximal value of 63.2 N is exerted, average force estimation error was 3.67 N. Also, the proposed method shows that force onset timing of estimated force is faster than force sensor signal. This result would be advantageous for faster human intention recognition.","PeriodicalId":360208,"journal":{"name":"2020 17th International Conference on Ubiquitous Robots (UR)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132458513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}