On the EMG-based torque estimation for humans coupled with a force-controlled elbow exoskeleton
J. B. Ullauri, L. Peternel, B. Ugurlu, Yoji Yamada, J. Morimoto
2015 International Conference on Advanced Robotics (ICAR). Pub Date: 2015-09-10. DOI: 10.1109/ICAR.2015.7251472
Exoskeletons are successful at supporting human motion only when the necessary amount of power is provided at the right time. Exoskeleton control based on EMG signals can be used to command the required amount of support in real time. To this end, one needs to map human muscle activity to the desired task-specific exoskeleton torques. To achieve such a mapping, this paper analyzes two distinct methods for estimating human elbow-joint torque from the related muscle activity. The first model is adopted from pneumatic artificial muscles (PAMs). The second is based on a machine learning method known as Gaussian Process Regression (GPR). The performance of both approaches was assessed by their ability to estimate the elbow-joint torque of two able-bodied subjects using EMG signals collected from the biceps and triceps muscles. The experiments suggest that the GPR-based approach provides more favorable predictions.
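As an illustrative aside, the core idea of the GPR-based estimator — regressing joint torque on muscle activations with an uncertainty estimate — can be sketched as follows. This is a minimal sketch with synthetic data and an assumed toy EMG-torque relation, not the authors' implementation:

```python
# Sketch (assumption, not the paper's code): map processed EMG activations
# to elbow-joint torque with Gaussian Process Regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic training data: [biceps activation, triceps activation] in [0, 1].
X = rng.uniform(0.0, 1.0, size=(200, 2))
# Assumed toy relation: flexor minus extensor activity drives joint torque (N*m).
y = 10.0 * X[:, 0] - 8.0 * X[:, 1] + rng.normal(0.0, 0.1, size=200)

# An RBF kernel plus a white-noise term models a smooth, noisy EMG-torque map.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

# Predict torque with an uncertainty estimate for a new activation pair.
mean, std = gpr.predict(np.array([[0.8, 0.1]]), return_std=True)
print(f"estimated torque: {mean[0]:.2f} +/- {std[0]:.2f} N*m")
```

The predictive standard deviation is one practical argument for GPR in this setting: a controller can discount torque commands when the estimate is uncertain.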
Transferring object grasping knowledge and skill across different robotic platforms
A. Paikan, David Schiebener, Mirko Wächter, T. Asfour, G. Metta, L. Natale
Pub Date: 2015-07-27. DOI: 10.1109/ICAR.2015.7251502
This study describes the transfer of object grasping skills between two humanoid robots with different software frameworks. We realize such knowledge and skill transfer between the humanoid robots iCub and ARMAR-III, which have different kinematics and are programmed using different middleware frameworks, YARP and ArmarX. We developed a bridge system that allows grasping skills defined for ARMAR-III to be executed on iCub. Because the embodiments differ, grasps known to be feasible for one robot are not always feasible for the other. We propose a reactive correction behavior that detects a failing grasp during execution, corrects it until it succeeds, and thus adapts the known grasp definition to the new embodiment.
Cooperative control of manipulator robotic systems with unknown dynamics
E. Mehrabi, H. Talebi, M. Zarei-nejad, I. Sharifi
Pub Date: 2015-07-27. DOI: 10.1109/ICAR.2015.7251487
This paper studies the cooperative control of robotic manipulators grasping and handling a common object. Based on the passive decomposition approach, the cooperative system is decomposed into decoupled shaped and locked systems. Regressor-free adaptive control laws are then proposed for the decoupled shaped and locked systems. Unlike existing shaped/locked approaches in the literature, the proposed approach guarantees the passivity of the closed-loop system when the robot dynamics are unknown. Simulation results verify the accuracy of the proposed control scheme.
Dynamic process migration in heterogeneous ROS-based environments
José Cano, Eduardo J. Molinos, V. Nagarajan, S. Vijayakumar
Pub Date: 2015-07-27. DOI: 10.1109/ICAR.2015.7251505
In distributed (mobile) robotics environments, the available computing substrates offer flexible resource-allocation options for performing the computations that implement an overall system goal. The AnyScale concept introduced in this paper exploits this redundancy by dynamically allocating tasks to appropriate substrates (or scales), and migrating them as resource and performance parameters change, in order to optimize overall system performance. We demonstrate the concept with a general ROS-based infrastructure that solves the task-allocation problem by optimizing system performance while reacting correctly to unpredictable events. Assignment decisions are based on a characterization of the static and dynamic parameters that describe the system and its interaction with the environment. We instantiate the infrastructure on a case-study application in which a mobile robot navigates the floor of a building to reach a predefined goal. Experimental validation demonstrates more robust performance (roughly a one-third improvement in the measured metrics) under the AnyScale framework.
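The allocation decision at the heart of this kind of system can be caricatured as a per-task cost minimization over substrates. The sketch below is my own assumption of the idea, not the paper's infrastructure; the substrate names and cost parameters are hypothetical:

```python
# Toy sketch (assumed model, not the AnyScale implementation): assign each
# task to the substrate ("scale") with the lowest estimated cost, given
# current resource/performance parameters.
def allocate(task_loads, substrates, cost):
    """task_loads: {task_name: load}; returns {task_name: chosen_substrate}."""
    return {t: min(substrates, key=lambda s: cost(load, s))
            for t, load in task_loads.items()}

def cost(load, substrate):
    # Hypothetical parameters: per-unit compute time on each substrate plus
    # a fixed network round-trip penalty for the off-board server.
    cpu = {"onboard": 0.08, "server": 0.01}[substrate]  # s per unit load
    net = {"onboard": 0.0, "server": 0.05}[substrate]   # s round trip
    return load * cpu + net

# A heavy task migrates off-board; a trivial one stays local.
alloc = allocate({"slam": 10.0, "led": 0.1}, ["onboard", "server"], cost)
print(alloc)
```

Re-running the allocation as the measured parameters change is what turns this static assignment into the dynamic migration the paper describes.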
Robot-mediated mixed gesture imitation skill training for young children with ASD
Zhi Zheng, Eric M. Young, A. Swanson, A. Weitlauf, Z. Warren, N. Sarkar
Pub Date: 2015-07-27. DOI: 10.1109/ICAR.2015.7251436
Autism Spectrum Disorder (ASD) affects 1 in 68 children in the U.S., with tremendous consequent cost in care and treatment. Evidence suggests that early intervention is critical for optimal treatment results. Robots have shown great potential for attracting the attention of children with ASD and can facilitate early intervention on core deficits. In this paper, we propose a robotic platform that mediates imitation skill training for young children with ASD. Imitation deficits are among the most important skill deficits in children with ASD and have a profound impact on social communication. While a few previous works have addressed single-gesture imitation training, this paper extends the training to mixed gestures composed of multiple single gestures. A preliminary user study showed that the proposed robotic system stimulated mixed-gesture imitation in young children with ASD with promising gesture-recognition accuracy.
GMM-based detection of human hand actions for robot spatial attention
Riccardo Monica, J. Aleotti, S. Caselli
Pub Date: 2015-07-27. DOI: 10.1109/ICAR.2015.7251500
This paper presents a spatial attention approach for a robot manipulator equipped with a Kinect range sensor in an eye-in-hand configuration. The locations of salient object-manipulation actions performed by the user are detected by analyzing the motion of the user's hand. The relevance of user activities is determined by an attentional approach based on Gaussian mixture models. A next-best-view planner directs the eye-in-hand sensor towards the most salient regions of the workspace. The 3D scene representation is updated using a modified version of the KinectFusion algorithm that exploits the robot kinematics. Experiments comparing two variations of the next-best-view strategy are reported.
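The GMM-based saliency idea can be illustrated with a short sketch: fit a mixture to observed hand positions and rank candidate view targets by the model's density. Synthetic positions and candidate points are assumptions for demonstration; the paper's attentional model is richer than this:

```python
# Sketch (assumption, not the paper's code): Gaussian-mixture saliency over
# 3D hand positions, used to rank candidate viewpoints for attention.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic hand positions clustered around two manipulation spots (meters).
hands = np.vstack([
    rng.normal([0.4, 0.0, 0.8], 0.03, size=(60, 3)),
    rng.normal([0.7, 0.3, 0.9], 0.03, size=(40, 3)),
])
gmm = GaussianMixture(n_components=2, random_state=0).fit(hands)

# Candidate view targets; higher log-density marks a more salient region.
candidates = np.array([[0.4, 0.0, 0.8], [0.7, 0.3, 0.9], [1.5, 1.5, 1.5]])
scores = gmm.score_samples(candidates)
best = candidates[np.argmax(scores)]
```

A next-best-view planner could then trade this saliency score off against sensor-coverage or motion-cost terms when choosing where to point the eye-in-hand camera.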
Reactive, task-specific object manipulation by metric reinforcement learning
Simon Hangl, Emre Ugur, S. Szedmák, J. Piater, A. Ude
Pub Date: 2015-07-27. DOI: 10.1109/ICAR.2015.7251511
In the context of manipulation of dynamical systems, it is not trivial to design controllers that can cope with unpredictable changes in the system being manipulated. For example, in a pouring task, the target cup might start moving, or the agent may decide to change the amount of liquid during action execution. To cope with these situations, the robot should smoothly (and timely) change its execution policy to match the requirements of the new situation. In this paper, we propose a robust method that allows the robot to react smoothly and successfully to such changes. The robot first learns a set of execution trajectories that solve a number of tasks in different situations. When it encounters a novel situation, the robot smoothly adapts its trajectory to a new one generated by a weighted linear combination of the previously learned trajectories, where the weights are computed using a task-dependent metric. This metric is learned automatically in the state space of the robot, rather than the motor-control space, and is further optimized within a reinforcement learning (RL) framework. We show that our system can learn and model various manipulation tasks, such as pouring or reaching, and can successfully react to a wide range of perturbations introduced during task execution. We evaluated our method against ground truth on a synthetic trajectory dataset and verified it in grasping and pouring tasks with a real robot.
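The weighted-combination step can be sketched in a few lines. Here the weights come from a simple inverse-squared-distance metric over task parameters, which is an assumption for illustration; in the paper the metric is learned and refined via RL:

```python
# Hedged sketch: blend stored trajectories by weights derived from a
# task-space metric (assumed inverse squared distance; the paper learns it).
import numpy as np

def combine(trajs, contexts, query, eps=1e-6):
    """trajs: (K, T, D) learned trajectories; contexts: (K, C) task params.
    Returns a (T, D) trajectory for the query task parameters."""
    d2 = np.sum((contexts - query) ** 2, axis=1)  # distance to each context
    w = 1.0 / (d2 + eps)                          # closer tasks weigh more
    w /= w.sum()                                  # normalize to sum to 1
    return np.tensordot(w, trajs, axes=1)         # weighted linear blend

# Two demonstrations recorded at task parameters 0 and 1; a query at 0.5
# blends them equally.
trajs = np.stack([np.zeros((5, 2)), np.ones((5, 2))])
contexts = np.array([[0.0], [1.0]])
blended = combine(trajs, contexts, np.array([0.5]))
```

Because the blend is recomputed whenever the task parameters change, the executed trajectory can shift smoothly mid-execution, which is the reactive behavior the paper targets.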
Visual matching of stroke order in robotic calligraphy
Hsien-I Lin, Yu-Che Huang
Pub Date: 2015-07-27. DOI: 10.1109/ICAR.2015.7251496
Robotic calligraphy is an interesting problem that has recently drawn much attention. Its two major subproblems are stroke shape and stroke order. Most previous work focused on controlling brush trajectory, pressure, velocity, and acceleration to draw a desired stroke shape, while the stroke order was given manually from a database. Even optical character recognition (OCR) software cannot recover the stroke order from a character image. This paper describes the automatic extraction of the stroke order of a Chinese character by visual matching. Specifically, the stroke order of a character in an input image is generated automatically by association with a standard image of the same character for which the stroke order is given. The proposed visual-matching method extracts Hough-line features from the input image and uses a support vector machine (SVM) to associate them with the features of the standard image. The features were evaluated on several Chinese characters, and the two well-known characters “Country” and “Dragon” were used to demonstrate the feasibility of the method. The stroke-order matching rates for “Country” and “Dragon” were 95.8% and 90.3%, respectively.
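The association step can be sketched as a classifier that maps a detected stroke's line features to the stroke-order index of the standard character. The (angle, length, x, y) feature layout and the synthetic strokes below are assumptions for illustration, not the paper's exact features:

```python
# Sketch (assumed features, not the paper's pipeline): assign each detected
# stroke a stroke-order index by SVM association with a standard character.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Three reference strokes of a hypothetical standard character, each row
# being (angle [rad], length, midpoint x, midpoint y) of a Hough line.
ref = np.array([[0.00, 1.0, 0.2, 0.5],
                [1.57, 0.8, 0.5, 0.5],
                [0.78, 0.6, 0.7, 0.3]])
# Training samples: jittered copies of each reference stroke.
X = np.vstack([s + rng.normal(0.0, 0.02, size=(30, 4)) for s in ref])
y = np.repeat([0, 1, 2], 30)  # stroke-order index of each sample

clf = SVC(kernel="rbf").fit(X, y)
# Strokes detected in an input image are mapped to stroke-order indices.
order_idx = clf.predict(ref + 0.01)
```

Sorting the detected strokes by the predicted indices then yields an executable stroke order for the writing robot.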
A high frequency 3D LiDAR with enhanced measurement density via Papoulis-Gerchberg
Bengisu Ozbay, Elvan Kuzucu, M. Gul, Dilan Ozturk, M. Tasci, A. Arisoy, H. Sirin, Ismail Uyanik
Pub Date: 2015-07-27. DOI: 10.1109/ICAR.2015.7251509
Light Detection and Ranging (LiDAR) devices are gaining importance as sources of sensory information in mobile-robot applications. However, existing solutions in the literature produce low-frequency output with large measurement delay when building a 3D range image of the environment. This paper presents the design and construction of a 3D range sensor based on rotating a 2D LiDAR around its pitch axis. Unlike previous approaches, we set the scan frequency to 5 Hz to support use on mobile robot platforms. Increasing the scan frequency, however, drastically reduces the measurement density of the 3D range images. We therefore propose two post-processing algorithms that increase measurement density while keeping the 3D scan frequency at an acceptable level. First, we use an extended version of the Papoulis-Gerchberg algorithm to achieve super-resolution on 3D range data by estimating the unmeasured samples. Second, we propose a probabilistic obstacle-reconstruction algorithm that accounts for the probabilities of the estimated (virtual) points and yields a very fast prediction of the existence and shape of obstacles.
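The core loop of the classic Papoulis-Gerchberg algorithm is short enough to sketch in 1D: alternately enforce band-limiting in the frequency domain and re-impose the known samples in the signal domain. This is the textbook algorithm on a synthetic signal, an assumption about the method's core rather than the authors' extended 3D implementation:

```python
# Minimal 1D Papoulis-Gerchberg sketch (textbook form, not the paper's
# extended 3D version): recover missing samples of a band-limited signal.
import numpy as np

def papoulis_gerchberg(samples, known, bandwidth, iters=200):
    """samples: signal with zeros at unknown positions; known: boolean mask;
    bandwidth: number of low-frequency FFT bins kept on each side."""
    x = samples.copy()
    for _ in range(iters):
        X = np.fft.fft(x)
        X[bandwidth:-bandwidth] = 0.0   # project onto the low-pass band
        x = np.real(np.fft.ifft(X))
        x[known] = samples[known]       # restore the measured values exactly
    return x

# Band-limited test signal with every other sample missing.
n = 128
t = np.arange(n)
true = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
known = np.zeros(n, dtype=bool)
known[::2] = True
observed = np.where(known, true, 0.0)
recovered = papoulis_gerchberg(observed, known, bandwidth=8)
```

For a sweeping LiDAR, the unknown positions correspond to angles skipped at the higher scan rate, which is why the density of the 3D range image can be restored in post-processing.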
Subjective difficulty and indicators of performance of joystick-based robot arm teleoperation with auditory feedback
N. Mavridis, G. Pierris, P. Gallina, N. Moustakas, A. Astaras
Pub Date: 2015-07-27. DOI: 10.1109/ICAR.2015.7251439
Joystick-based teleoperation is a dominant method for remotely controlling various types of robots, such as excavators, cranes, and space telerobots. Our ultimate goal is to create effective methods for training and assessing human operators of joystick-controlled robots. Towards that goal, we performed an extensive study with a total of 38 subjects on both a simulated and a physical robot, using either no feedback or auditory feedback. In this paper, we present the complete experimental setup and report on the 18 subjects who teleoperated the simulated robot. Multiple observables were recorded, including joystick and robot angles and timings, subjective measures of difficulty, personality and usability data, and automated analysis of the subjects' facial expressions and blink rate. Our initial results indicate, first, that the subjective difficulty of teleoperation with auditory feedback has smaller variance than teleoperation without feedback, and second, that the subjective difficulty of a task is linearly related to the logarithm of task completion time. Third, we introduce two indicators of operator performance, the Average Velocity of Robot Joints (AVRJ) and the Correct-to-Wrong-Joystick-Direction Ratio (CWJR), and show how they relate to accumulated user experience and task time. We conclude with a forward-looking discussion of future steps.
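The two indicators lend themselves to a short computational sketch. The definitions below are plausible readings of the indicator names, not the paper's exact formulas:

```python
# Sketch (assumed definitions, not the paper's exact formulas) of the two
# operator-performance indicators: AVRJ and CWJR.
import numpy as np

def avrj(joint_angles, dt):
    """Average Velocity of Robot Joints: mean absolute joint velocity
    over all joints and time steps, for angles sampled every dt seconds."""
    vel = np.abs(np.diff(joint_angles, axis=0)) / dt
    return float(vel.mean())

def cwjr(joystick_dirs, correct_dirs):
    """Correct-to-Wrong-Joystick-Direction Ratio: count of direction
    commands matching the correct direction, divided by the mismatches."""
    joystick_dirs = np.asarray(joystick_dirs)
    correct = int(np.sum(joystick_dirs == np.asarray(correct_dirs)))
    wrong = joystick_dirs.size - correct
    return correct / max(wrong, 1)  # avoid division by zero

# Hypothetical log: 3 samples of 2 joint angles (rad) at 10 Hz, and four
# joystick direction commands compared against the correct directions.
angles = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1]])
speed = avrj(angles, dt=0.1)
ratio = cwjr([1, 1, -1, 1], [1, 1, 1, 1])
```

Both quantities are cheap to compute online, which makes them usable as live feedback during operator training, in line with the paper's stated goal.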