Toward a Robot Computing an Online Estimation of the Quality of its Interaction with its Human Partner
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223464
Amandine Mayima, A. Clodic, R. Alami
When we perform a collaborative task with another human, we are able to tell, to a certain extent, how things are going and, more precisely, whether they are going well or not. This knowledge allows us to adapt our behavior. Therefore, we think it is desirable to provide robots with means to measure, in real time, the Quality of Interaction (QoI) with their human partners. To make this possible, we propose a model and a set of metrics for evaluating the QoI in collaborative tasks through measures of human engagement and online task effectiveness. This model and these metrics have been implemented and tested within the high-level controller of an entertainment robot deployed in a mall. The first results show significant differences in the computed QoI when the robot interacts with a fully compliant human, a confused human, and a non-cooperative one.
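The abstract does not give the aggregation formula, so the sketch below is only one plausible reading of the idea: combine an engagement score and an online task-effectiveness score into a single QoI value. The function name, the weighted-average form, and the [0, 1] score ranges are all assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the exact QoI formula is not published in the
# abstract; the weighted average and [0, 1] ranges below are assumptions.

def quality_of_interaction(engagement: float, effectiveness: float,
                           w_engagement: float = 0.5) -> float:
    """Combine a human-engagement score and an online task-effectiveness
    score, both assumed to lie in [0, 1], into a single QoI value."""
    assert 0.0 <= w_engagement <= 1.0
    return w_engagement * engagement + (1.0 - w_engagement) * effectiveness

# Example: a confused partner may stay engaged while task progress stalls.
print(quality_of_interaction(engagement=0.8, effectiveness=0.3))  # 0.55
```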
{"title":"Toward a Robot Computing an Online Estimation of the Quality of its Interaction with its Human Partner","authors":"Amandine Mayima, A. Clodic, R. Alami","doi":"10.1109/RO-MAN47096.2020.9223464","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223464","url":null,"abstract":"When we perform a collaborative task with another human, we are able to tell, to a certain extent, how things are going and more precisely if things are going well or not. This knowledge allows us to adapt our behavior. Therefore, we think it is desirable to provide robots with means to measure in real-time the Quality of the Interaction with their human partners. To make this possible, we propose a model and a set of metrics targeting the evaluation of the QoI in collaborative tasks through the measure of the human engagement and the online task effectiveness. These model and metrics have been implemented and tested within the high-level controller of an entertainment robot deployed in a mall. The first results show significant differences in the computed QoI when in interaction with a fully compliant human, a confused human and a non-cooperative one.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116766813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A robot instructor for the prevention and treatment of Sarcopenia in the aging population: a pilot study
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223557
Michela Bogliolo, G. Marchesi, Andrea Germinario, Emanuele Micheli, A. Canessa, Francesco Burlando, Francesco Vallone, A. Pilotto, M. Casadio
Sarcopenia is the loss of skeletal muscle tone, mass, and strength associated with aging and lack of exercise. Its incidence is increasing due to the growth in the number and proportion of older persons in the world’s population. To prevent the onset of Sarcopenia and to counteract its effects, it is important to regularly perform physical exercises involving the upper and lower limbs. One of the main problems is motivating elderly people to start a training routine, ideally in groups. This creates a need for innovative, stimulating solutions that target groups of people and are easy to use. The primary objective of this study was to develop and test a new method to answer this need. We designed a platform in which the humanoid robot Pepper guided a group of subjects through a set of physical exercises specifically designed to counteract Sarcopenia. The robot illustrated, demonstrated, and then performed the exercises simultaneously with the group. Moreover, using an additional external camera, Pepper monitored the execution of the exercises in real time, encouraging participants who slowed down or did not complete all the movements. Offline processing of the recorded data allowed estimating individual subjects’ performance. The platform was tested with 8 volunteers divided into two groups. The preliminary results were encouraging: participants reported a high degree of satisfaction with the robot-guided training. Moreover, participants moved almost synchronously, indicating that all of them followed the robot, maintaining engagement and respecting the correct timing of the exercises.
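As a rough illustration of how the reported near-synchrony could be quantified offline, the sketch below computes a normalized cross-correlation between robot and participant joint-angle time series. The signal representation, equal sampling rate, and lag window are our assumptions, not the authors' pipeline.

```python
# Rough offline synchrony estimate (not the authors' pipeline): normalized
# cross-correlation between two 1-D joint-angle signals sampled at the same rate.
import numpy as np

def movement_synchrony(robot: np.ndarray, human: np.ndarray, max_lag: int = 30):
    """Return (best correlation, lag in samples) between two 1-D signals.
    A high correlation at a small positive lag means the participant
    follows the robot closely with a short delay."""
    r = (robot - robot.mean()) / (robot.std() + 1e-9)
    h = (human - human.mean()) / (human.std() + 1e-9)
    lags = list(range(-max_lag, max_lag + 1))
    scores = []
    for lag in lags:
        a, b = (r[lag:], h[:len(h) - lag]) if lag >= 0 else (r[:lag], h[-lag:])
        n = min(len(a), len(b))
        scores.append(float(np.dot(a[:n], b[:n]) / n))
    best = int(np.argmax(scores))
    return scores[best], lags[best]
```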
{"title":"A robot instructor for the prevention and treatment of Sarcopenia in the aging population: a pilot study","authors":"Michela Bogliolo, G. Marchesi, Andrea Germinario, Emanuele Micheli, A. Canessa, Francesco Burlando, Francesco Vallone, A. Pilotto, M. Casadio","doi":"10.1109/RO-MAN47096.2020.9223557","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223557","url":null,"abstract":"Sarcopenia is the loss of skeletal muscle tone, mass and strength associated with aging and lack of exercise. Its incidence is increasing, due to the growth in the number and proportion of older persons in world’s population. To prevent the onset of Sarcopenia and to contrast its effects, it is important to perform on a regular basis physical exercises involving the upper and lower limbs. One of the main problems is to motivate elderly people to start a training routine, even better if in groups. This determines the need of innovative, stimulating solutions, targeting groups of people and easily usable. The primary objective of this study was to develop and test a new method for answering this need. We designed a platform where the humanoid robot Pepper guided a group of subjects to perform a set of physical exercises specifically designed to contrast Sarcopenia. The robot illustrated, demonstrated, and then performed the exercises simultaneously with the subjects’ group. Moreover, by using an additional external camera Pepper controlled in real time the execution of the exercises, encouraging Participants who slow down or did not complete all the movements. The processing offline of the recorded data allowed estimating individual subjects performance. The platform has been tested with 8 volunteers divided into two groups. The preliminary results were encouraging: participants demonstrated a high degree of satisfaction for the robot-guided training. Moreover, participants moved with almost synchronously, indicating that all of them followed the robot, maintaining engagement and respecting the correct timing of the exercises.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134628257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Structuring Human-Robot Interactions via Interaction Conventions
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223468
Ji Han, G. Ajaykumar, Ze Li, Chien-Ming Huang
Interaction conventions (e.g., using pinch gestures to zoom in and out) are designed to structure how users effectively work with an interactive technology. We contend in this paper that successful human-robot interactions may be achieved through an appropriate use of interaction conventions. We present a simple, natural interaction convention—"Put That Here"—for instructing a robot partner to perform pick-and-place tasks. This convention allows people to use common gestures and verbal commands to select objects of interest and to specify their intended location of placement. We implement an autonomous robot system capable of parsing and operating through this convention. Through a user study, we show that participants were easily able to adopt and use the convention to provide task specifications. Our results show that participants using this convention were able to complete tasks faster and experienced significantly lower cognitive load than when using only verbal commands to give instructions. Furthermore, when asked to give natural pick-and-place instructions to a human collaborator, the participants intuitively used task specification methods that paralleled our convention, incorporating both gestures and verbal commands to provide precise task-relevant information. We discuss the potential of interaction conventions in enabling productive human-robot interactions.
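To make the convention concrete, here is a hedged sketch of one way a parser could ground "that" and "here" by binding each demonstrative word to the pointing gesture nearest in time. The data structures and the nearest-in-time heuristic are assumptions, not the authors' system.

```python
# Hedged sketch of grounding "Put That Here": bind each demonstrative word
# to the pointing gesture closest in time (assumed heuristic, not the
# authors' published implementation).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Gesture:
    time: float                          # seconds, when the pointing peaked
    target: Tuple[float, float, float]   # 3-D point it selected

def parse_put_that_here(words: List[str], word_times: List[float],
                        gestures: List[Gesture]):
    """Return (pick_target, place_target) for a 'put that here' utterance."""
    def nearest(t: float) -> Gesture:
        return min(gestures, key=lambda g: abs(g.time - t))
    bindings = {}
    for word, t in zip(words, word_times):
        if word.lower() in ("that", "here"):
            bindings[word.lower()] = nearest(t).target
    return bindings.get("that"), bindings.get("here")

pick, place = parse_put_that_here(
    ["put", "that", "here"], [0.0, 0.4, 1.1],
    [Gesture(0.45, (0.3, 0.1, 0.0)), Gesture(1.05, (0.6, -0.2, 0.0))])
```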
{"title":"Structuring Human-Robot Interactions via Interaction Conventions","authors":"Ji Han, G. Ajaykumar, Ze Li, Chien-Ming Huang","doi":"10.1109/RO-MAN47096.2020.9223468","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223468","url":null,"abstract":"Interaction conventions (e.g., using pinch gestures to zoom in and out) are designed to structure how users effectively work with an interactive technology. We contend in this paper that successful human-robot interactions may be achieved through an appropriate use of interaction conventions. We present a simple, natural interaction convention—\"Put That Here\"—for instructing a robot partner to perform pick-and-place tasks. This convention allows people to use common gestures and verbal commands to select objects of interest and to specify their intended location of placement. We implement an autonomous robot system capable of parsing and operating through this convention. Through a user study, we show that participants were easily able to adopt and use the convention to provide task specifications. Our results show that participants using this convention were able to complete tasks faster and experienced significantly lower cognitive load than when using only verbal commands to give instructions. Furthermore, when asked to give natural pick-and-place instructions to a human collaborator, the participants intuitively used task specification methods that paralleled our convention, incorporating both gestures and verbal commands to provide precise task-relevant information. We discuss the potential of interaction conventions in enabling productive human-robot interactions.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130605660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Shared-Autonomy Approach to Goal Detection and Navigation Control of Mobile Collaborative Robots
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223583
Soheil Gholami, Virginia Ruiz Garate, E. Momi, A. Ajoudani
Autonomous goal detection and navigation control of mobile robots in remote environments can help relieve human operators of simple, monotonous tasks, allowing them to focus on more cognitively stimulating actions. This can result in better task performance while enabling user interfaces that are understandable by non-experts. However, full autonomy in unpredictable and dynamically changing environments is still far from becoming a reality. Thus, teleoperated systems integrating the supervisory role and instantaneous decision-making capacity of humans are still required for fast and reliable robotic operations. This work presents a novel shared-autonomy framework for goal detection and navigation control of mobile manipulators. The controller exploits human-gaze information to estimate the desired goal. This estimate is used together with control-pad data to predict user intention and to activate autonomous control for executing a target task. Using the control-pad device, a user can react to unexpected disturbances and halt the autonomous mode at any time. By releasing the control-pad device (e.g., after avoiding an unexpected obstacle), the controller smoothly switches back to the autonomous mode and navigates the robot towards the target. Experiments on reaching a target in the presence of unknown obstacles were carried out with seven subjects to evaluate the performance of the proposed shared-autonomy framework. The results demonstrate the accuracy, time efficiency, and ease of use of the presented shared-autonomy control framework.
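A minimal sketch of the arbitration logic described above, under assumed interfaces: any control-pad input overrides the robot, and releasing the pad hands control back to a proportional go-to-goal behavior toward the gaze-estimated goal. Gains, thresholds, and function signatures are illustrative only, not the paper's controller.

```python
# Minimal shared-autonomy arbitration sketch (assumed interfaces and gains).
from typing import List, Optional

def arbitrate(pad_input: Optional[List[float]], gaze_goal: List[float],
              pose: List[float], manual_gain: float = 1.0,
              auto_gain: float = 0.5) -> List[float]:
    """Return a velocity command for the mobile base."""
    if pad_input is not None and any(abs(v) > 1e-3 for v in pad_input):
        return [manual_gain * v for v in pad_input]    # human takes over
    error = [g - p for g, p in zip(gaze_goal, pose)]   # pad released: autonomy
    return [auto_gain * e for e in error]              # drive toward the goal
```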
{"title":"A Shared-Autonomy Approach to Goal Detection and Navigation Control of Mobile Collaborative Robots","authors":"Soheil Gholami, Virginia Ruiz Garate, E. Momi, A. Ajoudani","doi":"10.1109/RO-MAN47096.2020.9223583","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223583","url":null,"abstract":"Autonomous goal detection and navigation control of mobile robots in remote environments can help to unload human operators from simple, monotonous tasks allowing them to focus on more cognitively stimulating actions. This can result in better task performances, while creating user-interfaces that are understandable by non-experts. However, full autonomy in unpredictable and dynamically changing environments is still far from becoming a reality. Thus, teleoperated systems integrating the supervisory role and instantaneous decision-making capacity of humans are still required for fast and reliable robotic operations. This work presents a novel shared-autonomy framework for goal detection and navigation control of mobile manipulators. The controller exploits human-gaze information to estimate the desired goal. This is used together with control-pad data to predict user intention, and to activate the autonomous control for executing a target task. Using the control-pad device, a user can react to unexpected disturbances and halt the autonomous mode at any time. By releasing the control-pad device (e.g., after avoiding an instantaneous obstacle) the controller smoothly switches back to the autonomous mode and navigates the robot towards the target. Experiments for reaching a target goal in the presence of unknown obstacles are carried out to evaluate the performance of the proposed shared-autonomy framework over seven subjects. The results prove the accuracy, time-efficiency, and ease-of-use of the presented shared-autonomy control framework.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133461734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context Dependent Trajectory Generation using Sequence-to-Sequence Models for Robotic Toilet Cleaning
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223341
Pin-Chu Yang, Nishanth Koganti, G. A. G. Ricardez, Masaki Yamamoto, J. Takamatsu, T. Ogasawara
A robust, easy-to-deploy robot for service tasks in a real environment is difficult to construct. Record-and-playback (R&P) is a method used to teach motor skills to robots for performing service tasks. However, R&P methods do not scale to challenging tasks, where even slight changes in the environment, such as localization errors, would require either trajectory modification or a new demonstration. In this paper, we propose a Sequence-to-Sequence (Seq2Seq) neural network model to generate robot trajectories in configuration space given a context variable based on real-world measurements in Cartesian space. We use the offset between a target pose and the actual pose after localization as the context variable. The model is trained using a few expert demonstrations collected via teleoperation. We apply the proposed method to toilet cleaning, where the robot has to clean the surface of a toilet bowl using a compliant end-effector in a constrained toilet setting. In the experiments, the model is given a novel offset context and generates a modified robot trajectory for that context. We demonstrate that the model is able to generate trajectories for unseen setups and that the executed trajectories result in cleaning of the toilet bowl.
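A compact PyTorch sketch in the spirit of the paper: an encoder summarizes a demonstrated joint-space trajectory, and a decoder autoregressively emits waypoints conditioned on a Cartesian offset context. Layer sizes, dimensions, and the exact conditioning scheme are assumptions; the authors' network differs in detail.

```python
# Sketch of a context-conditioned Seq2Seq trajectory generator (assumed
# architecture and dimensions, not the paper's exact model).
import torch
import torch.nn as nn

class TrajectorySeq2Seq(nn.Module):
    def __init__(self, n_joints=7, ctx_dim=6, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(n_joints, hidden, batch_first=True)
        self.decoder = nn.GRU(n_joints + ctx_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_joints)

    def forward(self, demo_traj, context, horizon):
        """demo_traj: (B, T, n_joints) demonstrated trajectory;
        context: (B, ctx_dim) Cartesian offset after localization."""
        _, h = self.encoder(demo_traj)               # summarize the demonstration
        step = demo_traj[:, -1:, :]                  # start from the last waypoint
        outputs = []
        for _ in range(horizon):                     # autoregressive decoding
            inp = torch.cat([step, context.unsqueeze(1)], dim=-1)
            y, h = self.decoder(inp, h)
            step = self.out(y)                       # next joint configuration
            outputs.append(step)
        return torch.cat(outputs, dim=1)             # (B, horizon, n_joints)

model = TrajectorySeq2Seq()
traj = model(torch.randn(1, 50, 7), torch.randn(1, 6), horizon=20)
```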
{"title":"Context Dependent Trajectory Generation using Sequence-to-Sequence Models for Robotic Toilet Cleaning","authors":"Pin-Chu Yang, Nishanth Koganti, G. A. G. Ricardez, Masaki Yamamoto, J. Takamatsu, T. Ogasawara","doi":"10.1109/RO-MAN47096.2020.9223341","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223341","url":null,"abstract":"A robust, easy-to-deploy robot for service tasks in a real environment is difficult to construct. Record-and-playback (R&P) is a method used to teach motor-skills to robots for performing service tasks. However, R&P methods do not scale to challenging tasks where even slight changes in the environment, such as localization errors, would either require trajectory modification or a new demonstration. In this paper, we propose a Sequence-to-Sequence (Seq2Seq) based neural network model to generate robot trajectories in configuration space given a context variable based on real-world measurements in Cartesian space. We use the offset between a target pose and the actual pose after localization as the context variable. The model is trained using a few expert demonstrations collected using teleoperation. We apply our proposed method to the task of toilet cleaning where the robot has to clean the surface of a toilet bowl using a compliant end-effector in a constrained toilet setting. In the experiments, the model is given a novel offset context and it generates a modified robot trajectory for that context. We demonstrate that our proposed model is able to generate trajectories for unseen setups and the executed trajectory results in cleaning of the toilet bowl.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116191750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robot Mirroring: Promoting Empathy with an Artificial Agent by Reflecting the User’s Physiological Affective States
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223598
Monica Perusquía-Hernández, Marisabel Cuberos-Balda, David Antonio Gómez Jáuregui, Diego F. Paez-Granados, Felix Dollack, José Salazar
Self-tracking aims to increase awareness, decrease undesired behaviors, and ultimately lead towards a healthier lifestyle. However, inappropriate communication of self-tracking results might cause the opposite effect. Subtle self-tracking feedback is an alternative that can be provided with the aid of an artificial agent representing the self. Hence, we propose a wearable pet that reflects the user’s affective states through visual and haptic feedback. By eliciting empathy and fostering helping behaviors towards it, users would indirectly help themselves. A wearable prototype was built, and three user studies were performed to evaluate the appropriateness of the proposed affective representations. Visual representations using facial and body cues were clear for valence and less clear for arousal. Haptic interoceptive patterns emulating heart-rate levels matched the desired feedback urgency levels, up to a saturation frequency. The integrated visuo-haptic representations matched participants’ own affective experiences. From the results, we derived three design guidelines for future robot-mirroring wearable systems: physical embodiment, interoceptive feedback, and customization.
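As an illustration of a saturating urgency mapping, the sketch below converts an arousal estimate into a heart-beat-like haptic pulse rate that rises with urgency and caps at a saturation frequency. The rest and saturation values are made-up numbers, not the study's calibration.

```python
# Made-up calibration for illustration: arousal in [0, 1] -> pulse rate in Hz.
def pulse_rate_hz(arousal: float, rest_hz: float = 1.0,
                  saturation_hz: float = 3.0) -> float:
    """Higher arousal -> faster pulses, capped at the saturation frequency."""
    arousal = min(max(arousal, 0.0), 1.0)  # clamp out-of-range estimates
    return rest_hz + arousal * (saturation_hz - rest_hz)
```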
{"title":"Robot Mirroring: Promoting Empathy with an Artificial Agent by Reflecting the User’s Physiological Affective States","authors":"Monica Perusquía-Hernández, Marisabel Cuberos-Balda, David Antonio Gómez Jáuregui, Diego F. Paez-Granados, Felix Dollack, José Salazar","doi":"10.1109/RO-MAN47096.2020.9223598","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223598","url":null,"abstract":"Self-tracking aims to increase awareness, decrease undesired behaviors, and ultimately lead towards a healthier lifestyle. However, inappropriate communication of self- tracking results might cause the opposite effect. Subtle self- tracking feedback is an alternative that can be provided with the aid of an artificial agent representing the self. Hence, we propose a wearable pet that reflects the user’s affective states through visual and haptic feedback. By eliciting empathy and fostering helping behaviors towards it, users would indirectly help themselves. A wearable prototype was built, and three user studies performed to evaluate the appropriateness of the proposed affective representations. Visual representations using facial and body cues were clear for valence and less clear for arousal. Haptic interoceptive patterns emulating heart-rate levels matched the desired feedback urgency levels with a saturation frequency. The integrated visuo-haptic representations matched to participants own affective experience. From the results, we derived three design guidelines for future robot mirroring wearable systems: physical embodiment, interoceptive feedback, and customization.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123416926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Living-Lab and Experimental Workshops for Design of I-RobEka Assistive Shopping Robot: ELSI Aspects with MEESTAR
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223507
Vera Fink, Andy Börner, Maximilian Eibl
If robotic assistance is to be used in the near future by aging adults, it must have an acceptable design. In the process of applying a MEESTAR model in a project to measure, predict, and justify the acceptance of robot assistants in a supermarket setting, we investigated the ethical ramifications of these robots. The method proved very well suited for participatory technology development. Does the appearance of the robot affect acceptance? The aim of the exploratory workshops was to gain insights into this question before evaluation in a tangible environment. Our research approach, as well as the construction of the robot itself, differs significantly from the traditional design and evaluation procedure.
{"title":"Living-Lab and Experimental Workshops for Design of I-RobEka Assistive Shopping Robot: ELSI Aspects with MEESTAR","authors":"Vera Fink, Andy Börner, Maximilian Eibl","doi":"10.1109/RO-MAN47096.2020.9223507","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223507","url":null,"abstract":"If robotic assistance is to be used in the near future by aging adults, it must have an acceptable design. In the process of applying a MEESTAR model in project to measure, predict, and justify the acceptance of robot assistants in a supermarket setting, we investigated the ethical ramifications of these robots. The method used was very well suited for participatory technology development. Does the appearance of the robot affect acceptance? The aim of the exploratory workshops was to gain insights into this question before evaluation in tangible environment. Our research approach, in addition to the construction of the robot, poses a significant difference to the traditional design and evaluation procedure.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123567732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using a Personalised Socially Assistive Robot for Cardiac Rehabilitation: A Long-Term Case Study
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223491
Bahar Irfan, Nathalia Céspedes Gómez, Jonathan Casas, Emmanuel Senft, Luisa F. Gutiérrez, Mónica Rincon-Roncancio, M. Múnera, Tony Belpaeme, C. Cifuentes
This paper presents a longitudinal case study of Robot-Assisted Therapy for cardiac rehabilitation. The patient, a 60-year-old male who suffered a myocardial infarction and underwent angioplasty, successfully recovered after 35 sessions of rehabilitation with a social robot over 18 weeks. The sessions took place directly at the clinic and relied on an exercise regime designed by the clinicians and delivered with the support of a social robot and a sensor suite. The robot monitored the patient’s progress and provided personalised encouragement and feedback. We discuss the recovery of the patient and illustrate how the use of a social robot, its sensory systems, and its personalised interaction was instrumental in maintaining engagement with the programme and in the patient’s recovery. Of note is a critical event that was promptly detected by the robot, which allowed the medical staff to take fast intervention measures and refer the patient for further surgery.
{"title":"Using a Personalised Socially Assistive Robot for Cardiac Rehabilitation: A Long-Term Case Study","authors":"Bahar Irfan, Nathalia Céspedes Gómez, Jonathan Casas, Emmanuel Senft, Luisa F. Gutiérrez, Mónica Rincon-Roncancio, M. Múnera, Tony Belpaeme, C. Cifuentes","doi":"10.1109/RO-MAN47096.2020.9223491","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223491","url":null,"abstract":"This paper presents a longitudinal case study of Robot Assisted Therapy for cardiac rehabilitation. The patient, who is a 60-year old male that suffered a myocardial infarction and received angioplasty surgery, successfully recovered after 35 sessions of rehabilitation with a social robot, lasting 18 weeks. The sessions took place directly at the clinic and relied on an exercise regime which was designed by the clinicians and delivered with the support of a social robot and a sensor suite. The robot monitored the patient’s progress, and provided personalised encouragement and feedback. We discuss the recovery of the patient and illustrate how the use of a social robot, its sensory systems and its personalised interaction was instrumental to maintain engagement with the programme and to the patient’s recovery. Of note is a critical event that was promptly detected by the robot, which allowed fast intervention measures to be taken by the medical staff for the referral of the patient for further surgery.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125398843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Leader-Follower Behavior in Human-Robot Collaboration
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223548
E. V. Zoelen, E. Barakova, G.W.M. Rauterberg
As developments in artificial intelligence and robotics progress, more tasks arise in which humans and robots need to collaborate. With changing levels of complementarity in their capabilities, leadership roles will constantly shift. The research presented here explores how people adapt their behavior to initiate or accommodate continuous leadership shifts in human-robot collaboration, and how this influences trust and understanding. We conducted an experiment in which participants were confronted with seemingly conflicting interests between robot and human in a collaborative task. This was embedded in a physical navigation task with a robot on a leash, inspired by the interaction between guide dogs and blind people. Explicit and implicit feedback from the task and the robot partner prompted participants to reconsider when to lead and when to follow, though the outcome differed across participants. Overall, participants evaluated the collaboration more positively over time, while those who took the lead more often rated it more negatively than the other participants.
{"title":"Adaptive Leader-Follower Behavior in Human-Robot Collaboration","authors":"E. V. Zoelen, E. Barakova, G.W.M. Rauterberg","doi":"10.1109/RO-MAN47096.2020.9223548","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223548","url":null,"abstract":"As developments in artificial intelligence and robotics progress, more tasks arise in which humans and robots need to collaborate. With changing levels of complementarity in their capabilities, leadership roles will constantly shift. The research presented explores how people adapt their behavior to initiate or accommodate continuous leadership shifts in human-robot collaboration and how this influences trust and understanding. We conducted an experiment in which participants were confronted with seemingly conflicting interests between robot and human in a collaborative task. This was embedded in a physical navigation task with a robot on a leash, inspired by the interaction between guide dogs and blind people. Explicit and implicit feedback factors from the task and the robot partner proved to trigger humans to reconsider when to lead and when to follow, while the outcome of this differed across participants. Overall the participants evaluated the collaboration more positively over time, while participants who took the lead more often valued the collaboration more negatively than other participants.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124292858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recurrent Neural Networks for Inferring Intentions in Shared Tasks for Industrial Collaborative Robots
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223587
Marc Maceira, Alberto Olivares Alarcos, G. Alenyà
Industrial robots are evolving to work closely with humans in shared spaces. Hence, robotic tasks are increasingly shared between humans and robots in collaborative settings. To enable fluent human-robot collaboration, robots need to predict and respond in real time to workers’ intentions. We present a method for early decision-making using force information. Forces are provided naturally by the user through the manipulation of a shared object in a collaborative task. The proposed algorithm uses a recurrent neural network to recognize the operator’s intentions. The algorithm is evaluated in terms of action recognition on a force dataset. It excels at detecting intentions when only partial data is available, enabling early detection and facilitating a quick robot reaction.
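A hedged PyTorch sketch of the early-decision idea: a recurrent network emits per-timestep intent probabilities over a force/torque stream, and a decision is taken as soon as one class crosses a confidence threshold. Input dimensionality, network size, and the threshold are assumptions rather than the paper's configuration.

```python
# Sketch of early intention recognition from partial force readings
# (assumed shapes and threshold; the paper's network and data differ).
import torch
import torch.nn as nn

class ForceIntentRNN(nn.Module):
    def __init__(self, in_dim=6, hidden=64, n_intents=4):
        super().__init__()
        self.rnn = nn.LSTM(in_dim, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, n_intents)

    def forward(self, forces):                 # (B, T, 6) force/torque samples
        out, _ = self.rnn(forces)
        return self.cls(out)                   # per-timestep intent logits

def decide_early(model, forces, threshold=0.9):
    """Return (intent, timestep): stop as soon as one intent's probability
    crosses the threshold, enabling a reaction before the motion ends."""
    probs = torch.softmax(model(forces), dim=-1)[0]   # (T, n_intents)
    for t, p in enumerate(probs):
        conf, intent = p.max(dim=0)
        if conf.item() >= threshold:
            return intent.item(), t            # early decision
    return probs[-1].argmax().item(), probs.shape[0] - 1

model = ForceIntentRNN()
intent, t = decide_early(model, torch.randn(1, 100, 6))
```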
{"title":"Recurrent Neural Networks for Inferring Intentions in Shared Tasks for Industrial Collaborative Robots","authors":"Marc Maceira, Alberto Olivares Alarcos, G. Alenyà","doi":"10.1109/RO-MAN47096.2020.9223587","DOIUrl":"https://doi.org/10.1109/RO-MAN47096.2020.9223587","url":null,"abstract":"Industrial robots are evolving to work closely with humans in shared spaces. Hence, robotic tasks are increasingly shared between humans and robots in collaborative settings. To enable a fluent human robot collaboration, robots need to predict and respond in real-time to worker’s intentions. We present a method for early decision using force infor-mation. Forces are provided naturally by the user through the manipulation of a shared object in a collaborative task. The proposed algorithm uses a recurrent neural network to recognize operator’s intentions. The algorithm is evaluated in terms of action recognition on a force dataset. It excels at detecting intentions when partial data is provided, enabling early detection and facilitating a quick robot reaction.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"335 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124303362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}